gradientai / Llama-3-70B-Instruct-Gradient-1048k

Submitted by Style Pass
2024-05-04 23:00:07

Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at contact@gradient.ai.

This model extends Llama-3 70B's context length from 8k to over 1048k tokens. It was developed by Gradient, with compute sponsored by Crusoe Energy, and demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 34M tokens for this stage, and ~430M tokens total across all stages, which is less than 0.003% of Llama-3's original pre-training data.
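To illustrate why adjusting RoPE theta helps, here is a minimal sketch of how rotary-embedding frequencies depend on the theta base. The function and the specific scaled theta value below are illustrative assumptions, not the actual configuration of this model, which is not stated here.

```python
import math

def rope_frequencies(dim, theta):
    # Per-pair rotation frequencies for rotary position embeddings:
    # freq_i = theta^(-2i/dim) for i in [0, dim/2)
    return [theta ** (-2 * i / dim) for i in range(dim // 2)]

# Llama-3 uses a base theta of 500000; long-context extensions raise theta
# so rotation angles at very large positions stay in ranges resembling
# those seen during short-context pre-training.
base = rope_frequencies(128, 500_000.0)
scaled = rope_frequencies(128, 500_000.0 * 32)  # hypothetical scaled theta

# Raising theta lowers every non-trivial frequency, stretching each
# rotation period and letting the model extrapolate to longer contexts.
assert all(s < b for s, b in zip(scaled[1:], base[1:]))
```

The intuition: with a larger theta, a position near 1M tokens produces rotation angles comparable to what a much nearer position produced under the original base, so little retraining is needed.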

We build on top of the EasyContext Blockwise RingAttention library [5] to scalably and efficiently train on very long contexts on Crusoe Energy's high-performance L40S cluster.

We layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters, mitigating the network bottlenecks that arise from passing many KV blocks between devices.
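The KV-block passing described above can be sketched as follows. This is a single-machine simulation of blockwise Ring Attention, not the EasyContext implementation: each "device" permanently holds one query block while KV blocks rotate around the ring, and attention is accumulated with the online softmax so no device ever materializes the full attention matrix.

```python
import numpy as np

def ring_attention(q_blocks, k_blocks, v_blocks):
    # Simulated ring: device i holds q_blocks[i]; at step s it sees KV
    # block (i + s) mod n, as if blocks were rotating between devices.
    n = len(q_blocks)
    d = q_blocks[0].shape[-1]
    out = [np.zeros_like(q) for q in q_blocks]
    row_max = [np.full(q.shape[0], -np.inf) for q in q_blocks]
    row_sum = [np.zeros(q.shape[0]) for q in q_blocks]
    for step in range(n):
        for i in range(n):
            j = (i + step) % n  # KV block currently resident on device i
            s = q_blocks[i] @ k_blocks[j].T / np.sqrt(d)
            # Online softmax update: rescale previous partial results.
            new_max = np.maximum(row_max[i], s.max(axis=1))
            scale = np.exp(row_max[i] - new_max)
            p = np.exp(s - new_max[:, None])
            row_sum[i] = row_sum[i] * scale + p.sum(axis=1)
            out[i] = out[i] * scale[:, None] + p @ v_blocks[j]
            row_max[i] = new_max
    return [o / z[:, None] for o, z in zip(out, row_sum)]
```

Because each device only ever holds one KV block at a time, memory per device stays constant as the ring grows, at the cost of the inter-device KV traffic the post describes.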
