The dolphin-2.2-yi-34b model is based on Yi, a 34B-parameter LLM released by the 01.AI team. Yi was converted to the Llama 2 format by Charles Goddard and then further fine-tuned by Eric Hartford.
We will use the Rust + Wasm stack to develop and deploy applications for this model. There are no complex Python packages or C++ toolchains to install! See why we choose this tech stack.
Step 3: Download a cross-platform, portable Wasm file for the chat app, which lets you chat with the model on the command line. The Rust source code for the app is here.
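With the model file and the Wasm app in the same directory, the chat app is launched through WasmEdge's GGML plugin. The sketch below shows what that invocation typically looks like; the model file name, the `default` alias, and the `GGML:AUTO` plugin triple are assumptions based on WasmEdge's `--nn-preload` convention, and the script only prints the command so you can review it before running it for real.

```shell
# Assumed file names -- replace with the GGUF model and Wasm app you downloaded.
MODEL="dolphin-2.2-yi-34b-ggml-model-q4_0.gguf"
APP="llama-chat.wasm"

# --dir .:. maps the current directory into the Wasm sandbox;
# --nn-preload registers the model file under the alias "default"
# so the GGML (llama.cpp) plugin can load it.
CMD="wasmedge --dir .:. --nn-preload default:GGML:AUTO:${MODEL} ${APP}"
echo "${CMD}"
```

Run the printed command directly once you have confirmed the file names match what is on disk.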
The Yi-34B model is an impressive bilingual LLM with a large context length. It is then fine-tuned with the Dolphin dataset to “learn” reasoning and uncensor its biases. The result is an excellent LLM that can be further tuned for personality and alignment.
An OpenAI-compatible web API allows the model to work with a large ecosystem of LLM tools and agent frameworks, such as flows.network, LangChain, and LlamaIndex.
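Because the API follows the OpenAI shape, any client that can POST a chat completion request works unchanged. The sketch below builds such a request body; the port (`8080`) and route (`/v1/chat/completions`) are assumptions based on the OpenAI chat completions convention, so adjust them to match your local server, and the `curl` call is left commented out since it requires the API server to be running.

```shell
# Write an OpenAI-style chat completion request body to a local file.
cat > request.json <<'EOF'
{
  "model": "dolphin-2.2-yi-34b",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is WasmEdge?"}
  ]
}
EOF

# With the API server running locally, send it like this:
#   curl -s http://localhost:8080/v1/chat/completions \
#     -H 'Content-Type: application/json' \
#     -d @request.json
```

The same `request.json` works with LangChain or LlamaIndex clients by pointing their OpenAI base URL at the local server.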