Pieces now powered by Ollama for enhanced local model integration

Submitted by
Style Pass
2025-01-10 15:00:19

Chat with LLMs offline or in secure environments using Pieces. Seamlessly switch between cloud and local models, now powered by Ollama.

Pieces is the only AI developer assistant with a deep, managed integration for local models, letting you chat with an LLM offline or in privacy- or security-focused environments.

This deep integration is completely seamless for you as a user. You can select the model of your choice, from massive cloud models like Gemini, Claude, or GPT-4o, then with a couple of clicks switch to an on-device model running locally, with access to all the context you would want, from files and folders of code to the Pieces Long-Term Memory.

You can even do this mid-conversation: start a discussion with Claude, then switch to Llama if you go offline (say, on a plane) and continue the same conversation with the same context.

Local model support has worked for most users, but a few have been unable to run models on their GPU. By delegating local inference to Ollama, those hiccups should go away.
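Pieces handles all of this for you, but it helps to see what "running locally with Ollama" means under the hood. Ollama serves a small REST API on `localhost:11434`, and any client can chat with a locally pulled model through its `/api/chat` endpoint. Here is a minimal sketch using only the Python standard library; the model name `llama3.2` is an assumption, so substitute any model you have pulled with `ollama pull <name>`:

```python
import json
from urllib import request

OLLAMA_HOST = "http://localhost:11434"  # Ollama's default local endpoint

def build_chat_request(messages, model="llama3.2"):
    # "llama3.2" is a placeholder -- use any model pulled
    # locally with `ollama pull <name>`.
    return {"model": model, "messages": messages, "stream": False}

def chat_offline(messages, model="llama3.2"):
    # POST /api/chat is part of Ollama's REST API; with
    # "stream": False it returns one complete JSON response.
    payload = json.dumps(build_chat_request(messages, model)).encode()
    req = request.Request(
        f"{OLLAMA_HOST}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

Because the whole round trip stays on your machine, a call like `chat_offline([{"role": "user", "content": "Explain mutexes."}])` works with no internet connection at all, which is exactly the property Pieces builds on.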
