LM Studio 0.3.6 | LM Studio Blog

2025-01-08 12:00:07

This means you can use LM Studio with any framework that currently knows how to use OpenAI tools, and utilize local models for tool use instead. This capability is in beta and we'd love to get your bug reports and feedback.
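As a minimal sketch of what this looks like in practice, the request below targets LM Studio's OpenAI-compatible local server using only the Python standard library. It assumes the server is running at its default address (`http://localhost:1234/v1`); the model name and the `get_weather` tool are hypothetical examples, not part of this release.

```python
# Sketch: tool use against LM Studio's OpenAI-compatible local server.
# Assumes the server is running at the default http://localhost:1234/v1;
# the model name and the get_weather tool are illustrative examples.
import json
import urllib.request

# A standard OpenAI-style tool definition; a framework that already
# emits this schema can point at LM Studio without changes.
payload = {
    "model": "qwen2.5-7b-instruct",  # example; use whatever model you have loaded
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

def call_local_model(payload: dict) -> dict:
    """POST a chat completion request to the local LM Studio server."""
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # With a tool-capable model loaded, the reply's message may include a
    # tool_calls entry naming get_weather with arguments like {"city": "Paris"}.
    result = call_local_model(payload)
    print(result["choices"][0]["message"])
```

Because the endpoint mirrors OpenAI's chat completions shape, swapping the base URL is typically the only change a tool-using framework needs.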

Among the other new features in 0.3.6 is support for new vision-input models: the Qwen2VL family and Qwen/QVQ (a large vision + reasoning model), in both LM Studio's MLX and llama.cpp engines.
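A hedged sketch of sending an image to one of these vision models through the same OpenAI-compatible local server, using the standard `image_url` content-part format with a base64 data URL. The server address is LM Studio's default; the model name and file path are examples.

```python
# Sketch: image input to a vision model via LM Studio's local server.
# Assumes the server runs at http://localhost:1234/v1 with a vision
# model loaded; the model name and file path are examples.
import base64
import json
import urllib.request

def build_vision_payload(image_bytes: bytes, prompt: str) -> dict:
    """Build an OpenAI-style chat request with one text and one image part."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "qwen2-vl-7b-instruct",  # example vision model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

def describe_image(path: str, prompt: str = "Describe this image.") -> dict:
    """Read an image file and POST it to the local chat completions endpoint."""
    with open(path, "rb") as f:
        payload = build_vision_payload(f.read(), prompt)
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    reply = describe_image("photo.png")  # example path
    print(reply["choices"][0]["message"]["content"])
```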

Temporary note: in-app updates from 0.3.5 (stable) will begin later this week as we transition to a new updater system. Updates are already fully operational for LM Studio 0.3.5 b10 and newer. Until then, install LM Studio manually to get the latest version.

One of the high notes of this release is that we've automated our entire build and release pipeline, which means it's going to be much easier to release new LM Studio app and engine updates. We're very excited about that.

In most cases, you'll be able to update your llama.cpp or MLX engine as soon as updates become available, without waiting for an LM Studio app update.
