Opera One Developer becomes the first browser with built-in local LLMs – ready for you to test

Using an LLM, or Large Language Model, typically requires sending your data to a server. Local LLMs are different: they let you process your prompts directly on your machine, without the data you submit ever leaving your computer.

Today, as part of our AI Feature Drops program, we are adding experimental support for 150 local LLM variants from roughly 50 model families to our browser. This marks the first time local LLMs can be easily accessed and managed from a major browser through a built-in feature. Among them, you will find:

Using local large language models means users’ data is kept locally, on their own device, allowing them to use AI without needing to send information to a server. We are testing this new set of local LLMs in the developer stream of Opera One as part of our AI Feature Drops program, which lets you try early, often experimental versions of our AI feature set.
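
To make the idea concrete, here is a minimal sketch of what "processing a prompt locally" looks like in practice: the prompt goes to a model server running on your own machine rather than to a remote API. The endpoint and payload follow the convention of the open-source Ollama runtime (http://localhost:11434/api/generate); the URL, the model tag, and the function name are illustrative assumptions, not a description of Opera's internal implementation.

```typescript
// Send a prompt to a locally running model server; nothing leaves the machine.
// Assumes an Ollama-style runtime listening on localhost:11434 (illustrative only).
async function promptLocalModel(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma:2b", // assumed model tag; any locally installed model would do
      prompt,
      stream: false,     // return a single JSON object instead of a token stream
    }),
  });
  const data = await response.json();
  return data.response;  // the generated text, produced entirely on-device
}

promptLocalModel("Summarize this page in one sentence.").then(console.log);
```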

As of today, the Opera One Developer community is getting the opportunity to select the model they want to process their input with, which is quite beneficial to the early adopter community that might have a preference for one model over another. This is so bleeding edge that it might even break. But innovation wouldn’t be fun without experimental projects, would it?
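
The "pick your model" idea boils down to querying a local runtime for the models it has installed and letting the user choose which one handles a given prompt. The sketch below shows that under the same assumptions as above: the GET /api/tags endpoint follows the Ollama convention and is not something Opera has documented for this feature.

```typescript
// List the models installed in a local runtime so a user can pick one per prompt.
// Assumes an Ollama-style runtime on localhost:11434 (illustrative only).
async function listLocalModels(): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/tags");
  const data = await res.json();
  // data.models is an array of installed models, e.g. [{ name: "mixtral:8x7b", ... }]
  return data.models.map((m: { name: string }) => m.name);
}

listLocalModels().then((names) => {
  console.log("Locally available models:", names.join(", "));
});
```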
