

This project provides a Dockerized Nginx server configured to act as a reverse proxy for Ollama, a local AI model serving platform. The proxy includes built-in authentication using a custom Authorization header and exposes the Ollama service over the internet using a Cloudflare Tunnel.
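The core idea can be sketched as an Nginx `server` block that rejects any request lacking the expected Authorization header before proxying to Ollama. The token and listen port below are placeholders, and 11434 is Ollama's default port; the project's actual directives live in `nginx-default.conf.template`.

```nginx
server {
    listen 80;

    location / {
        # Reject requests whose Authorization header does not match
        # the expected token (placeholder value shown here).
        if ($http_authorization != "Bearer changeme") {
            return 401;
        }

        # Forward authenticated requests to the local Ollama instance.
        proxy_pass http://localhost:11434;
        proxy_set_header Host $host;
    }
}
```

Because the Cloudflare Tunnel connects outbound from the container to Cloudflare's edge, no inbound ports need to be opened on the host.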

For systems that expect OpenAI's model names to "be there", it is useful to alias Ollama models by copying them to a new name with the Ollama CLI.
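Using Ollama's `cp` subcommand, the rename is a single command (the source model tag is the one used in this project's example):

```shell
# Create a copy of the local model under an OpenAI-style name.
ollama cp llama3.2:3b-instruct-fp16 gpt-4o
```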

This command copies the model llama3.2:3b-instruct-fp16 to a new model named gpt-4o, making it easier to reference in API requests.

This project relies on Cloudflare as a middleman for the Cloudflare Tunnel. If you trust Cloudflare, the setup ensures that no one else can eavesdrop on your traffic or access your data.

If privacy beyond this is a concern, note that local traffic within the container is not encrypted, although it is isolated from external networks.

Note: The Nginx configuration has been carefully set up to handle CORS headers appropriately. You can refer to the nginx-default.conf.template file to understand the specifics.
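As a rough illustration of what such CORS handling typically involves (the exact headers and methods are project decisions; consult `nginx-default.conf.template` for the real configuration):

```nginx
location / {
    # Allow browser clients from any origin to call the API;
    # "always" ensures the headers are also sent on error responses.
    add_header Access-Control-Allow-Origin "*" always;
    add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;

    # Answer CORS preflight requests directly instead of proxying them.
    if ($request_method = OPTIONS) {
        return 204;
    }

    proxy_pass http://localhost:11434;
}
```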
