
How We Saved 10s of Thousands of Dollars Deploying Low Cost Open Source AI Technologies At Scale with Kubernetes

Submitted by Style Pass, 2024-05-15 02:00:07

When you first start building applications with generative AI, you'll likely end up using OpenAI's API at some point in your project's journey. And for good reason! Their API is well structured, fast, and supported by great libraries. At a small scale, or when you're just getting started, using OpenAI can be relatively economical. There's also a wealth of excellent educational material that walks you through building AI applications and understanding complex techniques using OpenAI's API.

One of my personal favorites among OpenAI's resources these days is the OpenAI Cookbook: it's an excellent way to learn how their different models work, how to take advantage of the many cutting-edge techniques in the AI space, and how to integrate your own data with AI workloads.

However, as soon as you need to scale up your generative AI operations, you'll hit a significant obstacle: cost. Once you start generating thousands (and eventually tens of thousands) of texts via GPT-4, or even the lower-cost GPT-3.5 models, you'll find your OpenAI bill growing into the thousands of dollars every month.
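To see how quickly this adds up, here is a minimal back-of-the-envelope cost sketch. The per-token rates, token counts, and request volume below are illustrative assumptions, not current OpenAI pricing; substitute the numbers from your provider's pricing page and your own workload.

```python
def monthly_cost(generations_per_month, avg_prompt_tokens, avg_completion_tokens,
                 prompt_rate_per_1k, completion_rate_per_1k):
    """Estimate monthly API spend in dollars for a text-generation workload.

    Rates are expressed as dollars per 1,000 tokens, billed separately for
    prompt (input) and completion (output) tokens.
    """
    prompt_cost = generations_per_month * avg_prompt_tokens / 1000 * prompt_rate_per_1k
    completion_cost = generations_per_month * avg_completion_tokens / 1000 * completion_rate_per_1k
    return prompt_cost + completion_cost

# 50,000 generations a month, averaging ~1,000 prompt tokens and ~500
# completion tokens each, at hypothetical rates of $0.03/1K prompt and
# $0.06/1K completion tokens:
print(monthly_cost(50_000, 1_000, 500, 0.03, 0.06))  # 3000.0
```

Even at modest volumes, the bill scales linearly with request count and token length, which is exactly the pressure that pushes teams toward self-hosted open source models.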
