
Can we solve serverless cold starts?

submitted by
Style Pass
2021-09-28 15:00:10

Like everything good in life, serverless also comes with its downsides. One of them is the infamous “cold start”. In this article, we’ll cover what cold starts are, what influences serverless startup latency, and how to mitigate their impact on our applications.

A cold start refers to the state of our function when serving a particular invocation request. A serverless function is served by one or more micro-containers. When a request comes in, the platform checks whether there is a container already running that can serve the invocation. When an idle container is available, we call it a “warm” container. If there isn’t a container readily available, the platform spins up a new one, and this is what we call a “cold start”.
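The routing logic above can be sketched as a toy model. This is purely illustrative, not any provider’s actual implementation: `ContainerPool`, `invoke`, and `_boot_container` are names made up for this sketch.

```python
class ContainerPool:
    """Toy model of how a serverless platform routes an invocation."""

    def __init__(self):
        self.idle_containers = []  # "warm" containers waiting for work

    def invoke(self, request):
        if self.idle_containers:
            container = self.idle_containers.pop()   # warm path: reuse
            cold = False
        else:
            container = self._boot_container()       # cold path: extra latency
            cold = True
        response = container(request)
        self.idle_containers.append(container)       # keep it warm for next time
        return response, cold

    def _boot_container(self):
        # A real platform would pull the image, start the sandbox, and
        # initialize the runtime here; we just return a handler function.
        def handler(request):
            return f"handled {request}"
        return handler

pool = ContainerPool()
print(pool.invoke("req-1"))  # cold: no idle container exists yet
print(pool.invoke("req-2"))  # warm: reuses the container left by req-1
```

Notice that the first request pays the boot cost and the second does not, which is exactly the warm/cold distinction described above.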

When a function in a cold state is invoked, the request takes additional time to complete, because of the latency of starting up a new container. That’s the problem with cold starts: they make our application respond more slowly. In the “instant age” of the 21st century, this can be a big problem.
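To make that latency difference concrete, here is a minimal simulation. The sleep durations are invented for illustration; real cold-start penalties vary widely by provider, runtime, and package size.

```python
import time

def invoke(cold):
    """Simulate one invocation; the numbers are made up for illustration."""
    if cold:
        time.sleep(0.2)   # stand-in for container boot + runtime initialization
    time.sleep(0.01)      # the handler's own work
    return "ok"

start = time.perf_counter()
invoke(cold=True)
cold_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
invoke(cold=False)
warm_ms = (time.perf_counter() - start) * 1000

print(f"cold: {cold_ms:.0f} ms, warm: {warm_ms:.0f} ms")
```

The handler does the same work in both cases; the cold invocation is slower only because of the startup overhead bolted on in front of it.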

Now that we know what a “cold start” is, let’s dig into how they work. The inner workings differ depending on the service (AWS Lambda, Azure Functions, etc.) or open-source project (OpenFaaS, Kubeless, OpenWhisk, etc.) you’re using, but in general these principles apply to all serverless compute architectures.
