Web Scale (Systems Architecture & Systems Programming)


Cloudflare enables its customers to run serverless code at the edge globally at blazing speeds with almost zero cold startup time.

They achieve this with their V8 isolate-based deployment architecture, rather than with the traditional container-and-Kubernetes approach.

The prime reason for not using containers or VMs is to achieve sub-millisecond serverless latency and to support a very large number of tenants, each running their workloads independently at the edge without sharing memory or state.
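
To make the programming model concrete, here is a minimal sketch of a Worker using Cloudflare's public module syntax (this is illustrative code, not Cloudflare's internal implementation; the greeting and path handling are made up for the example). The handler is just a function that V8 can load into an isolate, with no process or container to boot:

```typescript
// A minimal Cloudflare Worker (module syntax). Each request is handled
// inside a V8 isolate, so there is no container or VM to spin up first.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // Per-request state lives only in this handler's scope; isolates
    // belonging to other tenants cannot see it.
    const greeting = `Hello from the edge! You requested ${url.pathname}`;

    return new Response(greeting, {
      headers: { "content-type": "text/plain; charset=utf-8" },
    });
  },
};
```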

We are aware that serverless instances spin up to handle requests and spin down when idle to save costs; this is a trade-off between latency and running costs. Spinning up a container or a VM to process a request takes anywhere from 500 ms to 10 seconds, which makes response times unpredictable.

Cloudflare's V8 isolate architecture, in contrast, warms up a function in under 5 milliseconds. However, this approach has trade-offs like any other design decision, which I'll discuss later in this post.
