Speeding up Amazon ECS container deployments

Container orchestration is a complex problem. There are many different components communicating with each other in support of your application container deployment. First the orchestrator starts your application. Then the orchestrator needs to make a decision about whether your application is ready for web traffic. Later the application might be scaled down or a new version needs to be rolled out, so the old version of the application needs to be stopped. The orchestrator must decide whether the application is safe to stop. The orchestrator wants to maintain your application’s availability while doing a rolling deployment.

As a result, you can sometimes end up in situations where your container deployments seem to take longer than you expect. Have you ever thought to yourself, "Why is this new version of my container taking 15 minutes to roll out?" If so, it is usually because parts of your container orchestration are configured to be excessively "safe". Here are some tips and tricks for configuring slightly less safe, but considerably faster container deployments on Amazon ECS:

By default the load balancer requires 5 passing health checks, each made 30 seconds apart, before the target container is considered healthy. A little math (5 checks × 30 seconds = 150 seconds) shows that this takes 2 minutes and 30 seconds. Because Amazon ECS uses the load balancer health check as part of determining container health, this means that by default it takes a minimum of 2 minutes and 30 seconds before ECS considers a freshly launched container to be healthy.
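The arithmetic above can be sketched as a tiny helper. The "faster" values shown (2 checks at 5-second intervals) are an illustrative assumption for comparison, not a recommendation taken from the text; pick thresholds that match how quickly your application actually becomes ready.

```python
def min_time_to_healthy(healthy_threshold: int, interval_seconds: int) -> int:
    """Minimum seconds before a load balancer target can be marked healthy:
    the target must pass `healthy_threshold` consecutive health checks,
    spaced `interval_seconds` apart."""
    return healthy_threshold * interval_seconds

# Default target group settings described above: 5 checks, 30 seconds apart.
print(min_time_to_healthy(5, 30))  # 150 seconds = 2 minutes 30 seconds

# A hypothetical faster (less safe) configuration: 2 checks, 5 seconds apart.
print(min_time_to_healthy(2, 5))   # 10 seconds
```

Lowering these values trades safety for speed: with fewer, more frequent checks, a briefly flaky container is more likely to be marked healthy prematurely.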
