Automated AWS Load Balancer Warm-Up

Automate AWS load balancer to avoid issues with huge traffic spikes

Luis Sena
3 min read · Nov 11, 2021

Chances are you’ve used an AWS load balancer at some point. If your traffic is small or constant, you’d be excused for believing it works like magic, without ever facing the reality that, like any other system, there are machines behind it that can fail.

On the other hand, if you’ve faced a scenario where your traffic can increase 10x almost instantly, you’ve probably seen the dreadful error 500 coming from the load balancer itself.

This is exactly what we faced when building a sneakers drop app in the past.

Each time we had a “drop event”, the server traffic would increase 100x in less than a minute which resulted in the AWS load balancer taking a while to scale out. During the scale-out, most users were unable to use the app.

So the solution seemed obvious: let's pre-warm our load balancer and be done with it!

Unfortunately, AWS doesn’t provide an automated way to do that and you’re forced to raise a ticket each time you need to warm up for a specific event.

This wouldn’t do: everything else was already automated, and a recurring manual step would leave too much room for human error.

We needed to automate this somehow. Here’s a diagram of our solution:

Diagram: each SQS message triggers a Lambda function; X messages trigger X concurrent Lambda functions.
  • 30 minutes before each event timestamp, an async process starts sending messages to SQS.
  • Every single SQS message triggers a single Lambda function.
  • Each Lambda generates a pre-defined amount of traffic.
  • The Fibonacci sequence was used as a guide for the increasing number of messages the backend should produce (finally, a good excuse to use the recursive Fibonacci function for something useful!).
  • Since those messages are produced in parallel, AWS provisions a separate Lambda function to handle each individual SQS message.
  • The load balancer starts to receive incrementally more traffic, which forces it to deploy more nodes behind the scenes, resulting in increased capacity.
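The producer side of the steps above can be sketched roughly like this. The queue URL, step count, and interval are placeholders, not values from the original setup; the Fibonacci ramp mirrors the approach described:

```python
import json
import time


def fibonacci_ramp(steps):
    """Yield a Fibonacci-increasing message count per step (1, 1, 2, 3, 5, ...)."""
    a, b = 1, 1
    for _ in range(steps):
        yield a
        a, b = b, a + b


def warm_up(queue_url, steps=10, interval_seconds=60):
    """Send a Fibonacci-increasing number of SQS messages per interval.

    Each message fans out to one concurrent Lambda invocation, so the
    generated traffic (and the load balancer capacity behind it) ramps up
    gradually. Requires AWS credentials; `queue_url` is a placeholder.
    """
    import boto3  # imported lazily so the ramp logic stays testable offline

    sqs = boto3.client("sqs")
    for step, count in enumerate(fibonacci_ramp(steps)):
        # SQS batches are capped at 10 messages, so chunk the sends
        for chunk_start in range(0, count, 10):
            batch = [
                {"Id": str(i), "MessageBody": json.dumps({"step": step})}
                for i in range(chunk_start, min(chunk_start + 10, count))
            ]
            sqs.send_message_batch(QueueUrl=queue_url, Entries=batch)
        time.sleep(interval_seconds)
```

Scheduling this function to run 30 minutes before each event timestamp (e.g. from an EventBridge rule or a cron job) completes the loop.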

This solution worked incredibly well. Not only was it really cheap (we used the cheapest Lambda config and stayed within the 1 million free monthly invocations), but it also allowed everything to run on autopilot!

Here’s a Python example of how the Lambda code might look:

[Screenshot in the original: Lambda code for the AWS load balancer auto-warmer]
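Since the original code was shared as an image, here is a minimal sketch of what such a handler might look like. The target URL, per-message request count, and payload shape are assumptions, not the original values:

```python
import json
import urllib.request

# Placeholder: the endpoint your load balancer fronts (an assumption for this sketch)
TARGET_URL = "https://example.com/health"
REQUESTS_PER_MESSAGE = 50  # the "pre-defined amount of traffic" per Lambda


def handler(event, context):
    """Fire a fixed number of requests at the load balancer for each SQS record."""
    sent = 0
    for record in event.get("Records", []):
        for _ in range(REQUESTS_PER_MESSAGE):
            try:
                urllib.request.urlopen(TARGET_URL, timeout=5)
                sent += 1
            except OSError:
                # Warm-up traffic is best-effort; ignore individual failures
                pass
    return {"statusCode": 200, "body": json.dumps({"requests_sent": sent})}
```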

The deployment can be automated with the Serverless Framework; the serverless.yml script:

[Screenshot in the original: this configuration creates a new Lambda with a trigger connected to a specific SQS queue]
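The original YAML was also an image; a hedged reconstruction of what such a serverless.yml might look like (the service name, handler path, and queue ARN are placeholders):

```yaml
service: lb-auto-warmer

provider:
  name: aws
  runtime: python3.9
  memorySize: 128   # the cheapest Lambda config, as mentioned above

functions:
  warmer:
    handler: handler.handler
    events:
      - sqs:
          arn: arn:aws:sqs:us-east-1:123456789012:warmup-queue  # placeholder ARN
          batchSize: 1   # one message per invocation, so X messages => X concurrent Lambdas
```

Setting `batchSize: 1` is what makes each SQS message trigger its own Lambda invocation instead of being batched.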

In our case, the traffic spikes were very easy to predict. We knew that it would always happen around the event timestamp.

For scenarios where there’s a pattern but it’s hard to convert that pattern into a specific schedule, what I’ve seen done successfully is training a machine learning model on your time-series traffic data and using it to predict when you should warm up your load balancer.
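As an illustration only (not part of the original setup), a naive starting point for that idea is to flag the hours whose historical traffic jumps far above the preceding baseline:

```python
def predict_warmup_hours(hourly_requests, spike_factor=5.0):
    """Return the hour indices where traffic historically jumps to at least
    `spike_factor` times the average of the preceding hours -- candidate
    slots to schedule a warm-up. A real deployment would use a proper
    time-series model (e.g. a seasonal forecaster) instead of this heuristic.
    """
    hours = []
    for i in range(1, len(hourly_requests)):
        baseline = sum(hourly_requests[:i]) / i
        if baseline > 0 and hourly_requests[i] >= spike_factor * baseline:
            hours.append(i)
    return hours
```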

How does this all sound? Is there anything you’d like me to expand on? Let me know your thoughts in the comments section below (and hit the clap if this was useful)!

Stay tuned for the next post. Follow so you won’t miss it!
