Achieving Sub-Millisecond Latencies With Redis by Using Better Serializers
How some simple changes can result in less latency and better memory usage.
Redis Strings are probably the most used (and abused) Redis data structure.
One of their main advantages is that they are binary-safe, which means you can store any kind of binary data in Redis.
But as it turns out, most Redis users are serializing objects to JSON strings and storing them inside Redis.
What’s the problem, you might ask?
- JSON serialization/deserialization is slow and CPU-intensive compared to binary formats
- JSON text takes up more space in storage, which is expensive in Redis since it’s an in-memory database
- You increase your overall service latency without any real benefit
Using JSON to store data in Redis will increase your latency and resource usage without bringing any real benefit.
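To get a feel for the size difference, you can compare the encoded payloads directly. A rough illustration (it assumes the `msgpack` package is installed; the record itself is made up for the example):

```python
import json

import msgpack

record = {"id": 42, "name": "ada", "tags": ["a", "b"]}

# Compact JSON (no whitespace) vs. MessagePack binary encoding
json_bytes = json.dumps(record, separators=(",", ":")).encode()
msgpack_bytes = msgpack.packb(record)

# MessagePack encodes the same structure in fewer bytes
print(len(json_bytes), len(msgpack_bytes))  # prints: 39 24
```

The savings come from MessagePack’s compact type tags: small integers and short strings cost one byte of overhead instead of quotes, braces, and key separators.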
One other “simple” optimization you can use is compression.
This one will depend on each use case since it will be a trade-off between size, latency, and CPU usage.
Algorithms like ZSTD or LZ4 can be used with minimal CPU overhead, resulting in some good storage savings.
The following charts show how much you gain just by switching from JSON to a binary format like MessagePack.
These charts also include the serialization/deserialization times.
We can also see that we can save some storage/memory by using compression at the expense of some latency.
While the previous charts showed a fairly complex JSON object that LZ4 handles well in terms of compression ratio, the next charts show that ZSTD has the advantage when compressing arrays of floats.
Here I ran the benchmarks with arrays of different sizes.
As you can see, just by switching from JSON to MessagePack, you can reduce your latency by more than 3x without any real disadvantage!
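If you want to check the serialization cost on your own data, a rough micro-benchmark with `timeit` could look like this (absolute numbers will vary with hardware, payload shape, and whether msgpack’s C extension is installed):

```python
import json
import timeit

import msgpack

# An illustrative payload; substitute your own objects here
payload = {
    "values": list(range(1000)),
    "labels": [f"item-{i}" for i in range(100)],
}

# Time full round-trips (serialize + deserialize), 1000 iterations each
json_time = timeit.timeit(
    lambda: json.loads(json.dumps(payload)), number=1000
)
msgpack_time = timeit.timeit(
    lambda: msgpack.unpackb(msgpack.packb(payload)), number=1000
)

print(f"JSON round-trip:        {json_time:.3f}s")
print(f"MessagePack round-trip: {msgpack_time:.3f}s")
```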
A simple example using Python to set/get a Redis String with JSON and MessagePack:
As you can see, it’s as simple as using JSON.
Using Redis to Build a Realtime “NIKE Sneakers Drop App” Backend
Let’s learn how to build a backend that can handle millions of users efficiently
How does this all sound? Is there anything you’d like me to expand on? Let me know your thoughts in the comments section below (and hit the clap if this was useful)!
Stay tuned for the next post. Follow so you won’t miss it!