This post is about using Redis to store a huge number of simple key-value pairs. Original article: http://instagram-engineering.tumblr.com/post/12202313862/storing-hundreds-of-millions-of-simple-key-value.
When transitioning systems, sometimes you have to build a little scaffolding. At Instagram, we recently had to do just that: for legacy reasons, we need to keep around a mapping of about 300 million photos back to the ID of the user who created them, in order to know which shard to query (see more info about our sharding setup). While eventually all clients and API applications will have been updated to pass us the full information, there are still plenty that have old information cached. We needed a solution that would:
- Look up keys and return values very quickly
- Fit the data in memory, and ideally within one of the EC2 high-memory types (the 17GB or 34GB, rather than the 68GB instance type)
- Fit well into our existing infrastructure
- Be persistent, so that we wouldn’t have to re-populate it if a server died
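To make the lookup concrete, here is a minimal sketch of the photo-ID-to-user-ID mapping in Redis using redis-py. The `photo:{id}` key naming, connection details, and example IDs are illustrative assumptions, not Instagram's actual schema; the full article goes on to discuss how to make such a mapping fit the memory budget above.

```python
# Minimal sketch: store and look up a photo-ID -> user-ID mapping in Redis.
# Key naming ("photo:{id}") and connection settings are assumptions for
# illustration only.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def store_mapping(photo_id: int, user_id: int) -> None:
    # One plain string key per photo: simple and fast to look up,
    # but memory-hungry at the ~300 million-key scale described above.
    r.set(f"photo:{photo_id}", user_id)

def lookup_user(photo_id: int):
    # Returns the owning user ID, or None if the photo ID is unknown.
    value = r.get(f"photo:{photo_id}")
    return int(value) if value is not None else None

if __name__ == "__main__":
    store_mapping(1155315, 939)   # hypothetical IDs
    print(lookup_user(1155315))   # -> 939
```

The naive one-key-per-pair layout satisfies the fast-lookup and persistence requirements, but its per-key overhead is exactly what makes the memory constraint in the list above hard to meet.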