Periscope is a live video streaming app, well integrated into the Twitter platform and used by millions of users worldwide.
Its backend infrastructure runs on Amazon Web Services (AWS), and the open source software Redis is used at multiple layers of the stack, particularly to meet the team's high-throughput and low-latency goals at that scale of usage.
Caching is implemented at several layers inside Periscope’s backend and every application programming interface (API) touches Redis in some way. Periscope stores hundreds of gigabytes of data in Redis.
Redis is used alongside other databases as well as commercial and homegrown solutions, and its usage is determined by data access patterns. Data that is accessed frequently is stored in Redis, while data with slower or infrequent access lives in other databases (see the sketch below). Periscope's stack is based on Go, uses Docker for deployment and relies on many AWS services.
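Routing hot, frequently read data to Redis while colder data stays in slower stores maps onto a classic cache-aside pattern. Below is a minimal sketch in Go, the language Periscope's stack is based on, using the open source go-redis client; the key format, TTL and the loadFromPrimaryStore helper are illustrative assumptions, not Periscope's actual code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// loadFromPrimaryStore stands in for a slower backing database.
// It is a hypothetical placeholder, not Periscope's real data layer.
func loadFromPrimaryStore(userID string) (string, error) {
	return fmt.Sprintf("profile-for-%s", userID), nil
}

// getUserProfile implements a simple cache-aside read: try Redis first,
// fall back to the primary store on a miss, then populate the cache.
func getUserProfile(ctx context.Context, rdb *redis.Client, userID string) (string, error) {
	key := "user:profile:" + userID

	val, err := rdb.Get(ctx, key).Result()
	if err == nil {
		return val, nil // cache hit
	}
	if err != redis.Nil {
		return "", err // a real Redis error, not just a cache miss
	}

	// Cache miss: load from the slower store and write back with a TTL.
	val, err = loadFromPrimaryStore(userID)
	if err != nil {
		return "", err
	}
	if err := rdb.Set(ctx, key, val, 10*time.Minute).Err(); err != nil {
		return "", err
	}
	return val, nil
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	profile, err := getUserProfile(ctx, rdb, "12345")
	if err != nil {
		panic(err)
	}
	fmt.Println(profile)
}
```

The pattern keeps read latency low for hot keys while limiting Redis memory use, since only frequently accessed data is written back with an expiry.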
Twitter software engineer Mohammad Almalkawi and his team are responsible for API and backend database design for the Periscope service at Twitter. With scalability and reliability of the API as their primary concern, Mohammad's team set out to ensure low latency, high availability and reliability for their Redis deployment.
The team wanted high reliability for their Redis layer, but at the same time they wanted to minimise any operational overhead of running Redis. They did not want to spend in-house resources on operating and managing Redis.
While evaluating different options, the team found that, in most cases, they would have had to build substantial Redis operational expertise themselves to ensure a reliable deployment. With Redis Cloud, they found the solution that provided them with the zero-touch, zero-hassle service they needed.
High availability was another issue with the other options evaluated – detecting and recovering from failures took too long and would have had too serious an impact on availability. Redis Cloud, on the other hand, offered seamless scaling without any downtime, in-memory replication with instant failure detection and automatic failover within seconds.
Almalkawi says: “Redis Labs’ service requires the least amount of operational effort and delivers true high availability to our Redis deployment.
“It also delivers new Redis functionality with fewer delays than other services. It’s the most cost-effective and least operational overhead way to deploy Redis.”
According to Almalkawi, Redis Labs’ Redis Cloud delivers the stable high performance needed for Twitter’s extensive infrastructure with almost zero engineering effort.
It doesn't suffer from outages or latency issues and doesn't require specialised Redis expertise to manage. Comparing the alternatives to Redis Labs, Almalkawi found his choice justified: the other options would have required too great an investment in building operational Redis expertise and too much worry about availability and maintenance.