Periscope is a live video streaming app, closely integrated with the Twitter platform and used by millions of people worldwide.
Its backend infrastructure runs on Amazon Web Services (AWS), and the open source database Redis is used at multiple layers of the stack, particularly to meet the high-throughput, low-latency goals the team must hit at that scale of usage.
Caching is implemented at several layers inside Periscope’s backend and every application programming interface (API) touches Redis in some way. Periscope stores hundreds of gigabytes of data in Redis.
Redis is used alongside other databases, both commercial and homegrown, and its role is determined by data access patterns: frequently accessed data is stored in Redis, while data that can tolerate slower or infrequent access lives in other databases. Periscope's stack is written in Go, uses Docker for deployment and relies on many AWS services.
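That split by access pattern maps naturally onto a cache-aside approach. The sketch below is a minimal, hypothetical illustration in Go using the open source go-redis client; the key naming, TTL and the stubbed primary-store lookup are assumptions for illustration, not details of Periscope's actual codebase.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// getUserProfile is a hypothetical read path: hot data is served from Redis,
// and on a cache miss the record is loaded from the slower primary store and
// written back to Redis with a TTL.
func getUserProfile(ctx context.Context, rdb *redis.Client, userID string) (string, error) {
	key := "user:profile:" + userID // assumed key scheme

	// Fast path: frequently accessed data lives in Redis.
	val, err := rdb.Get(ctx, key).Result()
	if err == nil {
		return val, nil
	}
	if !errors.Is(err, redis.Nil) {
		return "", err // a real Redis error, not just a cache miss
	}

	// Slow path: fall back to the primary database (stubbed here).
	profile, err := loadProfileFromPrimaryStore(ctx, userID)
	if err != nil {
		return "", err
	}

	// Populate the cache so subsequent reads stay in Redis.
	if err := rdb.Set(ctx, key, profile, 10*time.Minute).Err(); err != nil {
		// A failed cache write is non-fatal; the data was still served.
		fmt.Println("cache set failed:", err)
	}
	return profile, nil
}

// loadProfileFromPrimaryStore stands in for the slower, infrequent-access database.
func loadProfileFromPrimaryStore(ctx context.Context, userID string) (string, error) {
	return `{"id":"` + userID + `"}`, nil
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	ctx := context.Background()
	if profile, err := getUserProfile(ctx, rdb, "42"); err == nil {
		fmt.Println(profile)
	}
}
```

The design choice is the usual one for this pattern: reads that hit Redis never touch the slower store, and a miss costs one extra round trip plus a cache write, so the hottest data stays cheap to serve.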
Twitter software engineer Mohammad Almalkawi and his team are responsible for the API and backend database design of the Periscope service. With scalability and reliability of the API as their primary concern, the team set out to ensure low latency and high availability for their Redis deployment.
The team wanted high reliability from the Redis layer, but also wanted to minimise the operational overhead of running it; they did not want to spend in-house resources on operating and managing Redis.
While evaluating different options, the team found that, in most cases, they would have had to build substantial Redis operational expertise themselves to ensure a reliable deployment. With Redis Cloud, they found the solution that provided them with the zero-touch, zero-hassle service they needed.
High availability was another issue with the other options evaluated: detecting and recovering from failures took too long and would have had too serious an impact on availability. Redis Cloud, on the other hand, offered seamless scaling without downtime, in-memory replication with instant failure detection, and automatic failover within seconds.
Almalkawi says: “Redis Labs’ service requires the least amount of operational effort and delivers true high availability to our Redis deployment.
“It also delivers new Redis functionality with fewer delays than other services. It’s the most cost-effective and least operational overhead way to deploy Redis.”
According to Almalkawi, Redis Labs’ Redis Cloud delivers the stable high performance needed for Twitter’s extensive infrastructure with almost zero engineering effort.
It doesn’t suffer from outages or latency issues and doesn’t require specialised Redis expertise to manage. Comparing the alternatives to Redis Labs, Almalkawi found his choice justified: the other options would have required too large an investment in building operational Redis expertise and too much worry about availability and maintenance.