Twitter Builds Data Centre To Combat Outages

Twitter on July 21 said it is building a data centre in Salt Lake City, stocked with servers and other gear, to help eliminate the notorious outages that have plagued the microblogging service.

The company, racked by service availability issues since the website became popular in 2008, is in the process of moving its technical operations infrastructure into the custom data centre.

The move is designed to give the service more capacity as the company seeks to accommodate the 300,000 new users signing up each day; Twitter now has more than 100 million users in total.

Twitter will have full control over network and systems equipment, which will be geared for high availability and redundancy.

Open source OS and apps

In the tradition of Internet companies such as Google and Facebook, the data centre will employ commodity servers running open-source operating systems and applications.

“Importantly, having our own data centre will give us the flexibility to more quickly make adjustments as our infrastructure needs change,” said Twitter engineering team member JP Cozzatti in a blog post.

Twitter plans to bring additional Twitter-managed data centres online over the next 24 months. In the meantime, the company will continue to work with infrastructure provider NTT America to host its current equipment.

Regular outages

The data centre is one of several measures intended to make Twitter a more reliable and stable platform in the wake of regular outages. Twitter suffered roughly five hours of downtime in June, the most since October 2009.

These downtime incidents are so pronounced that most of the company's engineering effort is currently focused on the problem, with team members being moved off other projects to address it.

For example, on July 20, a fault in the database that stores Twitter user records caused problems on both Twitter.com and the company’s API, which lets third-party programmers build applications atop Twitter. Users were unable to sign up, log in or update their profiles.

“The short, nontechnical explanation is that a mistake led to some problems that we were able to fix without losing any data,” Cozzatti said in a separate blog post.

Even so, Twitter was able to survive the mad tweeting that accompanied the World Cup in June and July.

To meet demand, the company doubled the capacity of its internal network, and doubled the throughput to the database that stores tweets, among other speed and tuning changes.

Clint Boulton eWEEK USA 2012. Ziff Davis Enterprise Inc. All Rights Reserved
