Twitter Builds Data Centre To Combat Outages
Twitter says it is building a data centre in Salt Lake City to load up on servers that will reduce the web service’s downtime
Twitter said on July 21 that it is building a data centre in Salt Lake City to load up on servers and other gear that will help it eliminate the notorious outages that have plagued the microblogging service.
The company, dogged by service availability issues since the website became popular in 2008, is moving its technical operations infrastructure into the custom data centre.
The move is designed to give the service more capacity as the company works to accommodate the 300,000 new accounts being created each day on Twitter, which has more than 100 million users.
Twitter will have full control over network and systems equipment, which will be geared for high availability and redundancy.
Open-source OS and apps
In the tradition of Internet companies such as Google and Facebook, the data centre will employ commodity servers running open-source operating systems and applications.
“Importantly, having our own data centre will give us the flexibility to more quickly make adjustments as our infrastructure needs change,” said Twitter engineering team member JP Cozzatti in a blog post.
Twitter plans to bring additional company-managed data centres online over the next 24 months. In the meantime, it will continue to work with infrastructure provider NTT America to host its current equipment.
Regular outages
The data centre is one of several measures intended to make Twitter a more reliable and stable platform in the wake of regular outages. Twitter suffered roughly five hours of downtime in June, the most since October 2009.
The outages have become so pronounced that most of the company’s engineering effort is currently focused on the issue, with team members being moved off other projects to address it.
For example, on July 20, a fault in the database that stores Twitter user records caused problems on both Twitter.com and the company’s API, which lets third-party programmers build applications atop Twitter. Users were unable to sign up, log in or update their profiles.
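For readers unfamiliar with how that API is consumed, the sketch below shows roughly how a third-party client might pull public tweets over HTTP. It is illustrative only: the endpoint URL and the JSON field names are assumptions based on the 2010-era REST API, not details taken from Twitter's announcement.

# Minimal sketch of a third-party client reading from Twitter's REST API.
# The endpoint and response fields are assumed for illustration and may
# not match what Twitter actually exposed at the time.
import json
import urllib.request

API_URL = "https://api.twitter.com/1/statuses/public_timeline.json"  # assumed endpoint

def fetch_public_timeline():
    """Fetch recent public tweets and return them as a list of dicts."""
    with urllib.request.urlopen(API_URL) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    for tweet in fetch_public_timeline():
        print(f"@{tweet['user']['screen_name']}: {tweet['text']}")

When the user database faltered, requests like this one would have failed alongside Twitter.com itself, which is why third-party applications went down at the same time.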
“The short, nontechnical explanation is that a mistake led to some problems that we were able to fix without losing any data,” Cozzatti said in a separate blog post.
Even so, Twitter was able to survive the surge of tweeting that accompanied the World Cup in June and July.
To meet demand, the company doubled the capacity of its internal network, and doubled the throughput to the database that stores tweets, among other speed and tuning changes.