Twitter said July 21 it is building a data center in Salt Lake City to load up on servers and other gear that will pare the Web service's downtime. Twitter will have full control over network and systems gear, which will be geared for high availability and redundancy.
Twitter on July 21 said it is building a data center in Salt
Lake City to load up on servers and other gear that
will help it eliminate the notorious outages that have plagued the microblog.
The company, racked by service availability issues since the Website became
popular in 2008, is in the process of moving its technical operations
infrastructure into the custom data center.
The move is designed to give the service more capacity as the company seeks
to accommodate the 300,000 users signing up for new accounts each day on Twitter,
which has more than 100 million users.
Twitter will have full control over network and systems equipment,
which will be geared for high availability and redundancy.
In the tradition of Internet companies such as Google and Facebook, the data
center will employ commodity servers running open-source operating systems and
applications.
"Importantly, having our own data center will give us the flexibility
to more quickly make adjustments as our infrastructure needs change," said
Twitter engineering team member J.P. Cozzatti in a blog post.
Twitter plans to bring additional Twitter-managed data centers online over
the next 24 months. In the meantime, the company will continue to work with
infrastructure provider NTT America to host
its current equipment.
The data center is one of several measures intended to make Twitter a more reliable
and stable platform in the wake of regular outages. Twitter suffered
roughly five hours of downtime
in June, the most since October 2009.
The outages have become pronounced enough that most of the company's
engineering effort is currently focused on reliability, with the company
pulling team members from other projects to address the problem.
For example, on July 20, a fault in the database that stores Twitter user
records caused problems on both Twitter.com and the company's API,
which lets third-party programmers build applications atop Twitter. Users were
unable to sign up, log in or update their profiles.
"The short, nontechnical explanation is that a mistake led to some
problems that we were able to fix without losing any data," Cozzatti said
in a separate blog post.
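To illustrate the kind of third-party access the API provides, here is a minimal sketch of a client fetching recent tweets over Twitter's REST interface. The endpoint path and response fields reflect the unauthenticated v1 API of that era and are assumptions for illustration, not details reported in the article.

    # Minimal sketch of a third-party client calling Twitter's REST API.
    # The endpoint path and response fields are assumptions based on the
    # v1 API of that era, not details drawn from the article.
    import json
    import urllib.request

    PUBLIC_TIMELINE_URL = "http://api.twitter.com/1/statuses/public_timeline.json"

    def fetch_public_timeline(url=PUBLIC_TIMELINE_URL):
        """Fetch the public timeline and return the parsed list of tweet objects."""
        with urllib.request.urlopen(url, timeout=10) as response:
            return json.loads(response.read().decode("utf-8"))

    if __name__ == "__main__":
        for tweet in fetch_public_timeline():
            # Each tweet object is assumed to include "user" and "text" fields.
            print(tweet["user"]["screen_name"] + ": " + tweet["text"])

When the user database faltered on July 20, third-party applications making calls like this were affected along with Twitter.com itself.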
Even so, Twitter was able to survive the mad tweeting that accompanied the
World Cup in June and July.
To meet demand, the company doubled the capacity of its internal network,
and doubled the throughput to the database that stores tweets, among other
speed and tuning changes.