Facebook Always in Storage-Buying Mode

 
 
By Chris Preimesberger  |  Posted 2012-09-17

Facebook opened its first wholly owned data center in spring 2011 in Prineville, Ore., following a two-and-a-half-year construction period. The facility is custom-built for Facebook's purposes and uses the company's Open Compute Project architecture. The site houses two huge 330,000-square-foot buildings: one for daily operations and one for cold storage.

If you ask anybody at Facebook how much storage the company is running at any given time, you'll never get a straight answer, because they honestly do not know.

Let's just say that Facebook never leaves storage-buying mode.

Facebook launched the Open Compute Project (OCP) on April 7, 2011, in an attempt to open-source the hardware and data center specifications it uses to efficiently power a social network of more than 950 million people.

As part of the project, Facebook has published specs and mechanical designs used to construct the motherboards, power supplies, server chassis, and server and battery cabinets for its data center. That's unprecedented enough for a company of Facebook's growing scale, but the social network is also open-sourcing specs for its data center's electrical and mechanical construction.

The move is surprising because Facebook closely guards the information inside its walled-garden network. The company has had to endure its share of flak from users over how it handles personal information, which it relies on to earn income.

Key Storage Rule: Facebook Does Not Partition Data

Beyond the well-documented security headaches Facebook has faced is the continuing battle to handle the enormous amount of data flowing into Prineville and the other data centers the company rents.

"One thing we established early on is that our data infrastructure is shared across the entire company, with some constraints on user access," Parikh said. "The challenge here is that it is not easily partitionable. We've hit these scaling limitations of this system, mainly because of our growth and because we try to keep it all together.

"A lot of times companies take the easy way out and say, 'OK, it's time to partition because we can't do this. We'll just separate this team from that team; we'll take the bigger thing and divide it into smaller and smaller pieces over time, and that's how we'll manage scale."

But breaking up a centralized IT system into smaller parts simply adds more complexity, cost and staff time.

"That has been unacceptable for us here at Facebook," Parikh said. "That's not how our product works, that's not how our team works, and that is a unique thing about how we work and how we face these challenges."

Parikh said his team is always looking for ways to speed up analysis of big data workloads.

"High-volume financial trading systems, for example, are now down to micro- or nano-second response times. That is the kind of competitive advantage that hedge funds are getting now in being able to process large volumes of data in extremely near-real-time sense," Parikh said.

This goes back to the unified-system approach. "You shouldn't have any friction that prevents somebody from accessing another organization's data that's going to help you drive more sales or better efficiency," Parikh said.

eWEEK will have more on this topic soon.

Chris Preimesberger is Editor of Features and Analysis for eWEEK. Twitter: @editingwhiz



 
 
 
 