MENLO PARK, Calif. — Facebook is much like the Starship Enterprise in that it likes to go where no company has gone before.
This is probably because not many IT companies, especially young ones, have had to serve upward of 950 million registered users daily, a high percentage of them in real time. Not many have to sell advertising to about 1 million customers or keep dozens of new products in the works, all at the same time.
Facebook, which has a clear do-it-yourself IT approach, also designs its own servers and networking. It designs and builds its own data centers. Its staff writes most of its own applications and creates virtually all of its own middleware. All of this operational IT is united in one extremely large system used by internal and external users alike.
For example, Facebook’s human resources group, the accounting office, Mark Zuckerberg on email and even you at your laptop checking your status are all using exactly the same gigantic, amorphous data center system that circles the globe in its power and scope.
Everything Facebook Does Involves Big Data
“So just about everything we do turns out to be a big data problem,” said Jay Parikh, vice president of Infrastructure Engineering at Facebook, who spoke recently to a small group of journalists at the company headquarters. “This affects every layer of our stack. We’ve talked with some of you about the servers, storage, networking and the data center, as well as all the software, the operations, the visibility, the tools — it all comes together in this one application that we have to provide to all our users.”
Big data is simply about having insight and using it to make an impact on your business, Parikh said.
“It’s really very simplistic. If you aren’t taking advantage of the data you are collecting and keeping in your business, then you just have a big pile of data,” Parikh said. “We are getting more and more interested in doing things with the data we are collecting.”
Facebook doesn’t always know what it wants to do with the user lists, Web statistics, geographic information, photos, stories, messages, Web links, videos and everything else that the company collects, Parikh said. “But we want to collect everything, we want to instrument everything: cameras, when that door opens and closes, the temperature in this room, who walks in and out the lobby.
“We want to know who visits the site, what activities they do, where they do it on the site. So everything is interesting to us,” he said.
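To make that instrument-everything mindset concrete, here is a minimal sketch of what raw event collection can look like; the event names, fields and the log_event helper are hypothetical illustrations, not Facebook’s actual pipeline:

```python
import json
from datetime import datetime, timezone

def log_event(event_type, **fields):
    """Append one raw, schema-light event record; how it will be used can be decided later."""
    record = {
        "event_type": event_type,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **fields,
    }
    # A real pipeline would write to a distributed log or queue, not a local file.
    with open("events.log", "a") as f:
        f.write(json.dumps(record) + "\n")

# Collect everything, even before knowing how it will be used.
log_event("door_sensor", building="hq", door_id="lobby-1", state="open")
log_event("room_temperature", room="conf-a", celsius=21.5)
log_event("page_view", user_id=12345, page="/profile", referrer="/home")
```

The point of the sketch is that each record is kept loose and self-describing, so questions nobody has thought to ask yet can still be answered from the raw events later.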
Facebook Always in Storage-Buying Mode
Facebook opened its first wholly owned data center in spring 2011 in Prineville, Ore., following a two-and-a-half-year construction period. It is custom-built for Facebook’s purposes and uses the company’s Open Compute Project (OCP) architecture. The site has two huge 330,000-square-foot buildings: one for daily operations and one for cold storage.
If you ask anybody at Facebook how much storage the company is running at any given time, you’ll never get a straight answer, because they honestly do not know.
Let’s just say that Facebook never leaves storage-buying mode.
Facebook launched the OCP on April 7, 2011, an unprecedented attempt to open-source the hardware and data center specifications it uses to power a social network of more than 950 million people.
As part of the project, Facebook has published the specs and mechanical designs used to construct the motherboards, power supplies, server chassis, and server and battery cabinets for its data center. That is remarkable enough for a company of Facebook’s growing scale, but the social network is also open-sourcing the specs for its data center’s electrical and mechanical construction.
The move is surprising because Facebook closely guards the information inside its walled garden. It has had to endure its share of flak from users over how it handles personal information, which the company relies on to earn income.
Key Storage Rule: Facebook Does Not Partition Data
Above and beyond all the well-documented security headaches Facebook has faced is its continuing battle to handle the enormous amount of data flowing into Prineville and the other data centers it rents.
“One thing we established early on is that our data infrastructure is shared across the entire company, with some constraints on user access,” Parikh said. “The challenge here is that it is not easily partitionable. We’ve hit these scaling limitations of this system, mainly because of our growth and because we try to keep it all together.
“A lot of times companies take the easy way out and say, ‘OK, it’s time to partition because we can’t do this. We’ll just separate this team from that team; we’ll take the bigger thing and divide it into smaller and smaller pieces over time, and that’s how we’ll manage scale.’”
But breaking up a centralized IT system into smaller parts simply adds more complexity, cost and staff time.
“That has been unacceptable for us here at Facebook,” Parikh said. “That’s not how our product works, that’s not how our team works, and that is a unique thing about how we work and how we face these challenges.”
Parikh said his team is always looking for ways to speed up analysis of big data workloads.
“High-volume financial trading systems, for example, are now down to microsecond or nanosecond response times. That is the kind of competitive advantage that hedge funds are getting now in being able to process large volumes of data in extremely near real time,” Parikh said.
This goes back to the united-system approach. “You shouldn’t have any friction that prevents somebody from accessing another organization’s data that’s going to help you drive more sales or better efficiency,” Parikh said.
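As an illustration of why a single shared store keeps that friction low, here is a minimal sketch using SQLite as a stand-in for a shared data warehouse; the table names, columns and figures are hypothetical, not Facebook’s actual schema or tools:

```python
import sqlite3

# Toy stand-in for one shared warehouse: every team writes to the same store,
# so a cross-team question is a single query rather than a federation across
# per-team silos.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ad_sales (advertiser_id INTEGER, revenue REAL, day TEXT);
    CREATE TABLE site_activity (user_id INTEGER, advertiser_id INTEGER,
                                clicks INTEGER, day TEXT);
    INSERT INTO ad_sales VALUES (1, 500.0, '2012-08-01'), (2, 120.0, '2012-08-01');
    INSERT INTO site_activity VALUES (10, 1, 3, '2012-08-01'),
                                     (11, 1, 7, '2012-08-01'),
                                     (12, 2, 1, '2012-08-01');
""")

# One query spans what would otherwise be two teams' separate data sets.
rows = conn.execute("""
    SELECT s.advertiser_id, s.revenue, SUM(a.clicks) AS total_clicks
    FROM ad_sales s
    JOIN site_activity a ON s.advertiser_id = a.advertiser_id
    GROUP BY s.advertiser_id, s.revenue
""").fetchall()
print(rows)
```

In a partitioned setup, answering the same question would mean exporting or federating data across two separately managed stores, which is the complexity, cost and staff time Parikh says Facebook refuses to accept.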
eWEEK will have more on this topic soon.
Chris Preimesberger is Editor of Features and Analysis for eWEEK. Twitter: @editingwhiz