BURLINGAME, Calif.—At the EclipseCon 2005 conference here, a leading Google Inc. engineer gave a rare glimpse into the workings of the search powerhouse.
In a keynote Wednesday titled “A Look Behind the Scenes at Google,” Urs Hoelzle, vice president of engineering, essentially described the company's secret sauce as “the return of batch computing.” Large amounts of cheap hardware, combined with networking and intelligent software for fault tolerance and other key functions, have gone a long way for the Mountain View, Calif., company, he said.
Hoelzle's talk also had a subplot: “the things behind search—how it works and how it's organized.”
Hoelzle described Google's mission as “to organize the world's information and make it universally accessible and useful.” This mission, he added, “drives a lot of the engineering we do.”
With the Web consisting of 10 billion pages and the average page running about 10 kilobytes, the total size is “many tens of terabytes,” Hoelzle said. Yet, “it's very big, but it's actually tractable. We need a lot of computers and disks and networking and software.”
In fact, Google runs its system on commodity hardware running Linux, he said.
“The underlying hardware is pretty damn cheap, but you have to build it into a system that's scalable,” he said.
The primary components of the Google system are hardware, networking, distributed systems software, search algorithms, machine learning, information retrieval and user interfaces, Hoelzle said. He added that the hardware environment consists of rack after rack of commodity PCs, 88 machines to a rack.
“These things are cheap, and you can buy them anywhere,” he said. “The problem is these things break. Things break every day. If you have a thousand of these machines, expect to lose one a day. So you have to deal with that, and you better deal with that in an automated way. You deal with it in software by replication and redundancy.”
Indeed, fault-tolerant software makes cheap hardware practical, Hoelzle said.
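Hoelzle's one-failure-a-day figure is easy to sanity-check. A back-of-the-envelope sketch in Python (the three-year per-machine lifetime is an assumed figure, not one from the talk):

```python
# Rough check of "a thousand machines, expect to lose one a day."
# Assumption (not from the talk): a commodity machine fails about
# once every three years on average.
machines = 1000
mtbf_days = 3 * 365  # assumed mean time between failures, per machine

failures_per_day = machines / mtbf_days
print(f"~{failures_per_day:.1f} failures per day")  # roughly one a day
```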
And “sometimes things go very wrong,” he said as he displayed a slide showing three fire trucks parked in front of a Google location. “I can't tell you exactly what happened, but it was not very good, and it was not just one machine going down.”
Yet Hoelzle described Google's fault-tolerant solutions as “very robust,” claiming the system “can tolerate massive failures.” The company once lost 1,800 of 2,000 machines in one environment, he said, but the operation continued to run—slower, but it kept working with 90 percent of its machines out of operation.
Google uses an index, similar to a book's index, that takes several days on hundreds of machines to compile, Hoelzle said. It covers more than 8 billion Web documents and 1.1 billion images.
Then Google uses its PageRank system for ranking and ordering the Web pages, he said. “Then we split them into pieces called shards, small enough to put on various machines. And we replicate the shards.”
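In code, the shard-and-replicate idea Hoelzle describes might look like the following sketch (the shard count, machine names and hash choice are all illustrative assumptions, not details from the talk):

```python
import hashlib

NUM_SHARDS = 4   # illustrative only; a real index has far more shards
REPLICAS = 3     # each shard lives on several machines

def shard_for(doc_id: str) -> int:
    """Hash a document ID to pick which shard holds it."""
    digest = hashlib.md5(doc_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Hypothetical placement: every shard is replicated, so losing
# one machine never takes a whole shard offline.
placement = {shard: [f"machine-{shard}-{r}" for r in range(REPLICAS)]
             for shard in range(NUM_SHARDS)}
```

A lookup then hashes the document ID, finds its shard, and can read from any of that shard's replicas.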
So an incoming query hits the Google Web server, then an index server, and eventually a document server that holds copies of the Web pages Google has downloaded.
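That request path can be sketched in a few lines of Python; the index contents and page snippets below are invented for illustration:

```python
# Toy request path: web server -> index server -> document server.
INDEX = {"eclipse": ["doc-1", "doc-7"]}                # term -> document IDs
DOC_STORE = {"doc-1": "Solar eclipse viewing guide ...",
             "doc-7": "EclipseCon 2005 coverage ..."}  # cached page copies

def handle_query(term: str) -> list[str]:
    doc_ids = INDEX.get(term, [])           # index-server lookup
    return [DOC_STORE[d] for d in doc_ids]  # document-server fetch
```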
Managing the overall system is the Google File System, which features a master that manages metadata. Data transfers occur directly between clients and chunkservers; files are broken into 64MB chunks, and each chunk is replicated on three machines for safety, Hoelzle said.
“The machines are cheap and not reliable, so we take our files and put them into chunks and spread them across a few machines and randomly distribute copies,” he said. “So you need to have a master that tells you where the chunks are.” The master will look at one chunkserver, and if it gets no response it assumes it is dead and it seeks out the next one.
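A minimal sketch of that chunking-and-failover scheme, with hypothetical server names (the 64MB chunk size and three-way replication are from the talk; everything else is assumed):

```python
import math
import random

CHUNK_SIZE = 64 * 1024 * 1024                   # 64MB chunks, per the talk
REPLICAS = 3                                    # three copies of each chunk
CHUNKSERVERS = [f"cs-{i}" for i in range(10)]   # hypothetical server names

def place_chunks(file_size: int) -> dict[int, list[str]]:
    """Master-side metadata: chunk index -> replica locations."""
    num_chunks = math.ceil(file_size / CHUNK_SIZE)
    return {i: random.sample(CHUNKSERVERS, REPLICAS)
            for i in range(num_chunks)}

def pick_replica(locations: list[str], alive: set[str]) -> str:
    """If a chunkserver does not respond, move on to the next copy."""
    for server in locations:
        if server in alive:
            return server   # client reads directly from this server
    raise IOError("all replicas unreachable")
```

A 200MB file splits into four chunks, and a client holding a chunk's location list keeps reading even if two of the three replicas are down.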
Hoelzle said there are more than 30 clusters at Google, some as large as 2,000 machines serving a petabyte-sized file system.
“You'd like to be able to write an application that can run on 1,000 machines in parallel,” he said.
Google's MapReduce framework provides automatic and efficient parallelization, fault tolerance, I/O scheduling and status monitoring.
“MapReduce basically does a grep over the Web on a thousand machines,” Hoelzle said. Grep is a Unix/Linux function that searches one or more input files for lines containing a match to a specified pattern.
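A single-process sketch of grep expressed as a map step and a reduce step (the real framework distributes the map tasks over thousands of machines; the document contents here are invented):

```python
import re
from collections import defaultdict

def map_grep(doc_id, text, pattern):
    """Map: emit (doc_id, line) for every line matching the pattern."""
    for line in text.splitlines():
        if re.search(pattern, line):
            yield doc_id, line

def reduce_grep(pairs):
    """Reduce: collect matching lines, grouped by document."""
    grouped = defaultdict(list)
    for doc_id, line in pairs:
        grouped[doc_id].append(line)
    return dict(grouped)

docs = {"a": "total eclipse of the sun\nno match here",
        "b": "eclipse conference keynote"}
matches = [pair for d, text in docs.items()
           for pair in map_grep(d, text, "eclipse")]
```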
As far as scheduling goes, the Google system has one master and many workers, and tasks are assigned to workers dynamically, Hoelzle said. The master assigns each map task to a free worker.
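That master/worker loop can be sketched as follows; worker completion order is randomized to stand in for machines finishing at different speeds, and all names are hypothetical:

```python
import random
from collections import deque

def run_map_tasks(tasks, workers):
    """Toy master: hand each map task to whichever worker frees up next."""
    pending = deque(tasks)
    free = deque(workers)
    in_flight = {}   # worker -> task currently assigned
    completed = {}   # task -> worker that finished it
    while pending or in_flight:
        while pending and free:                  # dynamic assignment
            worker = free.popleft()
            in_flight[worker] = pending.popleft()
        worker = random.choice(list(in_flight))  # some worker reports back
        completed[in_flight.pop(worker)] = worker
        free.append(worker)
    return completed
```

Because assignment is dynamic, a slow worker simply stops receiving new tasks; the real system also reassigns tasks held by machines that die.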
MapReduce has broad applicability, Hoelzle said. “It's parallel to Eclipse. If you have a good tool that is easy to use, your users come out of the woodwork. In the first year we had hundreds of MapReduce jobs being written. Our production index system is written on top of MapReduce.”
As a demonstration, Hoelzle showed a diagram of Google search activity for the term “eclipse” over the last few years. The diagram showed three spikes, each coinciding with a solar eclipse.