Linking the CERN Processors
HP switches, including 600 ProCurve 3400cl, 400 ProCurve 3500yl and 20 ProCurve 5400-series devices, link the CERN processors at 1Gbps, with 10-Gigabit uplinks into the grid's core backbone. The network primarily uses fiber connectivity, although it also uses some UTP (unshielded twisted-pair) Category 6 copper cabling for 1Gbps links. Sixteen 10Gbps routers from Force10 Networks in the core backbone link the CERN network to other participants in the grid.

Key features in the ProCurve switches include security and manageability. "We must be able to automate management to run such a large network, and we want to secure it to the level that we allow only authorized MAC [media access control] addresses to access the network," he said. HP officials simplified management of the switches for CERN by enabling different types of management functions to be executed on the switches using the industry-standard SNMP. HP made it possible to "get the temperature of the switch via SNMP and do configuration of the switch using SNMP," said Pierre Bugnon, account manager for HP, in Geneva.

"We also need to make sure we can work with [HP] in the future and make sure they are open to collaboration to figure out how to do more. The relationship we established and their technology road map are very good, too," Grey said.

"CERN is a very important customer for us in terms of relationship," said Victor Svensson, business development manager for HP, in Grenoble, France. "We're providing a very high level of support directly to CERN. It goes beyond the sales engagement to include strategic collaboration between the companies."

HP's strict adherence to standards was also key in its selection. "One thing that really helped is that we worked off standards. That was a key requirement," said Svensson. "[CERN] didn't want proprietary features. They have a lot of different [networking] firms involved," added Bugnon.
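To give a feel for the kind of SNMP-based automation described above, here is a minimal sketch that builds an SNMPv1 GetRequest by hand (the standard BER framing from RFC 1157). The OID queried is the standard sysUpTime object; a switch temperature sensor would live under a vendor enterprise OID instead, and in practice a library such as Net-SNMP or pysnmp would do this work. All names here are illustrative, not HP's actual tooling.

```python
def ber_length(n):
    """Encode a BER length field (short or long form)."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def tlv(tag, value):
    """Wrap a value in a BER tag-length-value triple."""
    return bytes([tag]) + ber_length(len(value)) + value

def encode_int(v):
    """Encode a small non-negative INTEGER (enough for this sketch)."""
    body = v.to_bytes(max(1, (v.bit_length() + 8) // 8), "big")
    return tlv(0x02, body)

def encode_oid(oid):
    """Encode a dotted OID; the first two arcs pack into one byte."""
    arcs = [int(a) for a in oid.split(".")]
    body = bytearray([40 * arcs[0] + arcs[1]])
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        body.extend(reversed(chunk))
    return tlv(0x06, bytes(body))

def snmp_get(community, oid, request_id=1):
    """Build an SNMPv1 GetRequest message (RFC 1157 framing)."""
    varbind = tlv(0x30, encode_oid(oid) + tlv(0x05, b""))     # OID + NULL value
    pdu = tlv(0xA0, encode_int(request_id) + encode_int(0)    # request-id, error-status,
              + encode_int(0) + tlv(0x30, varbind))           # error-index, varbind list
    return tlv(0x30, encode_int(0)                            # version 0 = SNMPv1
               + tlv(0x04, community.encode()) + pdu)         # community string + PDU

# Query sysUpTime (1.3.6.1.2.1.1.3.0) on a managed switch.
packet = snmp_get("public", "1.3.6.1.2.1.1.3.0")
```

The resulting datagram would be sent to UDP port 161 on the switch, which answers with a GetResponse carrying the requested value; SET requests for configuration use the same framing with a different PDU tag.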
Once the experiments begin, 7,000 scientists will analyze subsets of the data, looking for proof of the elusive Higgs boson particle or theoretical supersymmetric particles not yet proved. The Higgs boson has never been seen, but scientists believe that if it can be identified, it could help explain why an electron has a negative charge and a proton has a positive charge. While scientists using telescopes can perceive 3 percent of what the universe is made of, the other 97 percent remains a mystery. "There are many candidates for what dark matter could be; one could be supersymmetric particles. The hope is to find them," Grey said. "It's fairly esoteric, but it's also pretty fundamental. It's about understanding the universe."

The glue that holds the project together and makes the data capture and analysis possible is the grid middleware, "a layer of software that allows you to do your analysis without having to worry where the data is or the computing power on the grid," Grey said. The middleware, which optimizes use of the grid, includes such elements as resource brokers, which determine at any given point in time which data centers have the necessary capacity for a task submitted by an authorized physicist and decide where the task will be handled.

The types of jobs that the middleware, developed in-house, will distribute across the processors fall into three categories. "It is either simulation of physics interaction in the detector; reconstruction of real detector signals or of simulated data; [or] physics analysis, where the outcome of many positions are sampled in a statistical way," Meinhard said. The middleware also implements authentication and authorization to ensure that research institutions supporting the grid have appropriate access and that "others are not getting a free ride," Grey said.
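The brokering and authorization logic described above can be sketched in a few lines. This is a toy model under stated assumptions: the `Site`, `AUTHORIZED_USERS` and `broker` names are hypothetical illustrations, not the actual LHC grid middleware API, and real brokers weigh data locality and queue depth, not just free slots.

```python
from dataclasses import dataclass

# Hypothetical registry standing in for the grid's real authentication layer.
AUTHORIZED_USERS = {"physicist@cern.ch"}

@dataclass
class Site:
    name: str
    total_slots: int
    busy_slots: int = 0

    def free_slots(self):
        return self.total_slots - self.busy_slots

def broker(user, needed_slots, sites):
    """Route a job to the data center with the most spare capacity."""
    if user not in AUTHORIZED_USERS:
        raise PermissionError(f"{user} is not authorized on the grid")
    candidates = [s for s in sites if s.free_slots() >= needed_slots]
    if not candidates:
        return None                       # no site can take the job right now
    best = max(candidates, key=Site.free_slots)
    best.busy_slots += needed_slots       # reserve capacity at the chosen site
    return best.name

sites = [Site("CERN", 100, busy_slots=90), Site("FNAL", 80, busy_slots=20)]
chosen = broker("physicist@cern.ch", 30, sites)   # FNAL has the most free slots
```

An unauthorized user is rejected before any capacity check runs, which is the "no free ride" property the middleware enforces.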
The grid middleware, which represents more than 1 million lines of code created using the open-source Globus Toolkit, also performs accounting functions to "make sure nobody's hogging the grid," Grey said. "It also implements security and monitoring to ensure the grid is available 24 by 7." "It is a big engineering effort to make sure the middleware is stable and runs well when the real data comes out later this year," Grey said. "It's being continuously improved and re-engineered and hardened to make sure it is [up-to-date]."

To date, as the accelerator is being completed, CERN and its partners are running simulations across the grid, shifting "gigabytes of files and large amounts of data" to test its mettle, Grey said. CERN is also leading the charge to create a European multiscience grid that will support a range of scientists and experiments. "The long-term vision with these grids is like the Web," Grey said. "At some point, they link up and standards develop, so that, as a scientist, you just submit [computing tasks] to the grid and don't ask which one [will complete it]. But we're quite a ways from this."

Once the experiments start in November, the project will gather data for 15 years, although the data could be studied for many years after the LHC shuts down.
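The accounting function Grey describes amounts to comparing each institution's consumption against its agreed share of the grid. A minimal sketch, assuming per-institution shares expressed as fractions (the `GridAccounting` class and its methods are illustrative names, not the real accounting service):

```python
# Illustrative fair-share accounting, not the actual LHC grid accounting service.
class GridAccounting:
    def __init__(self, shares):
        # shares: agreed fraction of the grid per institution, summing to 1.0
        self.shares = shares
        self.used = {inst: 0.0 for inst in shares}

    def charge(self, inst, cpu_hours):
        """Record CPU-hours consumed by an institution's jobs."""
        self.used[inst] += cpu_hours

    def over_share(self, inst):
        """True if an institution is 'hogging' more than its agreed slice."""
        total = sum(self.used.values())
        if total == 0:
            return False
        return self.used[inst] / total > self.shares[inst]

acct = GridAccounting({"CERN": 0.5, "FNAL": 0.3, "RAL": 0.2})
acct.charge("RAL", 600.0)    # RAL has burned 60% of all CPU-hours so far
acct.charge("CERN", 400.0)   # CERN has burned 40%, under its 50% share
```

A scheduler would consult `over_share` before dispatching new jobs, deprioritizing institutions running ahead of their allocation.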
HP and Force10 Networks were chosen for the LHC grid project because of their feature set, cost-effectiveness and "a great willingness [by HP] to work with us at an engineering level on the challenges," Foster said.