1. Argonne Leadership Computing Facility
The Argonne Leadership Computing Facility (ALCF) began deploying Mira in January 2012 with the delivery of two single-rack systems. By early March, all of the Early Science Program (ESP) project application teams had their codes built and running on these racks. When completed this fall, Mira will span 48 racks and 786,432 processor cores, weigh 104 tons, and be capable of 10 quadrillion floating-point operations per second. It will be 20 times faster and five times more energy-efficient than Argonne’s current system, Intrepid. The ALCF is committed to delivering 768 million core hours on Mira in 2013, based on a production schedule starting October 1, 2012, although Mira is likely to reach production status much sooner. Once in full production, more than 5 billion computing hours will be allotted to scientists on Mira every year, and any researcher in the world can apply for time on Mira to run programs for their experiments.
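The headline figures are mutually consistent, as a quick back-of-the-envelope check shows. The clock speed and flops-per-cycle values below are standard Blue Gene/Q specifications, not figures stated in the article:

```python
# Sanity-check Mira's headline numbers from published Blue Gene/Q specs.
# Assumptions (not in the article): 1.6 GHz PowerPC A2 cores whose
# quad floating-point units retire 8 double-precision flops per cycle.

CORES = 786_432            # total processor cores across 48 racks
CLOCK_HZ = 1.6e9           # 1.6 GHz per core (assumed BG/Q spec)
FLOPS_PER_CYCLE = 8        # 4-wide double-precision FMA (assumed BG/Q spec)

peak_flops = CORES * CLOCK_HZ * FLOPS_PER_CYCLE
print(f"Peak: {peak_flops / 1e15:.2f} petaflops")  # ~10 quadrillion flop/s

# Core-hours available in a full year of production:
core_hours_per_year = CORES * 24 * 365
print(f"Core-hours per year: {core_hours_per_year / 1e9:.2f} billion")
```

The peak works out to roughly 10 petaflops, and a full year of the machine offers about 6.9 billion core-hours, consistent with the "more than 5 billion computing hours" allotted annually.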
2. Lawrence Livermore National Laboratory
Installation of Sequoia, a 20-petaflop IBM Blue Gene/Q system, at Lawrence Livermore National Laboratory is nearing completion, and shakeout testing has begun with unclassified applications. When testing is complete, Sequoia will be dedicated to research for the National Nuclear Security Administration’s program to ensure the safety, security and reliability of the nation’s nuclear deterrent without underground testing. [Photo: Sequoia being installed at Lawrence Livermore National Laboratory’s Terascale Simulation Facility.]
3. U.K. Met Office Hadley Centre
In March, the Met Office, the United Kingdom’s national weather service, installed a new IBM Power 775 supercomputer along with 33 petabytes of storage as part of the United Kingdom’s $60 million investment in the Met Office Hadley Centre’s Climate Programme, aimed at maintaining the United Kingdom’s place as a global leader in climate research and modeling. The technology will help the Hadley Centre continue to provide the cutting-edge evidence needed to develop plans to prepare the United Kingdom for a changing climate. Minister for Universities and Science David Willetts said: “Supercomputing is fundamental to modern research, especially as it becomes more complex and produces an increasing amount of data. This investment will keep the U.K. at the forefront of climate science and modeling, underpinned by the leading knowledge and expertise of the Met Office.”
4. SciNet
SciNet announced in April that, as part of the new Southern Ontario Smart Computing and Innovation Partnership (SOSCIP) with IBM, the Canadian federal government, the Government of Ontario, the University of Toronto and six other Ontario universities, it will run what is expected to be Canada’s largest computer, an IBM Blue Gene/Q able to execute almost half a quadrillion mathematical operations per second. The system will be used for flagship government and industrial projects such as designing racing cars, aircraft engines and nuclear reactors. SOSCIP will also make the supercomputer available to small and midsized enterprises in Ontario to speed product research and development, helping drive innovation for Ontario’s economy.
5. STFC’s Daresbury Laboratory
The United Kingdom’s Science and Technology Facilities Council (STFC) and IBM announced in March a collaboration to create one of the world’s foremost centers in software development, the International Centre of Excellence for Computational Science and Engineering (ICE-CSE). Located at STFC’s Daresbury Laboratory in Cheshire, the ICE-CSE aims to establish high-performance computing as a tool for the United Kingdom’s industry, accelerating economic growth. Under the initial three-year agreement, STFC will use an IBM Blue Gene/Q and IBM iDataPlex to develop the software needed to run on the next generation of supercomputers. With a peak performance of 1.4 petaflops, roughly the equivalent of 1 million iPads, the Blue Gene/Q system at Daresbury will be the United Kingdom’s most powerful machine by a considerable margin. According to STFC, it is also the most energy-efficient supercomputer in the United Kingdom, roughly eight times more efficient than most other available systems.
6. National Oceanic and Atmospheric Administration
IBM announced in March that its high-performance computing technology and services will power the National Oceanic and Atmospheric Administration’s (NOAA) operational weather and climate supercomputers to support fast, reliable weather forecasts. Through a competitive bid process, IBM was awarded the $502 million federal contract to bolster NOAA’s ability to provide weather, climate, ocean and space weather information with increased accuracy and shorter lead times. NOAA will use the system for weather and climate modeling, which involves managing, analyzing and visualizing vast amounts of data. The accuracy of daily weather forecasts, as well as warnings of severe weather such as tornadoes, hurricanes, thunderstorms and snowstorms, depends on the performance of these supercomputers.
7. Victorian Life Sciences Computation Initiative
The Victorian Life Sciences Computation Initiative (VLSCI), hosted by the University of Melbourne in Australia, is planning to use an IBM Blue Gene/Q to advance the study of human disease. The acquisition of the IBM supercomputer is the second stage of an agreement between IBM and the University of Melbourne to provide state-of-the-art computational capacity for life sciences research within the VLSCI, in conjunction with the IBM Research Collaboratory for Life Sciences-Melbourne. The Victorian Government and the university established the AU$100 million VLSCI to strengthen the research capabilities and outcomes of Victorian life sciences research. The VLSCI has drawn computation and biology experts from around the world to manage the supercomputer resource and provide training and support to researchers unaccustomed to working at this scale. Expected to be operational by June 2012, the IBM supercomputer will provide 836 teraflops of processing power, the equivalent computing power of more than 20,000 desktop computers, making it one of the fastest supercomputers in Australia, based on the Top 500 list, and the fastest supercomputer dedicated to life sciences research in the Southern Hemisphere.
8. Slovak Academy of Sciences in Slovakia
In December, IBM announced that the Slovak Academy of Sciences chose IBM technology to launch an important high-performance computing project in Slovakia. Working with local business partner DATALAN, IBM will provide a 96-teraflop supercomputer based on IBM’s Power 775 and 755 systems and DS5300 storage. The supercomputer will be deployed for scientific research requiring high computing capacity in physical chemistry, biochemistry, materials sciences, particle physics, quantum physics and more. It will be shared by three departments of the Slovak Academy of Sciences (the Computer Center, the IT Institute in Bratislava and the Institute of Experimental Physics in Kosice) and a consortium of four major universities (the Slovak University of Technology in Bratislava, Zilina University, Matej Bel University in Banska Bystrica and the Technical University in Kosice). It will also be made available to the broader academic community in Slovakia.
9. National Center for Atmospheric Research
In November 2011, the National Center for Atmospheric Research (NCAR) announced that IBM would install critical components of a petascale supercomputing system at the new NCAR-Wyoming Supercomputing Center. The system, named Yellowstone, will be made available to scientists to study complex processes in the atmosphere and throughout the Earth ecosystem. It will also be used to accelerate research into climate change, severe weather, geomagnetic storms, carbon sequestration, aviation safety, wildfires and other critical geoscience topics. It is planned to be up and running this summer.
10. ABB in Switzerland
Scientists at IBM and ABB, the world’s largest builder of electricity grids, announced in November that they are using supercomputers to study and potentially develop a new type of high-voltage insulator that will improve the efficiency of power-transmission grids. An improved insulator has the potential to transform the power grid by reducing energy loss and outages caused by material deterioration when exposed to weather. Using an IBM Blue Gene supercomputer and massively parallel algorithms, the scientists were able to simulate and study the individual molecules used in the silicone rubber to better understand how it reacts to damage caused by the environment. With the computing power available, the scientists built realistic models of the material comprising approximately 1 million atoms. These simulations will lead to testing new materials in the silicone rubber composition to improve its resilience to damage.
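To give a sense of the kind of computation involved, the sketch below evaluates a pairwise Lennard-Jones potential for a handful of atoms, the textbook building block of atomistic materials simulation. This is purely illustrative, not ABB’s or IBM’s actual code: the epsilon and sigma parameters and the random coordinates are placeholders, and production codes replace the O(N²) loop with neighbor lists and domain decomposition to reach million-atom scale.

```python
# Toy pairwise Lennard-Jones energy evaluation, illustrating the kind
# of atomistic computation molecular-dynamics codes parallelize.
# NOT the actual ABB/IBM simulation; all parameters are placeholders.
import numpy as np

EPSILON, SIGMA = 1.0, 1.0  # placeholder interaction parameters

def lennard_jones_energy(pos):
    """Total Lennard-Jones potential energy over all atom pairs."""
    # All pairwise displacement vectors and distances. This is O(N^2);
    # large-scale MD codes use neighbor lists and spatial decomposition.
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(pos), k=1)  # count each pair once
    r = dist[iu]
    sr6 = (SIGMA / r) ** 6
    return float(np.sum(4.0 * EPSILON * (sr6 ** 2 - sr6)))

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 10.0, size=(100, 3))  # 100 atoms in a box
print(f"Total potential energy of {len(positions)} atoms: "
      f"{lennard_jones_energy(positions):.3f}")
```

Even this tiny example shows why the work scales so steeply: doubling the atom count quadruples the pairwise interactions, which is what makes a million-atom model a supercomputer-class problem.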
11. Vestas Wind Systems
In October, IBM announced that Danish energy company Vestas Wind Systems will use IBM big data analytics software and powerful IBM systems to improve wind-turbine placement for optimal energy output. Turbine placement is a major challenge for the renewable energy industry, and Vestas expects to accelerate the adoption of wind energy internationally and expand its business into new markets by overcoming this challenge. Vestas is addressing the issue of turbine placement by using IBM BigInsights software and an IBM iDataPlex supercomputer named “Firestorm” to analyze petabytes of structured and unstructured data, such as weather reports, tidal phases, geospatial and sensor data, satellite images, deforestation maps and weather modeling research, to pinpoint optimal installation locations. The analysis, which used to take weeks, can now be done in less than one hour.
12. KEK in Japan
IBM announced in September that it had reached an agreement with Japan’s High Energy Accelerator Research Organization (KEK) to develop the KEK Central Computer System to further advance our understanding of the universe. Researchers will use the system to analyze data generated from experiments in elementary particle and nuclear research. The system will also be used for theoretical calculations to develop new particle accelerators. The KEK Central Computer System went into operation in April and consists of more than 370 highly scalable IBM iDataPlex systems. The organization currently manages several petabytes of experimental data, but it estimates that the amount will likely balloon to more than 10 petabytes in the near future.