The Fleet Numerical Meteorology and Oceanography Center of the U.S. Navy uses supercomputers and storage technology to support the Department of Defense's tsunami relief initiatives.
No technology accepted by the scientific community today can accurately forecast underwater earthquakes or the disasters they can cause. However, supercomputing and storage systems are aiding recovery from the recent catastrophic tsunami in Asia by processing air- and sea-condition models.
The Fleet Numerical Meteorology and Oceanography Center of the U.S. Navy uses supercomputers and storage technology from Silicon Graphics Inc. To support the Department of Defense's tsunami relief initiatives, the center is running a high-resolution, regional weather-prediction model called COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System), which forecasts conditions along the coasts of Indonesia, including Sumatra.
In addition, to help relief planes select optimum flight paths into the region, the center is running a model for aircraft routing.
The weather center's modeling programs rely on observations collected around the world from ships, aircraft, land stations and satellites. On average, more than 6 million observations come into the center each day. From that input, analysts develop approximately 500,000 charts and forecasts of oceanic and atmospheric conditions, which they distribute to the military around the world.
"We process about 1TB of data through our system per day. We really can't afford downtime," said Mike Clancy, acting technical director at the center, in Monterey, Calif. "We have customers relying on [the delivery of] our products in a timely manner."
Operating 24 hours a day, 365 days a year, the center generates a weather forecast for the entire globe every 12 hours. In addition to providing data to the military, the center makes much of its information available to the public via the National Weather Service and an agreement with The Weather Channel, as well as through its Web site.
FNMOC began using SGI servers, supercomputers and storage technology in 2001. Today, the network includes two Origin 3800 machines, two Origin 3900s and two 12-processor Origin systems, which are clustered because the center's work is continuous and the output of one job often contributes to the input of the next, Clancy said. The systems are connected through SGI's shared-file system, called InfiniteStorage Shared Filesystem CXFS, which permits data to be passed among operating systems without any replication.
"We have a very complicated operational run, with a number of jobs that run in sequence," Clancy said. "They're very interdependent."
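The job sequencing Clancy describes, in which jobs run in order and each job's output feeds the next over a shared filesystem, can be sketched in miniature. This is an illustrative toy, not FNMOC's actual software: the job names and the `run_job` helper are invented, and a temporary directory stands in for the CXFS shared storage.

```python
from pathlib import Path
import tempfile

def run_job(name, infile, outfile):
    """Toy 'job': read the previous job's output file (if any) from
    shared storage and append this job's result to its own output."""
    data = infile.read_text() if infile else ""
    outfile.write_text(data + f"{name} done\n")

# A temp dir stands in for the shared filesystem all jobs can see.
workdir = Path(tempfile.mkdtemp())

# Hypothetical stages of a forecast run, executed strictly in sequence.
jobs = ["ingest_observations", "global_analysis",
        "coamps_regional", "flight_routing"]

prev = None
for job in jobs:
    out = workdir / f"{job}.out"
    run_job(job, prev, out)
    prev = out  # the output of one job becomes the input of the next
```

Because every job reads and writes the same shared storage, no stage needs to copy data to another machine before the next stage can start, which is the property the CXFS shared filesystem provides in the real cluster.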