Grids Get a Grip on Your Brain

 
 
By Peter Coffee  |  Posted 2006-10-16
Massive grids and fast network processors redraw IT road maps.

Semi-surrounded by the curving movie screen that took up one whole end of the briefing room, I watched a three-dimensional animation of a human brain that was visibly losing mass. "The symptoms of chronic methamphetamine use are similar to those of Alzheimer's disease," explained a calm narrative voice. Without a trace of irony, it added, "This is your brain on drugs." This movie should be shown in high schools.

I was visiting the Laboratory of Neuro Imaging, or LONI, at the University of California, Los Angeles, where a 306-node cluster of Opteron-based Sun Fire servers is shifting the frontier of interactive visualization of complex data sets. "The challenge we face is combining images from hundreds or thousands of subjects," explained LONI's lab director, Dr. Arthur Toga.

A doctor who deals with human hearts can look at two different patients whose hearts aren't in the same position, but can nonetheless identify the left ventricle in each. The structures of the brain that perform different tasks aren't yet mapped with nearly this level of precision, but the massive processing power of the Sun N1 grid at LONI is aiding the process of understanding brain structure and function in much more specific ways.

Toga said LONI is "probably a little bit by ourselves" in the required synthesis of computer science, mathematics and neuroscience—as well as in the development of high-level tools that help researchers visualize the stages of processing enormous data sets, like those that come from magnetic resonance imaging scans of the brain.

Just one such scan can generate a data set that's tens of gigabytes in size; the useful aggregation of many such data sets, and the organization of that data into hierarchies that correspond to particular brain structures and behaviors, are tasks that quickly enter the realm of petabytes of data.
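The arithmetic scales up quickly. A back-of-envelope sketch makes the point (the 50GB-per-scan and 20,000-scan figures below are illustrative round numbers of my own, not figures LONI has published):

```python
# Back-of-envelope: how multi-subject imaging studies reach petabyte scale.
# Both figures are illustrative assumptions, not LONI's published numbers.
gb_per_scan = 50            # "tens of gigabytes" per MRI data set
scans = 20_000              # thousands of subjects, multiple scans each

total_gb = gb_per_scan * scans
total_pb = total_gb / 1_000_000   # 1PB = one million GB

print(f"{total_gb:,} GB = {total_pb} PB")  # 1,000,000 GB = 1.0 PB
```

And that's before counting the derived hierarchies and intermediate processing stages layered on top of the raw scans.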

That realm is the planned destination of a project announced late last month to build a new high-performance computing facility at the Texas Advanced Computing Center at The University of Texas at Austin, where the National Science Foundation has awarded a $59 million grant to construct a grid that's capable of 400 trillion floating-point operations per second. The new TACC facility will use quad-core Opteron processors, I learned in a conversation at UCLA with Marc Hamilton, Sun Microsystems' director of high-performance computing solutions.

With more than 13,000 CPUs, TACC's system will have more than 52,000 cores addressing more than 100TB (that is, 100,000GB) of memory, backed by 1.7 petabytes (1.7 million GB) of storage.
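Those figures hang together. A quick check, using only the numbers quoted above, shows how the core count and the per-core resources fall out:

```python
# Sanity-check the TACC figures quoted above (author's arithmetic).
cpus = 13_000
cores_per_cpu = 4                       # quad-core Opterons
total_cores = cpus * cores_per_cpu      # 52,000 cores

mem_gb = 100 * 1000                     # 100TB of memory
gb_per_core = mem_gb / total_cores      # roughly 1.9GB per core

peak_flops = 400e12                     # 400 trillion FLOPS
flops_per_core = peak_flops / total_cores  # ~7.7 GFLOPS per core at peak

print(total_cores, round(gb_per_core, 1), round(flops_per_core / 1e9, 1))
# 52000 1.9 7.7
```

Roughly 2GB of memory per core is a workmanlike ratio for the data-hungry scientific workloads a facility like this is built for.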

High-performance computing facilities such as LONI or the new TACC complex are made far more useful by their networked availability to researchers at other locations. Toga said researchers at other sites often lack the processing power to process the data they've collected, so they send it over the wire to LONI. Perversely, therefore, I've found in visits to large-scale supercomputing sites such as LONI that the limiting factor in what they can contribute to the research community is often their back-room bandwidth rather than their more photogenic computational capacity.

It was, therefore, an interesting coincidence that in the same week as my LONI briefing, I also had a conversation with Y.J. Kim, marketing director at Cavium Networks, of Mountain View, Calif. On Oct. 9, the company introduced its next generation of network-oriented microprocessors, which are marching nicely down the Moore's Law curve.

We're not just talking quad cores here: Cavium's Octeon Plus CN58XX processor family will offer, in volume shipment in the second quarter of next year, up to 16 64-bit cores on a chip consuming less than 40 watts at a 1GHz clock rate. In practical terms, this will enable full-duplex traffic at 10Gbps, with optional on-board coprocessors for executing security algorithms, compressing data and performing pattern-matching tasks.
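To put that in perspective, a rough per-packet cycle budget (my own arithmetic, not Cavium's, and one that ignores Ethernet framing overhead such as preamble and inter-frame gap) shows why 16 cores at 1GHz is a sensible match for a 10Gbps line rate:

```python
# Rough per-packet cycle budget at 10Gbps -- author's arithmetic, ignoring
# Ethernet framing overhead, so the real packet rate is somewhat lower.
cores = 16
clock_hz = 1_000_000_000            # 1GHz per core
line_rate_bps = 10_000_000_000      # 10Gbps, one direction
pkt_bits = 64 * 8                   # worst case: minimum-size 64-byte packets

pps = line_rate_bps / pkt_bits      # ~19.5 million packets per second
cycles_per_pkt = cores * clock_hz / pps

print(round(pps / 1e6, 1), round(cycles_per_pkt))  # 19.5 819
```

A budget on the order of 800 cycles per minimum-size packet, before the offload coprocessors even enter the picture, is what makes wire-speed security and pattern matching plausible.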

Whether scanning my brain tomorrow, or optimizing my retirement plan 10 years from now, this kind of power redefines mainstream IT capability.

Technology Editor Peter Coffee can be reached at peter_coffee@ziffdavis.com.

 
 
 
 
Peter Coffee is Director of Platform Research at salesforce.com, where he serves as a liaison with the developer community to define the opportunity and clarify developers' technical requirements on the company's evolving Apex Platform. Peter previously spent 18 years with eWEEK (formerly PC Week), the national news magazine of enterprise technology practice, where he reviewed software development tools and methods and wrote regular columns on emerging technologies and professional community issues. Before he began writing full-time in 1989, Peter spent eleven years in technical and management positions at Exxon and The Aerospace Corporation, including management of the latter company's first desktop computing planning team and applied research in applications of artificial intelligence techniques. He holds an engineering degree from MIT and an MBA from Pepperdine University, and he has held teaching appointments in computer science, business analytics and information systems management at Pepperdine, UCLA, and Chapman College.
 
 
 
 
 
 
 
