Computers have become more powerful and the cost of computing has declined sharply over the past several decades. The industry has gone from the room-filling vacuum-tube computers of the 1950s, which had the processing power of a hand-held calculator, to supercomputers packed with thousands of fast multi-core CPUs, GPUs and other accelerators built on compact system-on-a-chip components.
However, Moore’s Law is starting to nudge up against the physical limits of chip technology, while the engineering challenges and costs of packing more transistors into ever-smaller spaces continue to mount.
Meanwhile, computing workloads are becoming more complex and the amount of data generated each year continues to grow. There is also a need to bring computing out to the network edge, closer to the systems and devices generating the data. New computing architectures, and new ways of thinking about those architectures, are required.
“We’re standing on the shoulders of giants and we see where we need to go, and it’s a long way,” Paul Teich, an analyst with Tirias Research, told eWEEK.
According to the most recent Top500 list of the world’s fastest supercomputers, released in November 2016, the number-one system was China’s Sunway TaihuLight, at 93 petaflops. Upcoming exascale systems, capable of 1 exaflop (1,000 petaflops) or more, promise to dwarf TaihuLight’s performance.
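A quick back-of-the-envelope calculation, using only the figures cited above, shows just how large that jump is:

```python
# Rough comparison of peak performance: an exascale system versus
# Sunway TaihuLight (93 petaflops on the November 2016 Top500 list).
taihulight_pflops = 93    # TaihuLight's benchmark result, in petaflops
exascale_pflops = 1000    # 1 exaflop = 1,000 petaflops

speedup = exascale_pflops / taihulight_pflops
print(f"An exaflop machine is roughly {speedup:.1f}x TaihuLight")
```

In other words, even a bare-minimum exascale system would be more than ten times faster than the fastest supercomputer on that list.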
Such performance is going to be crucial in the near future, according to Teich. The simulations researchers are running, from mapping the human genome to studying the global impact of climate change, are becoming increasingly complex.
Scientists and enterprises want to make greater use of big data analytics, processing petabytes of data to derive useful information in near real time. This effort will increasingly involve deep learning and artificial intelligence. Teich cited the example of moving traffic efficiently through a smart, connected city in ways that account for toll roads, intersections, traffic lights, pedestrian crossings, road closures, traffic jams and the like.
“Managing the flow of traffic and people is huge,” he said, noting that engineers and researchers have a challenge in figuring out how to put these exascale systems together. “The problem for us is on the design side. Our products and infrastructure are getting more complex. We need to be able to model more complex infrastructure.”
Designing exascale systems is an exercise in architectural balance. According to officials with the Exascale Computing Project (ECP), not only must computer scientists consider the hardware architecture and specifications of next-generation supercomputers, but they also need to look at what will be needed in the software stacks that will drive them.
They also must consider the applications that will run on top of the supercomputers, to ensure that the systems can be used productively by businesses, academic institutions and research centers. Engineers and architects are weighing not only the processors that will power the systems, but also the memory and storage technologies required to efficiently manage the enormous input and output of data.
Exascale systems will consist of large numbers of compute nodes, will be highly parallel and will have to be highly energy efficient, fitting into a power envelope of 20 to 30 megawatts.
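The power envelope implies an aggressive energy-efficiency target. A short sketch, using only the 20-to-30-megawatt range cited above and assuming a 1-exaflop machine, works out what that envelope demands per watt:

```python
# Energy efficiency implied by running one exaflop
# (10**18 floating-point operations per second) inside the
# 20-30 megawatt power envelope cited for exascale systems.
EXAFLOP = 10**18  # operations per second

for megawatts in (20, 30):
    watts = megawatts * 10**6
    gflops_per_watt = EXAFLOP / watts / 10**9
    print(f"{megawatts} MW envelope -> {gflops_per_watt:.1f} gigaflops per watt")
```

At the tight end of the envelope, the system must deliver 50 gigaflops for every watt it draws, which is why energy efficiency dominates exascale design alongside raw parallelism.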
Teich said there is a broad range of technologies that may play a role in future exascale systems, though many—such as optical computing and optical interconnects, graphene as a possible replacement for silicon, and quantum computing—aren't mature enough for practical application. Some may not be ready for a decade or more.