Intel officials have confirmed that the company’s first 10-nanometer server processor will launch in 2020, with two more 14nm chips coming to market over the next two years.
At the chip vendor’s Data-Centric Innovation Summit Aug. 8, Navin Shenoy, executive vice president and general manager of Intel’s Data Center Group, unveiled a Xeon roadmap that shows 14nm Cascade Lake chips being released later this year, followed by the 14nm Cooper Lake chips in late 2019 and the 10nm Ice Lake processors in 2020.
Each of the generations of Xeon processors will come with a range of enhancements that will drive system performance improvements in traditional data center applications as well as emerging workloads such as artificial intelligence (AI), machine learning and data analytics, according to Shenoy.
They also highlight the chip maker’s evolving platform and solutions strategy, which includes a fast-expanding portfolio that goes beyond processors and extends into such areas as memory, networking, storage, security and AI.
That strategy is a key part of Intel’s efforts at a time when the data center chip space is becoming more competitive, with a resurgent Advanced Micro Devices, IBM and Arm-based processor makers such as Cavium, whose ThunderX2 systems-on-a-chip (SoCs) target the server market, all working to win bigger market shares.
Shenoy and other Intel executives also emphasized the company’s broad product portfolio and how it fits into the larger market strategy when asked by analysts at the event about the increasingly competitive chip landscape. Several analysts pointed to industry concern over Intel’s delay in launching 10nm processors at a time when AMD is preparing to roll out 7nm chips as early as next year.
Shenoy said discussions with customers involve more than simply silicon.
“We’re not talking to them about just microprocessors,” he said. “We talk to them about silicon photonics, we talk to them about smart NICs, and we talk to them about Optane persistent memory and we talk to them about FPGAs [field-programmable gate arrays] and we talk to them about custom ASICs. That breadth of our portfolio that we have and our ability to stitch those capabilities together to deliver higher levels of performance is a big-time difference between the way we were viewed five years ago and the way at least that we think about ourselves now.”
The upcoming Xeon chips will leverage many of these other technologies. Cascade Lake will include a new integrated memory controller that adds support for Optane DC, a persistent memory technology that Intel began shipping to select customers this week. Optane DC is designed to deliver fast, affordable memory, and Intel says it can speed certain analytics queries by as much as eight times compared with configurations that rely on DRAM alone.
Cascade Lake also will include an embedded AI accelerator called DL Boost, which speeds deep learning inference workloads; Intel says image recognition runs up to 11 times faster than on the Skylake Xeon chips released last year.
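Intel did not detail DL Boost’s internals at the summit, but inference accelerators of this kind typically rely on low-precision integer arithmetic: weights and activations are quantized to 8-bit integers and multiplied with results accumulated in a wider register. The sketch below is a hypothetical illustration of that general technique in plain Python, not Intel’s implementation; the function names and the example scale values are assumptions for the sake of the example.

```python
def quantize_int8(vec, scale):
    """Symmetric quantization: map floats to int8 by scaling, rounding and clamping."""
    return [max(-128, min(127, round(v / scale))) for v in vec]

def int8_dot(a_q, b_q, a_scale, b_scale):
    """Integer dot product accumulated at full precision, then rescaled to floats."""
    acc = sum(x * y for x, y in zip(a_q, b_q))  # plays the role of a wide accumulator
    return acc * a_scale * b_scale

# Assumed example data: two small activation/weight vectors in the range +/-2.0.
a = [0.5, -1.0, 0.25, 2.0]
b = [1.0, 0.5, -0.5, 0.125]
a_scale = b_scale = 2.0 / 127

approx = int8_dot(quantize_int8(a, a_scale), quantize_int8(b, b_scale),
                  a_scale, b_scale)
exact = sum(x * y for x, y in zip(a, b))
print(approx, exact)  # the int8 result closely tracks the float result
```

The speedup in real hardware comes from packing many such 8-bit multiplies into a single vector instruction, trading a small amount of accuracy for throughput.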
Cooper Lake will include more performance improvements, new I/O features and another new AI extension called Bfloat16 that will speed up AI and deep learning training. There also will be more innovations related to Optane DC. Ice Lake will roll out in 2020 and will share a common platform with Cooper Lake.
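Bfloat16 itself is a publicly documented number format: it keeps float32’s sign bit and 8-bit exponent but truncates the mantissa to 7 bits, so a bfloat16 value is simply the top 16 bits of a float32. That preserves float32’s dynamic range (useful for training) while halving storage and bandwidth. A minimal sketch of the conversion, using only the standard library:

```python
import struct

def float32_to_bfloat16_bits(x):
    """Truncate an IEEE-754 float32 to bfloat16 by keeping its top 16 bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(b):
    """Expand a bfloat16 bit pattern back to float32 (low mantissa bits are zero)."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

pi_bf16 = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.141592653589793))
print(pi_bf16)  # → 3.140625, pi to roughly 3 significant decimal digits
```

The round trip shows the trade-off: only about 3 decimal digits of precision survive, but the exponent range is untouched, which is why bfloat16 is attractive for deep learning training.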
Along with the new chips, Intel also will roll out new pre-validated, pre-integrated software and hardware solutions with Cascade Lake. The company launched several such solutions developed with about 30 system makers to make it easier for customers to deploy and use the technologies in such areas as high-performance computing, analytics and hybrid clouds. The new offerings will cover AI, blockchain and SAP HANA in-memory systems.
Shenoy said that with the expanded product portfolio and broader strategy, Intel’s addressable market will hit about $200 billion by 2022, and that the company currently holds only about 20 percent of those technology markets.
Segments such as AI already are paying off for Intel. Last year, the company generated more than $1 billion in revenue from end users running AI workloads on Xeon-based systems in the data center.
“We’ve entered a new era of data-centric computing,” Shenoy wrote in a post on the company blog. “The proliferation of the cloud beyond hyperscale and into the network and out to the edge, the impending transition to 5G, and the growth of AI and analytics have driven a profound shift in the market, creating massive amounts of largely untapped data.”
“When you add the growth in processing power, breakthroughs in connectivity, storage, memory and algorithms, we end up with a completely new way of thinking about infrastructure,” Shenoy asserted.