System makers Fujitsu and Huawei Technologies reportedly are both planning to develop processors optimized for artificial intelligence workloads, moves that will put them into competition with the likes of Intel, Google, Nvidia and Advanced Micro Devices.
Tech vendors are pushing hard to bring artificial intelligence (AI) and deep learning capabilities into their portfolios to meet the growing demand generated by a broad range of workloads, from data analytics to self-driving vehicles.
Microsoft, Google, IBM and others are creating AI business units and building out products and services that can leverage the technologies. Chip makers also are making the move.
Intel last week unveiled its latest generation of Xeon server chips that, among other improvements, deliver 2.2 times the performance of their predecessors on deep learning training and inference tasks. The company also offers field-programmable gate arrays (FPGAs), which will play an increasing role in the future of AI, and has plans for “Lake Crest,” an upcoming processor aimed at deep learning workloads.
Nvidia for the past couple of years has shifted much of the focus of its business toward AI and deep learning, and AMD is looking to develop Radeon GPUs for AI workloads. Google has its tensor processing units (TPUs), which are designed specifically for AI workloads. Startup Graphcore is developing what it’s calling an intelligent processing unit, or IPU.
All this comes as industry analysts expect the AI market to expand rapidly in the coming years. Gartner analysts predict that by 2020, essentially all software and services will include AI technologies, although they noted that software makers’ desire to be seen on the leading edge of AI is causing confusion in the market over what is and what isn’t actual artificial intelligence.
Now Fujitsu and Huawei are working on their own AI-focused processors. Fujitsu engineers for the past couple of years have been working on what the company is calling a deep learning unit (DLU), and last month the company gave more details on the component at the International Supercomputing Conference.
According to a report on the Top500 site, which publishes the twice-yearly list of the world’s fastest supercomputers, Fujitsu’s DLU will rely on low-precision formats to drive both performance and energy efficiency. It will also include the company’s Tofu interconnect, which was developed for the high-performance computing (HPC) K computer.
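To see why low-precision formats matter, consider a minimal NumPy sketch (purely illustrative, not Fujitsu’s actual number format): the same weight matrix occupies half the memory in 16-bit floating point as in 32-bit, which in hardware translates into less data moved per operation and therefore less energy spent.

```python
import numpy as np

# Illustrative only: compare the memory footprint of the same weight matrix
# stored in 32-bit vs. 16-bit floating point. Lower precision means less
# data traffic per layer, which is where much of the energy savings comes from.
weights_fp32 = np.random.rand(1024, 1024).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4,194,304 bytes (4 MB)
print(weights_fp16.nbytes)  # 2,097,152 bytes (2 MB), half the data to move
```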
The chip reportedly will include 16 deep learning processing elements, with each of them housing eight single-instruction, multiple-data (SIMD) execution units. Fujitsu is predicting that the chip will offer 10 times the performance per watt of competitors’ products. Company officials have said that the plan is to initially release it next year as a coprocessor to a more traditional CPU, and later integrate the DLU into the CPU itself.
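As a rough illustration of the SIMD idea (not Fujitsu’s implementation), the sketch below contrasts eight separate scalar additions with a single vectorized add that operates on all the data elements at once, which is the style of execution those units are built for.

```python
import numpy as np

# Hypothetical illustration of SIMD-style execution: one instruction applied
# across a vector of data elements, rather than one element at a time.
a = np.arange(8, dtype=np.float32)
b = np.ones(8, dtype=np.float32)

# Scalar view: eight separate add operations.
scalar_result = [a[i] + b[i] for i in range(8)]

# SIMD view: a single vectorized add across all eight elements. The reported
# figures (16 processing elements x 8 SIMD units) imply 128 such units per chip.
simd_result = a + b

assert np.allclose(scalar_result, simd_result)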
The DLU is part of a larger effort by Fujitsu to establish itself in the fast-growing AI space. Last fall, the company announced new AI services for its Human Centric AI Zinrai platform.
Reports out of Asia said that at the 2017 China Internet Conference last week, Huawei CEO Yu Chengdong announced the company is building an AI-focused processor. Few details have been released, but the chip, which will be built by Huawei’s HiSilicon chip-making arm, reportedly will integrate a CPU, GPU and AI features onto a single piece of silicon, and will likely be based on the new AI-focused chip designs ARM introduced earlier this year at Computex.
Those designs, the Cortex-A75 and Cortex-A55 CPU cores, are based on ARM’s new DynamIQ architecture and will come with AI-specific instructions and enhanced security.
Huawei’s new chip is expected to be introduced later this year.