A startup that is designing a processor aimed specifically at machine learning and artificial intelligence applications is getting a $30 million boost from a range of investors, including Samsung.
Graphcore, which has spent the past two years in Bristol, England, laying the groundwork for its upcoming intelligence processing unit (IPU), came out of stealth mode Oct. 31 and announced the $30 million Series A round, which includes investments from the Samsung Catalyst Fund as well as venture capital firms Robert Bosch Venture Capital, Amadeus Capital Partners and C4 Ventures.
Company officials said the goal is to create processors that will speed up machine learning by 10 to 100 times compared with systems built on current CPUs from the likes of Intel and GPUs from Nvidia and others. At the same time, Graphcore officials want to make artificial intelligence (AI) more accessible to a wide range of businesses, developers and devices.
“Graphcore’s mission is to make machine learning faster, easier and more intelligent,” Graphcore co-founder and CEO Nigel Toon wrote in a post on the company blog. “Our technology will reduce the cost of accelerating AI applications in the cloud; the same technology will bring AI to low power consumer devices. It will enable recent deep learning applications to evolve more rapidly towards useful, general artificial intelligence. While innovation at the algorithmic level in machine learning has been unprecedented, the same cannot be said about processors.”
GPUs are getting a lot of attention in the deep learning space because of their parallel-processing capabilities, Toon wrote. Microsoft is investigating field-programmable gate arrays (FPGAs), chips that can be reprogrammed after manufacturing, but they are expensive and difficult to program, the CEO said.
“These are stopgaps not long term solutions,” Toon wrote. “Machine intelligence has a very different compute workload from anything that has come before and needs a new approach.”
Graphcore engineers are looking to build highly parallel processors that come with software tools and libraries that are faster, more flexible and easier to use than current offerings, he wrote. Graphcore officials expect to bring the first IPUs to market in 2017, powering what they are calling the IPU-Appliance, a system designed to accelerate AI applications in the cloud and in enterprise data centers while driving down overall costs. The appliance will speed up the training and inference tasks in machine learning operations by as much as 100 times, they said.
The company is targeting such workloads as natural language processing, self-driving vehicles, personalized health care, intelligent mobile devices and robotics.
AI refers to systems that can collect input (such as voice commands or images from the environment around them), process that data in real time and then react accordingly. It can be seen in such technologies as Siri on Apple iPhones and movie recommendation programs on Netflix, and it will play an increasingly important role in other spaces, such as autonomous cars. Marc Hamilton, vice president of solutions architecture and engineering at Nvidia, told eWEEK in September that "AI is everywhere. We believe it's the most important computing technology in the industry right now."