IBM research scientists have advanced the company’s effort to extend cognitive computing with a breakthrough that could lead to the development of neuromorphic computers.
Neuromorphic computing, or brain-inspired computing, is the use of computing technology built to perform like the neuro-biological architectures in the human nervous system. A team of scientists at IBM Research in Zurich has developed technology that imitates the way neurons spike, such as when a person touches something sharp or very hot.
In a blog post on the new discovery, IBM research scientist Manuel Le Gallo said IBM has developed artificial neurons that can be used to detect patterns and discover correlations in big data, with power budgets and at densities comparable to those seen in biology.
Le Gallo co-authored a paper titled "Stochastic phase-change neurons," which appeared this week in the journal Nature Nanotechnology. It is the second scientific breakthrough IBM has published in that journal this week; earlier, the company published a paper on its effort to build lab-on-a-chip technology to help fight cancer and other diseases.
Le Gallo said the artificial neurons are built to mimic what a biological neuron does, though they don't have exactly the same functionality. The behavior is close enough, however, to achieve computation similar to that of the brain, he said.
He also noted that typically, artificial neurons are built with standard complementary metal oxide semiconductor (CMOS)-based circuits, which is the stuff most of today’s computers are made of. However, IBM is using non-CMOS devices, such as phase-change devices, to reproduce similar functionality at lower power consumption and increased areal density, Le Gallo said.
The goal is to imitate the computational power of a massive number of neurons to accelerate cognitive computing for analyzing things such as the explosion of information coming from the internet of things (IoT) and other sources of big data.
In its paper, the IBM Research team demonstrated how the neurons could detect correlations from multiple streams of events.
“Events could be, for example, Twitter data, weather data or sensory data collected by the internet of things,” Le Gallo said. “Assume that you have multiple streams of binary events and you want to find which streams are temporally correlated; for example, when the 1s come concurrently. We show in the paper how we could do this discrimination using just one neuron connected to multiple plastic synapses receiving the events.”
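To make the idea concrete, the sketch below shows, in ordinary Python rather than phase-change hardware, how a single integrate-and-fire neuron with simple plastic synapses can pick out which binary event streams are correlated. It is not IBM's implementation; the stream counts, threshold, leak and learning rate are illustrative assumptions chosen for readability.

```python
# Illustrative sketch (not IBM's method): one leaky integrate-and-fire neuron
# with plastic synapses learns which binary event streams fire together.
import numpy as np

rng = np.random.default_rng(0)

n_streams, n_steps = 10, 2000
correlated = {0, 1, 2}  # streams 0-2 share a hidden driver; the rest are independent

# Generate binary event streams: correlated ones tend to fire concurrently.
hidden = rng.random(n_steps) < 0.1
events = np.zeros((n_streams, n_steps), dtype=bool)
for s in range(n_streams):
    if s in correlated:
        events[s] = hidden & (rng.random(n_steps) < 0.9)
    else:
        events[s] = rng.random(n_steps) < 0.1

weights = np.full(n_streams, 0.5)        # plastic synaptic weights
membrane, threshold, leak = 0.0, 1.2, 0.5
lr = 0.02                                # learning rate for the plasticity rule

for t in range(n_steps):
    inputs = events[:, t].astype(float)
    membrane = leak * membrane + weights @ inputs   # integrate weighted events
    if membrane >= threshold:                       # neuron fires
        membrane = 0.0                              # reset after the spike
        # Potentiate synapses active at the spike, depress the inactive ones.
        weights += lr * (inputs - 0.5)
        weights = np.clip(weights, 0.0, 1.0)

print("learned weights:", np.round(weights, 2))
# The correlated streams should typically end up with larger weights,
# which is how the single neuron "discriminates" them from the rest.
```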
Le Gallo said neuromorphic computing is simply more efficient than conventional computing because computing and storage are co-located in a neural network. In conventional computing, memory and logic are separate. To perform a computation, you must first access the memory, obtain data and transfer it to the logic unit, which performs the computation, he said.
“And whenever you get a result, you have to send it back to the memory,” said Le Gallo. And this process goes back and forth continuously. “So if you’re dealing with huge amounts of data, it will become a real problem.”
However, with computing and storage co-located in a neural network, “You don’t have to establish communication between logic and memory; you just have to make appropriate connections between the different neurons,” he noted. “That’s the main reason we think our approach will be more efficient, especially for processing large amounts of data.”
IBM’s artificial neurons consist of phase-change materials, including germanium antimony telluride.
“We have been researching phase-change materials for memory applications for over a decade, and our progress in the past 24 months has been remarkable,” IBM Fellow Evangelos Eleftheriou said in a statement.
Eleftheriou added that the new memory techniques demonstrate the capabilities of phase-change-based artificial neurons, “which can perform various computational primitives such as data-correlation detection and unsupervised learning at high speeds using very little energy.”