New hardware used in ultra-fast analog deep learning

Artificial intelligence (AI), and machine learning in particular, is taking the computing world by storm, even though it has been in development for decades. AI tools are changing the way we use data and computers in everything from medicine to traffic control. New research shows how we can make AI even more efficient and useful.

The name ‘artificial intelligence’ often appeals to the imagination and conjures up images of sentient robots. But the reality is different. Machine learning does not mimic human intelligence. What it does mimic, however, are the complex neural pathways that exist in our own brains.

This mimicry is what gives AI its strength. But that power comes at a high cost, both financially and in the energy required to run the machines.

New research from the Massachusetts Institute of Technology (MIT), published in Science, is part of a growing subset of AI research focused on architectures that are cheaper to build, faster, and more energy efficient.

The multidisciplinary team used programmable resistors to produce ‘analog deep learning’ machines. Just as transistors are at the heart of digital processors, the resistors are built into repeating arrays to create a complex, layered network of artificial “neurons” and “synapses.” The machine can perform complex tasks such as image recognition and natural language processing.
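
To picture how such an array computes, here is a minimal sketch in Python of the textbook crossbar idea: weights stored as conductances, inputs applied as voltages, outputs read as currents. The array sizes and values are illustrative assumptions, not taken from the MIT device.

```python
import numpy as np

# Each programmable resistor stores a weight as its conductance G. Driving
# the crossbar rows with input voltages V makes every device pass a current
# I = G * V (Ohm's law), and Kirchhoff's current law sums the currents down
# each column, so a whole matrix-vector product emerges in one physical step.

rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
G = rng.uniform(0.1, 1.0, size=(n_outputs, n_inputs))  # conductance matrix: the layer's weights
V = rng.uniform(0.0, 1.0, size=n_inputs)               # input voltages applied to the rows

I = G @ V  # output currents on the columns: the weighted sums of the inputs
print(I)
```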

Humans learn by weakening and strengthening the synapses that connect our neurons – the brain cells.

While digital deep learning weakens and strengthens the connections between artificial neurons through algorithms, analog deep learning occurs by increasing or decreasing the electrical conductivity of the resistors.

The resistors’ conductivity is increased by pushing more protons into them, which attracts more electron current. This is done using a battery-like electrolyte that lets protons pass but blocks electrons.
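
This description suggests a simple mental model: training adjusts a device’s conductance directly, rather than rewriting a stored number. The toy function below sketches that idea under an assumed linear pulse response; the names, step size, and limits are hypothetical, and the real electrochemistry is far richer.

```python
# Toy model of an analog weight update. Voltage pulses nudge a device's
# conductance up or down, standing in for protons pushed into (or pulled
# out of) the oxide. A linear response is assumed purely for illustration.

def apply_pulses(conductance, n_pulses, step=0.01, g_min=0.0, g_max=1.0):
    """Raise (n_pulses > 0) or lower (n_pulses < 0) a device's conductance."""
    updated = conductance + n_pulses * step
    return min(g_max, max(g_min, updated))  # real devices saturate at physical limits

g = 0.50
g = apply_pulses(g, +5)   # inject protons: conductance, and hence the weight, rises
g = apply_pulses(g, -2)   # withdraw protons: conductance falls
print(round(g, 2))        # 0.53
```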

“The device’s mechanism of action is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Since we are working with very thin devices, we could accelerate the movement of this ion by using a strong electric field and pushing these ionic devices into the nanosecond operating regime,” said senior author Bilge Yildiz, a professor in MIT’s Nuclear Science and Engineering and Materials Science and Engineering departments.

Using phosphosilicate glass (PSG) as the inorganic base material for the resistors, the team found that their analog deep learning device could process information a million times faster than previous versions, making it about a million times faster than the firing of our own synapses.

“The action potential in biological cells rises and falls on a time scale of milliseconds, since the voltage difference of about 0.1 volts is limited by the stability of water,” said senior author Ju Li, professor of materials science and engineering. “Here we apply up to 10 volts to a special glass film of nanoscale thickness that conducts protons without permanently damaging it. And the stronger the field, the faster the ionic devices.”
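
Those two timescales make the “million times faster” figure easy to check. A quick back-of-the-envelope sketch, using rounded values assumed from the quotes above:

```python
# Rough check of the "million times faster" comparison: a biological synapse
# works on millisecond timescales, while the protonic device operates in the
# nanosecond regime quoted above.

synapse_time = 1e-3  # seconds: ~1 ms for an action potential to rise and fall
device_time = 1e-9   # seconds: the device's nanosecond operating regime
print(f"{synapse_time / device_time:.0e}")  # 1e+06, i.e. about a million-fold
```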

Because the protons do not damage the material, the resistor can run for millions of cycles without breaking down.

“The speed was certainly surprising. Normally we would not apply such extreme fields across devices, so as not to turn them into ash. But instead, protons shuttled across the device stack at tremendous speeds, notably a million times faster compared to what we had before. And this movement doesn’t do any damage, thanks to the small size and low mass of protons,” said lead author and MIT postdoc Murat Onen.

“The nanosecond timescale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field,” adds Li.

PSG also makes the device extremely energy efficient and compatible with silicon fabrication techniques, which means it can be integrated into commercial computer hardware.

“With that important insight and the very powerful nanofabrication techniques, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate at reasonable voltages,” said senior author Jesús A. del Alamo, a professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really brought these devices to a point where they now look really promising for future applications.”

“Once you have an analog processor, you no longer train networks that everyone is working on. You train networks of unprecedented complexity that no one else can afford, and as a result, outperform them all. In other words, this is not a faster car, this is a spacecraft,” adds Onen.

Analog deep learning has two main advantages over its digital cousin.

First, Onen says, computation is performed within the memory device itself, rather than data being transferred back and forth between memory and processors.

Second, analog processors perform their operations in parallel, so a growing network does not need more time for each new calculation.
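
A small sketch can make both points concrete. The loop below plays the role of a digital processor fetching each weight before using it; the one-line product stands in for the crossbar, where the weights already sit in place and every weighted sum settles at once. All names and sizes are illustrative assumptions.

```python
import numpy as np

# Contrasting the two cost models on the same small layer.

W = np.random.default_rng(1).uniform(size=(3, 4))  # illustrative weight matrix
x = np.ones(4)                                     # illustrative input vector

y_digital = np.zeros(3)
for i in range(3):          # one multiply-accumulate at a time,
    for j in range(4):      # each weight shuttled from memory first
        y_digital[i] += W[i, j] * x[j]

y_analog = W @ x            # stands in for the in-memory, all-at-once crossbar

print(np.allclose(y_digital, y_analog))  # True: same result, different cost model
```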

Now that the device’s effectiveness has been demonstrated, the team plans to develop it for large-scale production. They also plan to remove factors that limit the voltage needed for the protons to work efficiently.

“The collaboration we have will be essential to innovate in the future. The path forward will still be quite challenging, but at the same time it is very exciting,” said Professor del Alamo.
