
Wednesday, December 18, 2019

Materials and neuromorphic computing

(In response to a topic suggestion from the Pizza Perusing Physicist....)

Neuromorphic computing is a trendy concept aimed at producing computing devices that are structured like, and operate like, biological neural networks.

In standard digital computers, memory and logic are physically separated and handled by distinct devices, and both are (nearly always) based on binary states and highly regular connectivity.  That is, logic gates take two-valued inputs (1 or 0) and produce similarly two-valued outputs; logic gates have no intrinsic memory of past operations they've performed; memory elements are also binary, with data stored as a 1 or a 0; and everything is organized in a regular, immutable pattern - memory registers populated and read by clocked, sequential logic gates via a bus.
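As a cartoon of that separation (all names and values here are my own, purely illustrative), here is a minimal Python sketch: the gate is a stateless function, the memory is a separate array of bits, and a clocked loop shuttles data between them.

```python
# Cartoon of the conventional architecture just described: logic is a
# stateless function of binary inputs, memory is a separate bank of binary
# cells, and a clocked loop shuttles data between them over a "bus".
# Purely illustrative - not a model of any real processor.

def nand(a: int, b: int) -> int:
    """A logic gate: pure function of its inputs, no memory of past calls."""
    return 1 - (a & b)

memory = [0, 1, 1, 0]  # separate binary storage ("registers")

for tick in range(3):  # each clock tick: read operands, compute, write back
    a, b = memory[0], memory[1]
    memory[2] = nand(a, b)
    print(f"tick {tick}: memory = {memory}")
```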

Natural neural networks, on the other hand, are very different.  Each neuron can be connected to many others via synapses.  Somehow, memory and logic are performed by the same neuronal components.  The topology of the connections varies with time - some connections are reinforced by repeated use, while others are weakened, in a continuous rather than binary way.  Information travels as temporal trains of voltage pulses called spikes.
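One standard toy model of this kind of spiking is the leaky integrate-and-fire neuron.  Here is a minimal sketch in Python - the parameter values are arbitrary choices for illustration, not drawn from any particular biological neuron:

```python
import numpy as np

def lif_spikes(drive, dt=1e-4, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: integrate the input, fire on threshold, reset."""
    v, spikes = 0.0, []
    for i_in in drive:
        v += dt * (-v / tau + i_in)  # leaky integration of the input
        if v >= v_thresh:            # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset              # reset, ready to integrate again
        else:
            spikes.append(0)
    return spikes

train = lif_spikes(np.full(1000, 60.0))  # constant drive -> periodic spiking
print(sum(train), "spikes in", len(train), "steps")
```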

All of these features can be emulated with standard digital computers.  Deep learning methods do this, with nodes arranged in multiple layers playing the role of neurons, and weighted links between nodes modeling the synaptic connections and their strengths.  This is all a bit opaque and doesn't necessarily involve simulating the spiking dynamics at all.  Implementing neural networks on standard hardware also forfeits some of the perceived benefits of biological neural nets, like their very good power efficiency.
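For concreteness, here is a bare-bones sketch of that kind of emulation - a tiny two-layer network as plain matrix arithmetic, with made-up random weights standing in for synaptic strengths, and no spikes anywhere:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)           # input activations
W1 = rng.normal(size=(8, 4))     # "synaptic" weights, layer 1
W2 = rng.normal(size=(2, 8))     # weights, layer 2

h = np.maximum(0.0, W1 @ x)      # weighted sum + ReLU nonlinearity
y = W2 @ h                       # another weighted sum
print(y)                         # dense arithmetic, no spiking dynamics
```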

In the last few years, as machine learning and big data have become increasingly important, there has been a push to implement, in device hardware, architectures that look much more like their biological analogs.  To do this, you might want nonvolatile memory elements that can also be used for logic, and that can have continuously graded values of "on"-ness determined by their history.  Resistive switching memory elements, sometimes called memristors (though that is a loaded term - see here and here), can fit the bill, as in this example.  Many systems can act as resistive switches, with conduction changes often set by voltage-driven migration of ions or vacancies in the material.
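As a toy model (my own invented rates and bounds, not a physical device model), a resistive switch can be caricatured as a conductance that is nudged, continuously and nonvolatilely, by each voltage pulse it sees:

```python
# Toy resistive-switching ("memristor-like") element: the conductance g is
# a continuous state variable nudged by each voltage pulse, crudely mimicking
# ion/vacancy migration.  Rates and bounds are invented for this sketch.
class ResistiveSwitch:
    def __init__(self, g=0.5, g_min=0.01, g_max=1.0, rate=0.05):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def pulse(self, v):
        # Positive pulses strengthen ("potentiate"), negative ones weaken.
        self.g += self.rate * v
        self.g = min(self.g_max, max(self.g_min, self.g))

    def current(self, v):
        return self.g * v          # Ohmic read at small bias

dev = ResistiveSwitch()
for _ in range(5):
    dev.pulse(+1.0)                # repeated use reinforces the connection
print(dev.g, dev.current(0.1))     # graded, history-dependent "on"-ness
```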

On top of this, there has been a lot of interest in using strongly correlated materials in such applications.  There are multiple examples of correlated materials (typically transition metal oxides) that undergo dramatic metal-insulator transitions as a function of temperature.  These materials offer a chance to emulate spiking: driving a current can switch such a material from the insulating to the metallic state, via local Joule heating or more nontrivial mechanisms, after which it reverts to the insulating state.  See the extensive discussion here.
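A common way to get spiking out of such a material is a relaxation oscillator: a threshold-switching element in parallel with a capacitor, driven by a current source.  Here is a crude simulation sketch; all component values are illustrative guesses, not taken from any real device:

```python
# Toy relaxation oscillator: a threshold-switching (metal-insulator) element
# in parallel with a capacitor, driven by a constant current source.
I_bias, C = 1e-4, 1e-9        # bias current (A), capacitance (F)
R_ins, R_met = 1e5, 1e2       # insulating / metallic resistances (ohm)
V_on, V_off = 5.0, 1.0        # switch-on / switch-off thresholds (V)

dt, v, metallic, spikes = 1e-8, 0.0, False, 0
for _ in range(50000):
    R = R_met if metallic else R_ins
    v += (dt / C) * (I_bias - v / R)   # charge balance on the capacitor
    if not metallic and v >= V_on:
        metallic = True                # e.g. Joule-heating-driven transition
        spikes += 1
    elif metallic and v <= V_off:
        metallic = False               # device relaxes back to the insulator
print(spikes, "spikes")                # periodic spiking from one element
```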

Really implementing all of this at scale is not simple.  The human brain involves something like 100,000,000,000 neurons, each connected to thousands of others, and the connections run in three dimensions.  Getting large numbers of effective solid-state neurons with high connectivity via traditional 2D planar semiconductor-style fab (basically necessary if one wants many millions of neurons) is not easy, particularly if it requires adapting processing techniques to accommodate new classes of materials.
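Some back-of-the-envelope arithmetic makes the scale of the problem clear (the ~10^4 synapses-per-neuron figure is a commonly quoted estimate):

```python
# Rough scale of the connectivity problem.
neurons = 1e11                 # ~number of neurons in a human brain
synapses_per_neuron = 1e4      # commonly quoted estimate
print(f"~{neurons * synapses_per_neuron:.0e} biological connections")

# An all-to-all planar crossbar of N artificial neurons needs N**2 devices:
for n in (1e3, 1e6):
    print(f"{n:.0e} neurons -> {n**2:.0e} crosspoints")
```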

If you're interested in this and how materials physics can play a role, check out this DOE report and this recent review article.

3 comments:

Anonymous said...

Some comments.

1) One way neurons work is by "firing" once the sum of the signals reaching their synapses exceeds a certain threshold. We could try to create a device that models this behavior.
2) In today's CMOS-based neural networks, the core calculations are big matrix multiplications, implemented in multiply-accumulate ("MAC") hardware. One idea for making this operation more efficient is to implement it with analog devices rather than digital ones: the analog circuit computes a weighted sum using a resistive network, as in the sketch below. Some would call these "neuromorphic" devices, although they do not really reflect how our neurons work.
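A minimal sketch of that analog MAC idea (values are illustrative): store the weights as conductances G, encode the inputs as voltages V, and Kirchhoff's current law sums the products on each column for free, I_j = sum_i G[i][j] * V[i]:

```python
import numpy as np

# Analog matrix-vector multiply on a resistive crossbar: weights live as
# conductances, inputs arrive as voltages, and each column current is the
# weighted sum - the same math a digital MAC array computes step by step.
rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductance matrix (siemens)
V = rng.uniform(0.0, 0.2, size=4)         # input voltages (volts)

I = V @ G                                 # column currents = weighted sums
print(I)
```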

Finally, we do not *really* know how the brain works, or how neurons cooperate to create memories and generate higher brain functions. Backpropagation, one of the basic operations used in today's neural chips, isn't even used by real neurons. We could even think of individual neurons as mini-CPUs, given the computational capability of individual cells. In that sense, mimicking a real neuron with a transistor-like "neuromorphic" device is a bit vain, at least in terms of hoping to create an artificial (human) brain. But that doesn't mean we won't make anything useful out of neuromorphic networks!

Pizza Perusing Physicist said...

Thanks for this post (and for the shoutout!).

This is a very interesting concept, especially in light of the rise of deep learning applications. The more we can find ways to make these algorithms cheaper, the more broadly the potential benefits of deep learning can be spread, particularly in fields like medicine.

In reading up a bit further, including some of the documents you linked to, one phrase I came across multiple times was 'Quantum Materials for Energy-Efficient Neuromorphic Computing'. The impression I get is that the authors seem to believe that the hardware necessary for a practical neuromorphic computer can only be achieved with 'quantum' materials underlying that hardware. I wonder what your opinion on this term and phrase is. Is this just a buzzword? Or are there inherent limits to the energy efficiency of neuromorphic devices that can only be overcome using inherently quantum-mechanical properties like entanglement/superposition?

Douglas Natelson said...

PPP, I don't speak from any kind of authority on this, but I think that phrase is not meant to be about entanglement and superposition as some inherently critical part of neuromorphic computing. (Personally, I don't think biological brains work in a way that requires entanglement or superposition, except in the sense that chemical reactions and molecular structure do, so that biases me.) Rather, I think "quantum materials" here is mostly shorthand for the kinds of correlated materials that can have transitions between states with very different electronic properties (i.e., Mott insulators and metal-insulator transitions). Also, "quantum materials" seems to be taking on broad usage to mean systems beyond standard CMOS and III-V semiconductors - I've heard people call graphene and TMDs quantum materials, for example.