Saturday, May 18, 2024

Power and computing

The Wall Street Journal last week had an article (sorry about the paywall) titled "There’s Not Enough Power for America’s High-Tech Ambitions", about how there is enormous demand for more data centers (think Amazon Web Services and the like), and electricity production can't readily keep up.  I've written about this before, and this is part of the motivation for programs like FuSE (NSF's Future of Semiconductors call).  It seems that we are going to be faced with a choice: slow down the growth of computing demand (which seems unlikely, particularly with the rise of AI-related computing, to say nothing of cryptocurrencies); develop massive new electrical generating capacity (much as I like nuclear power, it's hard for me to believe that small modular reactors will really be installed at scale at data centers); develop approaches to computing that are far more energy efficient; or some combination of these.  

The standard computing architecture that's been employed since the 1940s is attributed to von Neumann.  Binary numbers (1, 0) are represented by two different voltage levels (say some \(V\) for a 1 and \(V \approx 0\) for a 0); memory functions and logical operations happen in two different places (e.g., your DRAM and your CPU), with information shuttled back and forth as needed.  The key ingredient in conventional computers is the field-effect transistor (FET), a voltage-activated switch, in which a third (gate) electrode can switch the current flow between a source electrode and a drain electrode.  

The idea that we should try to lower the power consumption of computing hardware is far from new.  Indeed, NSF ran a science and technology center at Berkeley for a decade devoted to exploring more energy-efficient approaches.  The simplest approach, as Moore's Law cooked along in the 1970s, 80s, and 90s, was to steadily reduce the magnitude of the operating voltages on chips.  Very roughly speaking, power consumption goes as \(V^{2}\):  the losses in the wiring and transistors scale like \(I \cdot V\), and the losses in the capacitors that are part of the transistors scale like some fraction of the stored energy, which also goes as \(V^{2}\).  For FETs to still work as they are scaled down, one wants to keep the gated charge density the same when switching, meaning that the capacitance per unit area has to go up as \(V\) drops, which in turn means reducing the thickness of the gate dielectric layer.  This went on for a while with SiO2 as the insulator, until eventually in the 2000s the switch was made to higher dielectric constant ("high-\(k\)") materials because the SiO2 simply could not be made any thinner.  Since the 1970s, the operating voltage \(V\) has fallen from 5 V to around 1 V.  There are also clever schemes now to vary the voltage dynamically.  For example, one might be willing to live with higher error rates in the least significant bits of some calculations (like video or audio playback) if it means lower power consumption.  With conventional architectures, voltage scaling has been taken about as far as it can go.
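
To put very rough numbers on that scaling (a back-of-the-envelope estimate, not a statement about any particular chip): if the energy dissipated per switching event goes like \(C V^{2}\) for some fixed capacitance \(C\), then

\[ \frac{E_{\mathrm{switch}}(5\ \mathrm{V})}{E_{\mathrm{switch}}(1\ \mathrm{V})} = \left(\frac{5\ \mathrm{V}}{1\ \mathrm{V}}\right)^{2} = 25, \]

so the historical drop from 5 V to 1 V bought roughly a factor of 25 in switching energy, all else being equal.  That "all else being equal" is doing a lot of work, of course, since capacitances and clock speeds have changed enormously over the same period.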

Way back in 2006, I went to a conference and Eli Yablonovitch talked at me over dinner about how we needed to be thinking about far lower voltage operations.  Basically, his argument was that if we are using voltages that are far greater than the thermal voltage noise in our wires and devices, we are wasting energy.  With conventional transistors, though, we're kind of stuck because of issues like subthreshold swing.  
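
To put a number on "thermal voltage" (this is the standard textbook estimate, not anything specific to Yablonovitch's argument): at room temperature,

\[ \frac{k_{\mathrm{B}}T}{e} \approx 26\ \mathrm{mV}, \qquad S_{\mathrm{min}} = \frac{k_{\mathrm{B}}T}{e}\,\ln 10 \approx 60\ \mathrm{mV/decade}, \]

where \(S_{\mathrm{min}}\) is the best possible subthreshold swing for a conventional FET: because the carriers are thermally distributed, the off-state current can fall by at most a factor of ten for every ~60 mV of gate voltage.  That is why you can't just keep pushing the supply voltage down toward the thermal scale and expect the transistors to turn off cleanly.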

So what are the options?  There are many ideas out there. 
  • Change materials.  There are materials with metal-insulator transitions, for example, such that it might be possible to trigger dramatic changes in conduction (for switching purposes) with small stimuli, evading the device physics behind the subthreshold swing limit mentioned above.  
  • Change architectures.  Having memory and logic physically separated isn't the only way to do digital computing.  The idea of "logic-in-memory" computing goes back to before I was born.  
  • Radically change architectures.  As I've written before, there is great interest in neuromorphic computing, trying to make devices with connectivity and function designed to mimic the way neurons work in biological brains.  This would likely mean analog rather than digital logic and memory, complex history-dependent responses, and vastly improved connectivity.  As was reported last week in Science, 1 cubic millimeter of brain tissue contains about 57,000 cells and 150,000,000 synapses (see the quick estimate just after this list).  Trying to duplicate that level of 3D integration at scale is going to be very hard.  The approach of just making something that starts with rich but uncontrolled connectivity and training it somehow (e.g., this idea from 2002) may reappear.
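
As a quick sense of scale from those reported numbers (just simple division, nothing from the paper beyond the two figures above):

\[ \frac{1.5 \times 10^{8}\ \mathrm{synapses}}{5.7 \times 10^{4}\ \mathrm{cells}} \approx 2600\ \mathrm{synapses\ per\ cell}, \]

i.e., thousands of connections per element, packed at a volumetric density of roughly \(10^{8}\) synapses per cubic millimeter.  Conventional planar chips, where a logic gate typically connects to only a handful of others, are nowhere close to that kind of connectivity.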

A huge driving constraint on all of this is economics.  We are not going to decide that computing is so important that we will sacrifice refrigeration, for example; basic societal needs will limit what fraction of total generating capacity we devote to computing, and that includes concerns about the impact of power generation on climate.  Likewise, switching materials or architectures is going to be very expensive, at least initially, and is unlikely to happen quickly.  It will be interesting to see where we are in another decade....

Tuesday, May 07, 2024

Wind-up nanotechnology

When I was a kid, I used to take allowance money and occasionally buy rubber-band-powered balsa wood airplanes at a local store.  Maybe you've seen these.  You wind up the rubber band, which stretches the elastomer and stores energy in the elastic strain of the polymer, as in Hooke's Law (though I suspect the rubber band goes well beyond the linear regime when it's really wound up, because of the higher-order twisting that happens).  Rhett Allain wrote about how well you can store energy like this.  It turns out that the stored energy per mass of the rubber band can get pretty substantial. 
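
A crude estimate of that specific energy (the stress, strain, and density below are illustrative guesses on my part, and real rubber is quite nonlinear at large stretch): treating the band as a linear spring stretched to strain \(\epsilon\) at peak stress \(\sigma\), the stored energy per mass is about

\[ \frac{U}{m} \sim \frac{\tfrac{1}{2}\,\sigma\,\epsilon}{\rho} \sim \frac{\tfrac{1}{2}\,(2 \times 10^{6}\ \mathrm{Pa})(3)}{10^{3}\ \mathrm{kg/m^{3}}} \approx 3\ \mathrm{kJ/kg}, \]

a few kJ/kg, which is tiny compared with batteries but quite respectable for a toy you wind with your fingers.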

Carbon nanotubes are among the strongest and stiffest elastic materials out there.  A bit over a decade ago, a group at Michigan State did a serious theoretical analysis of how much energy you could store in a twisted yarn made from single-walled carbon nanotubes.  They found that the specific energy storage could get as large as several MJ/kg, as much as four times what you get with lithium-ion batteries!
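
For calibration (just a unit conversion, using a typical quoted lithium-ion cell figure of roughly 250 Wh/kg):

\[ 250\ \mathrm{Wh/kg} \times 3600\ \mathrm{s/h} = 9 \times 10^{5}\ \mathrm{J/kg} \approx 0.9\ \mathrm{MJ/kg}, \]

so "several MJ/kg" in a twisted nanotube yarn really would be a few times the specific energy of a good battery cell, with the caveat that the yarn stores mechanical work rather than electrochemical energy, and getting that energy in and out efficiently is a separate question.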

Now, a group in Japan has actually put this to the test, in this Nature Nano paper.  They get up to 2.1 MJ/kg, above the lithium-ion battery mark, and the specific power (when they release the energy), at about \(10^{6}\) W/kg, is not too far from that of "non-cyclable" energy storage media like TNT.  Very cool!
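
One way to get a feel for that specific power (simple division using the two reported numbers, not a detail from the paper): releasing the full stored energy at that rate corresponds to

\[ t_{\mathrm{release}} \sim \frac{2.1 \times 10^{6}\ \mathrm{J/kg}}{10^{6}\ \mathrm{W/kg}} \approx 2\ \mathrm{s}, \]

a discharge on the order of seconds, which is extremely fast for a cyclable mechanical store even if it is still many orders of magnitude slower than an actual detonation.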