Saturday, June 22, 2024

What is turbulence? (And why are helicopters never quiet?)

Fluid mechanics is often left out of the undergraduate physics curriculum.  This is a shame, as it's interesting and directly relevant to many broad topics (atmospheric science, climate, plasma physics, parts of astrophysics).  Fluid mechanics is a great example of how comparatively simple underlying equations can have absurdly complex solutions, and that's probably part of the issue.  The space of solutions can be mapped out using dimensionless ratios, and two of the most important are the Mach number (\(\mathrm{Ma} \equiv u/c_{s}\), where \(u\) is the speed of some flow or object, and \(c_{s}\) is the speed of sound) and the Reynolds number (\(\mathrm{Re} \equiv \rho u d/\mu\), where \(\rho\) is the fluid's mass density, \(d\) is some characteristic length scale, and \(\mu\) is the (dynamic) viscosity of the fluid).

From Laurence Kedward, Wikimedia Commons

There is a nice physical interpretation of the Reynolds number.  It can be rewritten as \(\mathrm{Re} = (\rho u^{2})/(\mu u/d)\).  The numerator is (up to a factor of 2) the "dynamic pressure" of a fluid, the force per unit area that would be transferred to some object if a fluid of density \(\rho\) moving at speed \(u\) ran into the object and was brought to a halt.  This is in a sense the consequence of the inertia of the moving fluid, so this is sometimes called an inertial force.  The denominator, the viscosity multiplied by a velocity gradient, is the viscous shear stress (force per unit area) caused by the frictional drag of the fluid.  So, the Reynolds number is a ratio of inertial forces to viscous forces.

When \(\mathrm{Re}\ll 1\), viscous forces dominate.  That means that viscous friction between adjacent layers of fluid tends to smooth out velocity gradients, and the velocity field \(\mathbf{u}(\mathbf{r},t) \) tends to be simple and often analytically solvable.  This regime is called laminar flow.  Since \(d\) is just some characteristic size scale, for reasonable values of density and viscosity for, say, water, microfluidic devices (with their micron-scale channels) tend to live in the laminar regime.
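
For concreteness, here's a minimal numerical sketch of that point; the property values and flow scales below are just rough, representative numbers chosen for illustration.

```python
# Rough Reynolds-number estimates for water at two very different scales.
# Property values and flow speeds are representative numbers, chosen only for illustration.

def reynolds(rho, u, d, mu):
    """Re = rho * u * d / mu (dimensionless)."""
    return rho * u * d / mu

rho_water = 1000.0   # kg/m^3
mu_water = 1.0e-3    # Pa*s (water near room temperature)

# Microfluidic channel: ~100 micron scale, ~1 mm/s flow
re_micro = reynolds(rho_water, 1.0e-3, 100e-6, mu_water)

# A person swimming: ~1 m scale, ~1 m/s
re_swimmer = reynolds(rho_water, 1.0, 1.0, mu_water)

print(f"Microfluidic channel: Re ~ {re_micro:.1e}")   # ~1e-1 -> laminar
print(f"Swimmer:              Re ~ {re_swimmer:.1e}") # ~1e6  -> turbulent
```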

When \(\mathrm{Re}\gg 1\), frictional effects are comparatively unimportant, and the fluid "pushes" its way along.  The result is a situation where the velocity field is unstable to small perturbations, and there is a transition to turbulent flow.  The local velocity field has big, chaotic variations as a function of space and time.  While the microscopic details of \(\mathbf{u}(\mathbf{r},t)\) are often not predictable, on a statistical level we can get pretty far since mass conservation and momentum conservation can be applied to a region of space (the control volume or Eulerian approach).

Turbulent flow involves a cascade of energy down through eddies at ever smaller length scales, until viscosity finally dissipates that energy at tiny scales.   This right here is why helicopters are never quiet.  Even if you started with a completely uniform downward flow of air below the rotor (enough of a momentum flux to support the weight of the helicopter), the air would quickly transition to turbulence, and there would be pressure fluctuations over a huge range of timescales that would translate into acoustic noise.  You might not be able to hear the turbine engine directly from a thousand feet away, but you can hear the resulting sound from the turbulent airflow.

If you're interested in fluid mechanics, this site is fantastic, and their links page has some great stuff.

Friday, June 14, 2024

Artificial intelligence, extrapolation, and physical constraints

Disclaimer and disclosure:  The "arrogant physicist declaims about some topic far outside their domain expertise (like climate change or epidemiology or economics or geopolitics or....) like everyone actually in the field is clueless" trope is very overplayed at this point, and I've generally tried to avoid doing this.  Still, I read something related to AI earlier this week, and I wanted to write about it.  So, fair warning: I am not an expert about AI, machine learning, or computer science, but I wanted to pass this along and share some thoughts.  Feel even more free than usual to skip this and/or dismiss my views.

This is the series of essays, and here is a link to the whole thing in one pdf file.  The author works for OpenAI.  I learned about this from Scott Aaronson's blog (this post), which is always informative.

In a nutshell, the author says that he is one of a quite small group of people who really know the status of AI development; that we are within a couple of years of the development of artificial general intelligence; that this will lead essentially to an AI singularity as AGI writes ever-smarter versions of AGI; that the world at large is sleepwalking toward this and its inherent risks; and that it's essential that Western democracies have the lead here, because it would be an unmitigated disaster if authoritarians in general and the Chinese government in particular should take the lead - if one believes in extrapolating exponential progressions, then losing the initiative rapidly translates into being hopelessly behind forever.

I am greatly skeptical of many aspects of this (in part because of the dangers of extrapolating exponentials), but it is certainly thought-provoking.  

I doubt that we are two years away from AGI.  Indeed, I wonder if our current approaches are somewhat analogous to Ptolemaic epicycles.  It is possible in principle to construct extraordinarily complex epicyclic systems that can reproduce the apparent motions of the planets to high precision, but actual Newtonian orbital mechanics is radically more compact, efficient, and conceptually unified.  Current implementations of AI systems use enormous numbers of circuit elements that consume tens to hundreds of MW of electricity.  In contrast, your brain hosts a human-level intelligence, consumes about 20 W, and masses about 1.4 kg.  I just wonder if our current architectural approach is not the optimal one toward AGI.  (Of course, a lot of people are researching neuromorphic computing, so maybe that resolves itself.)

The author also seems to assume that whatever physical resources are needed for rapid exponential progress in AI will become available.  Huge numbers of GPUs will be made.  Electrical generating capacity and all the associated resources will be there.  That's not obvious to me at all.  You can't just declare that vastly more generating capacity will be available in three years - siting and constructing GW-scale power plants alone takes years.  TSMC is about as highly motivated as possible to build their new facilities in Arizona, and the first one has taken three years so far, with the second one likely delayed until 2028.  Actual construction and manufacturing at scale cannot be trivially waved away.

I do think that AI research has the potential to be enormously disruptive.  It also seems that if a big corporation or nation-state thought that it could gain a commanding advantage by deploying something, even if it's half-baked and the long-term consequences are unknown, it would 100% do it.  I'd be shocked if the large financial companies aren't already doing this in some form.  I also agree that, broadly speaking, as a species we are unprepared for the consequences of this research, good and bad.  Hopefully we will stumble forward in a way where we don't do insanely stupid things (like putting the WOPR in charge of the missiles without humans in the loop).

Ok, enough of my uninformed digression.  Back to physics soon.

Update:  this is a fun, contrasting view by someone who definitely disagrees with Aschenbrenner about the imminence of AGI.

Sunday, June 02, 2024

Materials families: Halide perovskites

Looking back, I realized that I haven't written much about halide perovskites, which is quite an oversight given how much research impact they're having.  I'm not an expert, and there are multiple extensive review articles out there (e.g. here, here, here, here, here), so this will only be a very broad strokes intro, trying to give some context to why these systems are important, remarkable, and may have plenty of additional tricks to play.

From ACS Energy Lett. 5, 2, 604–610 (2020).

Perovskites are a class of crystals based on a structural motif (an example is ABX3, originally identified in the mineral CaTiO3, though there are others) involving octahedrally coordinated metal atoms.  As shown in the figure, each B atom sits at the center of an octahedron defined by six X atoms.  There are many flavors of purely inorganic perovskites, including the copper oxide superconductors and various piezo- and ferroelectric oxides.

The big excitement in recent years, though, involves the halide perovskites, in which the X atom is Cl, Br, or I, and the B atom is most often Pb or Sn.  These materials are quite ionic, in the sense that the B atom is in the 2+ oxidation state, the X atom is in the 1- oxidation state, and whatever is in the A site is in the 1+ oxidation state (whether it's Cs+ or a molecular ion like methylammonium (MA = [CH3NH3]+) or formamidinium (FA = [HC(NH2)2]+)).
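
Just as a bookkeeping illustration of that charge counting (using only the nominal oxidation states quoted above), the ABX3 formula unit comes out charge neutral:

```python
# Charge-neutrality bookkeeping for an ABX3 halide perovskite, using the nominal
# oxidation states quoted above: A site 1+, B site 2+, X site 1-.
q_A, q_B, q_X = +1, +2, -1
net_charge = q_A + q_B + 3 * q_X   # one A, one B, three X per formula unit
print(f"Net charge per ABX3 formula unit: {net_charge}")  # 0, as it must be

# The same bookkeeping holds whether A is Cs+ or a molecular cation such as
# methylammonium (MA = [CH3NH3]+) or formamidinium (FA = [HC(NH2)2]+).
```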

From Chem. Rev. 123, 13, 8154–8231 (2023).

There is an enormous zoo of materials based on these building blocks, made even richer by the capability of organic chemists to toss in various small organic, covalent ligands to alter spacings between the components (and hence electronic overlap and bandwidths), tilt or rotate the octahedra, add in chirality, etc.  Forms that are 3D, effectively 2D (layers of corner-sharing octahedra), 1D, and "0D" (with isolated octahedra) exist.  Remarkably:

  • These materials can be processed in solution form, and it's possible to cast highly crystalline films.
  • Despite the highly ionic character of much of the bonding, many of these materials are semiconductors, with bandgaps in the visible.
  • Despite the differences in what chemists and semiconductor physicists usually mean by "pure", these materials can be sufficiently clean and free of the wrong kinds of defects that it is possible to make solar cells with efficiencies greater than 26% (!) (and very bright light emitting diodes).  
These features make the halide perovskites extremely attractive for possible applications, especially in photovoltaics and potentially light sources (even quantum emitters).  They are seemingly much more forgiving than most organic semiconductors, in terms of carrier mobility, tolerance of disorder, and a high dielectric polarizability (which gives a lower exciton binding energy and greater ease of charge extraction).  The halide perovskites do face some serious challenges (chemical stability under UV illumination and air/moisture exposure; the unpleasantness of Pb), but their promise is enormous.

Sometimes nature seems to provide materials with particularly convenient properties.  Examples include water and the fact that ordinary ice is less dense than the liquid form; silicon and its outstanding oxide; gallium arsenide and the fact that it can be grown with great purity and stoichiometry even in an extremely As-rich environment; I'm sure commenters can provide many more.  The halide perovskites seem to be another addition to this catalog, and as material properties continue to improve, condensed matter physicists are going to be looking for interesting things to do in these systems.

Wednesday, May 29, 2024

Interesting reading - resonators, quantum geometry w/ phonons, and fractional quantum anomalous Hall

 Real life continues to be busy, but I wanted to point out three recent articles that I found interesting:

  • Mechanical resonators are a topic with a long history, going back to the first bells and the tuning fork.  I've written about micromachined resonators before, and the quest to get very high quality resonators.  This recent publication is very impressive.  The authors have succeeded in fabricating suspended Si3N4 resonators that are 70 nm thick but 3 cm (!!) long.  In terms of aspect ratio, that'd be like a diving board 3 cm thick and about 12.9 km long (a quick check of that scaling is sketched just after this list).  By varying the shape of the suspended "string" along its length, they create phononic band gaps, so that some vibrations are blocked from propagating along the resonator, leading to reduced losses.  They are able to make such resonators that work at acoustic frequencies at room temperature (in vacuum) and have quality factors as high as \(6.5 \times 10^{9}\), which is amazing.
  • Speaking of vibrations, this paper in Nature Physics is a thought-provoking piece of work.  Electrons in solids are coupled to lattice vibrations (phonons), and that's not particularly surprising.  The electronic band structure depends on how the atoms are stacked in space, and a vibration like a phonon is a particular perturbation of that atomic arrangement.  The new insight here is to look at what is being called quantum geometry and how that affects the electron-phonon coupling.  As I wrote here, electrons in crystals can be described by Bloch waves which include a function \(u_{\mathbf{k}}(\mathbf{r})\) that has the real-space periodicity of the crystal lattice.  How that function varies over \(\mathbf{k}\)-space is called quantum geometry and has all kinds of consequences (e.g., here and here).  It turns out that this piece of the band structure can have a big and sometimes dominant influence on the coupling between mobile electrons and phonons.
  • Speaking of quantum geometry and all that, here is a nice article in Quanta about the observation of the fractional quantum anomalous Hall effect in different 2D material systems.  In the "ordinary" fractional quantum Hall effect, topology and interactions combine at low temperatures and (usually) high magnetic fields in clean 2D materials to give unusual electronic states with, e.g., fractionally charged low energy excitations.  Recent exciting advances have found related fractional Chern insulator states in various 2D materials at zero magnetic field.  The article does a nice job capturing the excitement of these recent works.
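
As promised above, here's the quick arithmetic behind the diving-board comparison for those Si3N4 resonators; it's nothing more than scaling the quoted 70 nm and 3 cm dimensions.

```python
# Aspect-ratio check for the suspended Si3N4 resonators described above.
thickness = 70e-9    # m (70 nm)
length = 0.03        # m (3 cm)
aspect_ratio = length / thickness
print(f"Aspect ratio: {aspect_ratio:.2e}")    # ~4.3e5

# A "diving board" 3 cm thick with the same aspect ratio:
board_thickness = 0.03   # m
board_length_km = aspect_ratio * board_thickness / 1000.0
print(f"Equivalent diving board length: {board_length_km:.1f} km")  # ~12.9 km
```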

Saturday, May 18, 2024

Power and computing

The Wall Street Journal last week had an article (sorry about the paywall) titled "There’s Not Enough Power for America’s High-Tech Ambitions", about how there is enormous demand for more data centers (think Amazon Web Services and the like), and electricity production can't readily keep up.  I've written about this before, and this is part of the motivation for programs like FuSE (NSF's Future of Semiconductors call).  It seems that we are going to be faced with a choice: slow down the growth of computing demand (which seems unlikely, particularly with the rise of AI-related computing, to say nothing of cryptocurrencies); develop massive new electrical generating capacity (much as I like nuclear power, it's hard for me to believe that small modular reactors will really be installed at scale at data centers); develop approaches to computing that are far more energy efficient; or some combination of these.

The standard computing architecture that's been employed since the 1940s is attributed to von Neumann.  Binary digits (1, 0) are represented by two different voltage levels (say some \(V\) for a 1 and \(V \approx 0\) for a 0); memory functions and logical operations happen in two different places (e.g., your DRAM and your CPU), with information shuttled back and forth as needed.  The key ingredient in conventional computers is the field-effect transistor (FET), a voltage-activated switch, in which a third (gate) electrode can switch the current flow between a source electrode and a drain electrode.

The idea that we should try to lower the power consumption of computing hardware is far from new.  Indeed, NSF ran a science and technology center for a decade at Berkeley about exploring more energy-efficient approaches.  The simplest approach, as Moore's Law cooked along in the 1970s, 80s, and 90s, was to steadily reduce the magnitude of the operating voltages on chips.  Very roughly speaking, power consumption goes as \(V^{2}\).  The losses in the wiring and transistors scale like \(I \cdot V\); the losses in the capacitors that are parts of the transistors scale like some fraction of the stored energy, which also goes like \(V^{2}\).  For FETs to still work, one wants to keep the same amount of gated charge density when switching, meaning that the capacitance per unit area has to go up as \(V\) comes down, so dropping \(V\) means reducing the thickness of the gate dielectric layer.  This went on for a while with SiO2 as the insulator, and eventually in the late 2000s the switch was made to a higher dielectric constant material because SiO2 could not be made any thinner.  Since the 1970s, the operating voltage \(V\) has fallen from 5 V to around 1 V.  There are also clever schemes now to vary the voltage dynamically.  For example, one might be willing to live with higher error rates in the least significant bits of some calculations (like video or audio playback) if it means lower power consumption.  With conventional architectures, voltage scaling has been taken about as far as it can go.
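
To make the \(V^{2}\) point concrete, here's a minimal sketch using the standard expression for CMOS dynamic (switching) power, \(P \approx \alpha C V^{2} f\); the activity factor, capacitance, and clock frequency below are placeholder values I've assumed purely to show the scaling.

```python
# Sketch of CMOS dynamic (switching) power: P = alpha * C * V^2 * f.
# The parameter values are placeholders, assumed only to illustrate the V^2 scaling.

def dynamic_power(alpha, C, V, f):
    """Average switching power: activity factor * switched capacitance * V^2 * clock frequency."""
    return alpha * C * V**2 * f

alpha = 0.1    # fraction of the capacitance switched per cycle (assumed)
C = 1.0e-9     # total switched capacitance, farads (assumed)
f = 1.0e9      # clock frequency, 1 GHz (assumed)

p_5V = dynamic_power(alpha, C, 5.0, f)
p_1V = dynamic_power(alpha, C, 1.0, f)
print(f"P at 5 V: {p_5V:.2f} W,  P at 1 V: {p_1V:.2f} W")
print(f"Reduction from voltage scaling alone: {p_5V / p_1V:.0f}x")  # (5/1)^2 = 25x
```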

Way back in 2006, I went to a conference and Eli Yablonovitch talked at me over dinner about how we needed to be thinking about far lower voltage operations.  Basically, his argument was that if we are using voltages that are far greater than the thermal voltage noise in our wires and devices, we are wasting energy.  With conventional transistors, though, we're kind of stuck because of issues like subthreshold swing.  
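
For context on why subthreshold swing is the sticking point: in a conventional FET, the current below threshold can fall off no faster than about 60 mV per decade at room temperature, a limit set entirely by \(k_{B}T/q\).  Here's a quick numerical check of that thermal-voltage scale (standard physics, nothing specific to any particular device):

```python
import math

# Thermal voltage and the room-temperature "Boltzmann limit" on subthreshold swing.
k_B = 1.380649e-23    # J/K
q = 1.602176634e-19   # C
T = 300.0             # K

V_T = k_B * T / q                 # thermal voltage
swing = math.log(10) * V_T        # minimum subthreshold swing per decade of current

print(f"Thermal voltage kT/q ~ {1000*V_T:.0f} mV")               # ~26 mV
print(f"Subthreshold swing limit ~ {1000*swing:.0f} mV/decade")  # ~60 mV/decade
# Turning a conventional FET off by several orders of magnitude in current therefore
# costs several hundred mV of gate swing, which is why ~1 V operation is hard to beat.
```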

So what are the options?  There are many ideas out there. 
  • Change materials.  There are materials that have metal-insulator transitions, for example, such that it might be possible to trigger dramatic changes in conduction (for switching purposes) with small stimuli, evading the device physics responsible for the subthreshold slope argument.  
  • Change architectures.  Having memory and logic physically separated isn't the only way to do digital computing.  The idea of "logic-in-memory" computing goes back to before I was born.  
  • Radically change architectures.  As I've written before, there is great interest in neuromorphic computing, trying to make devices with connectivity and function designed to mimic the way neurons work in biological brains.  This would likely mean analog rather than digital logic and memory, complex history-dependent responses, and trying to get vastly improved connectivity.  As was published last week in Science, 1 cubic millimeter of brain tissue contains 57,000 cells and 150,000,000 synapses.  Trying to duplicate that level of 3D integration at scale is going to be very hard.  The approach of just making something that starts with crazy but uncontrolled connectivity and training it somehow (e.g., this idea from 2002) may reappear.
  • Update: A user on Twitter pointed out that the time may finally be right for superconducting electronics.  Here is a recent article in IEEE Spectrum about this, and here is a YouTube video with a pretty good intro.  The technology of interest is "rapid single-flux quantum" (RSFQ) logic, where information is stored in circulating current loops in devices based on Josephson junctions.  The compelling aspects include intrinsically ultralow power dissipation because of superconductivity, and intrinsically fast timescales (clock speeds of hundreds of GHz) because of the frequency scales associated with the Josephson effect.  I'm a bit skeptical, because these ideas have been around for 30+ years and the integration challenges are still significant, but maybe now the economic motivation is finally sufficient.
A huge driving constraint on everything is economics.  We are not going to decide that computing is so important that we will sacrifice refrigeration, for example; basic societal needs will limit what fraction of total generating capacity we devote to computing, and that includes concerns about impact of power generation on climate.  Likewise, switching materials or architectures is going to be very expensive at least initially, and is unlikely to be quick.  It will be interesting to see where we are in another decade.... 

Tuesday, May 07, 2024

Wind-up nanotechnology

When I was a kid, I used to take allowance money and occasionally buy rubber-band-powered balsa wood airplanes at a local store.  Maybe you've seen these.  You wind up the rubber band, which stretches the elastomer and stores energy in the elastic strain of the polymer, as in Hooke's Law (though I suspect the rubber band goes well beyond the linear regime when it's really wound up, because of the higher-order twisting that happens).  Rhett Allain wrote about how well you can store energy like this.  It turns out that the stored energy per mass of the rubber band can get pretty substantial.

Carbon nanotubes are one of the most elastically strong materials out there.  A bit over a decade ago, a group at Michigan State did a serious theoretical analysis of how much energy you could store in a twisted yarn made from single-walled carbon nanotubes.  They found that the specific energy storage could get as large as several MJ/kg, as much as four times what you get with lithium ion batteries!

Now, a group in Japan has actually put this to the test, in this Nature Nano paper.  They get up to 2.1 MJ/kg, over the lithium ion battery mark, and the specific power when they release the energy, about \(10^{6}\) W/kg, is not too far away from that of "non-cyclable" energy storage media, like TNT.  Very cool!
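
For scale, here's the back-of-the-envelope unit conversion; the lithium-ion figure below is a rough representative value I've assumed (on the order of 250 Wh/kg), not a number from the paper.

```python
# Comparing specific energies: twisted CNT yarn vs. a rough lithium-ion figure.
# The lithium-ion value (~250 Wh/kg) is an assumed, representative number.
WH_PER_KG_TO_MJ_PER_KG = 3600.0 / 1.0e6

li_ion = 250 * WH_PER_KG_TO_MJ_PER_KG   # ~0.9 MJ/kg
cnt_yarn = 2.1                          # MJ/kg, the value reported in the paper

print(f"Lithium ion (assumed ~250 Wh/kg): {li_ion:.2f} MJ/kg")
print(f"Twisted CNT yarn:                 {cnt_yarn:.1f} MJ/kg")
print(f"Ratio: {cnt_yarn / li_ion:.1f}x")   # ~2.3x
```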

Monday, April 29, 2024

Moiré and making superlattices

One of the biggest condensed matter trends in recent years has been the stacking of 2D materials and the development of moiré lattices.  The idea is, take a layer of 2D material and stack it either (1) on itself but with a twist angle, or (2) on another material with a slightly different lattice constant.  Because of interactions between the layers, the electrons in the material feel an effective potential energy with a spatial periodicity set by the resulting moiré pattern.  Twisted stacking of hexagonal lattice materials (like graphene or many of the transition metal dichalcogenides) results in a triangular moiré lattice with a moiré lattice constant that depends on twist angle.  Some of the most interesting physics in these systems seems to pop out when the moiré lattice constant is on the order of a few nm to 10 nm or so.  The upside of the moiré approach is that it can produce such an effective lattice over large areas with really good precision and uniformity (provided that the twist angle can really be controlled - see here and here, for example.)  You might imagine using lithography to make designer superlattices, but getting the necessary cleanliness and homogeneity at these very small length scales is very challenging.
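
For a sense of the numbers, here's a minimal sketch of the standard small-angle geometry for a twisted homobilayer: a layer with lattice constant \(a\), twisted by an angle \(\theta\), gives a moiré period \(\lambda \approx a/(2 \sin(\theta/2))\).  The graphene lattice constant below is the usual value, about 0.246 nm.

```python
import math

def moire_period(a_nm, theta_deg):
    """Moire lattice constant for a twisted homobilayer: a / (2 sin(theta/2))."""
    theta = math.radians(theta_deg)
    return a_nm / (2.0 * math.sin(theta / 2.0))

a_graphene = 0.246   # nm

for theta in (0.5, 1.1, 2.0, 5.0):
    print(f"theta = {theta:4.1f} deg -> moire period ~ {moire_period(a_graphene, theta):5.1f} nm")
# A ~1.1 degree twist gives ~13 nm, right in the few-nm-to-10-nm range where
# much of the interesting physics shows up.
```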

It's not surprising, then, that people are interested in somehow applying superlattice potentials to nearby monolayer systems.  Earlier this year, Nature Materials ran three papers published sequentially in one issue on this topic, and this is the accompanying News and Views article.

  • In one approach, a MoSe2/WS2 bilayer is made, and the charge in the bilayer is tuned so that the bilayer system is a Mott insulator, with charges localized exactly at the moiré lattice sites.  That results in an electrostatic potential varying on the moiré lattice scale, which can then influence a nearby monolayer; that monolayer then shows cool moiré/flat-band physics itself.
  • Closely related, investigators used a small-angle twisted bilayer of graphene.  That provides a moiré periodic dielectric environment for a nearby single layer of WSe2.  They can optically excite Rydberg excitons in the WSe2, excitons that are comparatively big and puffy and thus quite sensitive to their dielectric environment.  
  • Similarly, twisted bilayer WS2 can be used to apply a periodic Coulomb potential to a nearby bilayer of graphene, resulting in correlated insulating states in the graphene that otherwise wouldn't be there.

Clearly this is a growth industry.  Clever, creative ways to introduce highly ordered superlattice potentials on very small length scales, with symmetries other than triangular, would be very interesting.