Saturday, July 20, 2024

The physics of squeaky shoes

In these unsettling and trying times, I wanted to write about the physics of a challenge I'm facing in my professional life: super squeaky shoes.  When I wear a particularly comfortable pair of shoes at work and walk down certain hallways in my building (but not all of them), my shoes squeak very loudly with every step.  How and why does this happen, physically?

The shoes in question.

To understand this, we need to talk a bit about friction, the sideways interfacial force between two surfaces when one surface is sheared (or someone attempts to shear it) with respect to the other.  (Tribology is the study of friction, btw.)  In introductory physics we teach some (empirical) "laws" of friction, described in detail on the wikipedia page linked above as well as here:

  1.  For static friction (no actual sliding of the surfaces relative to each other), the frictional force \(F_{f} \le \mu_{s}N\), where \(\mu_{s}\) is the "coefficient of static friction" and \(N\) is the normal force (pushing the two surfaces together).  The force is directed in the plane and takes on the magnitude needed so that no sliding happens, up to its maximum value, at which point the surfaces start slipping relative to each other.
  2. For sliding or kinetic friction, \(F_{f} = \mu_{k}N\), where \(\mu_{k}\) is the coefficient of kinetic or sliding friction, and the force is directed in the plane to oppose the relative sliding motion.  The friction coefficients depend on the particular materials and their surface conditions.
  3. The friction forces are independent of the apparent contact area between the surfaces.  
  4. The kinetic friction force is independent of the relative sliding speed between the surfaces.
These "laws", especially (3) and (4), are truly weird once we know a bit more about physics, and I discuss this a little in my textbook.  The macroscopic friction force is emergent, meaning that it is a consequence of the materials being made up of many constituent particles interacting.  It's not a conservative force, in that energy dissipated through the sliding friction force doing work is "lost" from the macroscopic movement of the sliding objects and ends up in the microscopic vibrational motion (and electronic distributions, if the objects are metals).  See here for more discussion of friction laws.

Shoe squeaking happens because of what is called "stick-slip" motion.  When I put my weight on my right shoe, the rubber sole of the shoe deforms, and elastic forces (like a compressed spring) push the rubber to spread out, favoring sliding of the rubber at the rubber-floor interface.  At some point, the maximum local static friction force is exceeded and the rubber begins to slide relative to the floor.  That lets the rubber "uncompress" some, so that the spring-like elastic forces are reduced, and if they fall back below \(\mu_{s}N\), that bit of sole will stick on the surface again.  A similar situation is shown in this model from Wolfram, looking at a mass (attached to an anchored spring) interacting with a conveyor belt.  If this start/stop cyclic motion happens at acoustic frequencies (in the kHz range), it sounds like a squeak, because the start-stop motion excites sound waves in the air (and the solid surfaces).  This stick-slip phenomenon is also why brakes on cars and bikes squeal, why hinges on doors in spooky houses creak, and why that one board in your floor makes that weird noise.  It's also used in various piezoelectric actuators.
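In the spirit of that Wolfram mass-on-a-belt model (a sketch of the same idea, not a reproduction of it), here is a minimal stick-slip simulation; all of the parameter values are made up for illustration.

```python
import numpy as np

# Block of mass m on a belt moving at v_belt, tied to a wall by a spring
# of stiffness k.  It "sticks" (rides the belt) until the spring force
# exceeds mu_s*N, "slips" against kinetic friction mu_k*N, then re-sticks.
# With mu_s > mu_k this cycles; at kHz rates such a cycle radiates sound.

m, g = 0.1, 9.81                # kg, m/s^2
k = 50.0                        # spring stiffness, N/m
mu_s, mu_k = 0.6, 0.4           # placeholder friction coefficients
v_belt = 0.05                   # belt speed, m/s
N = m * g                       # normal force, N
dt, t_max = 1e-5, 2.0           # time step and duration, s

x, v, stuck = 0.0, v_belt, True
history = []

t = 0.0
while t < t_max:
    spring = -k * x
    if stuck and abs(spring) > mu_s * N:
        stuck = False                        # static maximum exceeded: slip
    if stuck:
        v = v_belt                           # stick phase: move with belt
    else:
        rel = v - v_belt
        friction = -mu_k * N * np.sign(rel)  # opposes relative sliding
        v += (spring + friction) / m * dt
        # re-stick when the block catches the belt and statics can hold it
        if (v - v_belt) * rel < 0 and abs(spring) <= mu_s * N:
            v, stuck = v_belt, True
    x += v * dt
    history.append((t, x))
    t += dt
```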

Macroscopic friction emerges from a zillion microscopic interactions and is affected by the chemical makeup of the surfaces, their morphology and roughness, any adsorbed layers of moisture or contaminants (remember: every surface around you right now is coated in a few molecular layers of water and hydrocarbon contamination), and van der Waals forces, among other things.  The reason my shoes squeak in some hallways but not others has to do with how the floors have been cleaned.  I could stop the squeaking by altering the bottom surface of my soles, though I wouldn't want to use a lubricant so effective that it seriously lowers \(\mu_{s}\) and makes me slip.

Friction is another example of an emergent phenomenon that is everywhere around us, is of enormous technological and practical importance, and shows some remarkable universality of response.  This kind of emergence is at the heart of the physics of materials, and trying to predict friction and squeaky shoes starting from elementary particle physics is just not doable.


Sunday, July 14, 2024

Brief items - light-driven diamagnetism, nuclear recoil, spin transport in VO2

Real life continues to make itself felt in various ways this summer (and that's not even an allusion to political madness), but here are three papers (two from others and a self-indulgent plug for our work) you might find interesting.

  • There has been a lot of work in recent years, particularly by the group of Andrea Cavalleri, in which infrared light is used to pump particular vibrational modes in copper oxide superconductors (and other materials) (e.g. here).  There are long-standing correlations between the critical temperature for superconductivity, \(T_{c}\), and certain bond angles in the cuprates.  Broadly speaking, using time-resolved spectroscopy, measurements of the optical conductivity in these pumped systems show superconductor-like forms as a function of energy even well above the equilibrium \(T_{c}\), making it tempting to argue that the driven systems are showing nonequilibrium superconductivity.  At the same time, there has been a lot of interest in looking for other signatures, such as signs of the way superconductors expel magnetic flux through the famous Meissner effect.  In this recent result (arXiv here, Nature here), magneto-optic measurements in this same driven regime show signs of field build-up around the perimeter of the driven cuprate material in a magnetic field, as would be expected from Meissner-like flux expulsion.  I haven't had time to read this in detail, but it looks quite exciting.  
  • Optical trapping of nanoparticles is a very useful tool, and with modern techniques it is possible to measure the position and response of individual trapped particles to high precision (see here and here).  In this recent paper, the group of David Moore at Yale has been able to observe the recoil of such a particle due to the decay of a single atomic nucleus (which spits out an energetic alpha particle).  As an experimentalist, I find this extremely impressive, in that they are measuring the kick given to a nanoparticle a trillion times more massive than the ejected helium nucleus.  
  • From our group, we have published a lengthy study (arXiv here, Phys Rev B here) of local/longitudinal spin Seebeck response in VO2, a material with an insulating state that is thought to be magnetically inert.  This corroborates our earlier work, discussed here.  In brief, in ideal low-T VO2, the vanadium atoms are paired up into dimers, and the expectation is that the unpaired 3d electrons on those atoms form singlets with zero net angular momentum.  The resulting material would then not be magnetically interesting (though it could support triplet excitations called triplons).  Surprisingly, at low temperatures we find a robust spin Seebeck response, comparable to what is observed in ordered insulating magnets like yttrium iron garnet.  It seems to have the wrong sign to be from triplons, and it doesn't seem possible to explain the details using a purely interfacial model.  I think this is intriguing, and I hope other people take notice.
Hoping for more time to write as the summer progresses.  Suggestions for topics are always welcome, though I may not be able to get to everything.

Saturday, July 06, 2024

What is a Wigner crystal?

Last week I was at the every-2-years Gordon Research Conference on Correlated Electron Systems at lovely Mt. Holyoke.  It was very fun, but one key aspect of the culture of the GRCs is that attendees are not supposed to post about them on social media, thus encouraging presenters to show results that have not yet been published.  So, no round up from me, except to say that I think I learned a lot.

The topic of Wigner crystals came up, and I realized that (at least according to google) I have not really written about these, and now seems to be a good time.

First, let's talk about crystals in general.  If you bring together an ensemble of objects (let's assume they're identical for now) and throw in either some long-range attraction or an overall confining constraint, plus a repulsive interaction that is effective at short range, you tend to get formation of a crystal, if an object's kinetic energy is sufficiently small compared to the interactions.  A couple of my favorite examples of this are crystals from drought balls and bubble rafts.  As the kinetic energy (usually parametrized by a temperature when we're talking about atoms and molecules as the objects) is reduced, the system crystallizes, spontaneously breaking continuous translational and rotational symmetry and leading to configurations with discrete translational and rotational symmetry.  With charged colloidal particles as building blocks, the attractive interaction is electrostatic, because the particles carry different charges, and they have the usual "hard core repulsion".  The result can be all kinds of cool colloidal crystal structures.

In 1934, Eugene Wigner considered whether electrons themselves could form a crystal, if the electron-electron repulsion is sufficiently large compared to their kinetic energy.  For a cold quantum mechanical electron gas, where the kinetic energy scale is set by the Fermi energy of the electrons, the essential dimensionless parameter here is \(r_{s}\), the Wigner-Seitz radius: the mean inter-electron spacing measured in units of the effective Bohr radius.  Serious calculations have shown that you should get a Wigner crystal for electrons in 2D if \(r_{s} \gtrsim 31\).  (You can also have a "classical" Wigner crystal, when the electron kinetic energy is set by the temperature rather than by quantum degeneracy; an example of this situation is electrons floating on the surface of liquid helium.)
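As a back-of-the-envelope illustration, here is a sketch of how \(r_{s}\) depends on the areal density of a 2D electron gas, defining \(n = 1/(\pi (r_{s} a_{B}^{*})^{2})\) with \(a_{B}^{*}\) the effective Bohr radius.  The dielectric constant and effective mass below are roughly GaAs-like assumptions, and the function name is made up for this example.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m
m_e = 9.1093837015e-31   # kg

def rs_2d(n_2d, eps_r=12.9, m_eff=0.067):
    """Wigner-Seitz parameter r_s for a 2D electron gas.

    n_2d: areal density in m^-2; eps_r and m_eff (in units of m_e)
    are assumed, roughly GaAs-like, material parameters.
    """
    a_bohr_eff = 4 * np.pi * eps0 * eps_r * hbar**2 / (m_eff * m_e * e**2)
    return 1.0 / (a_bohr_eff * np.sqrt(np.pi * n_2d))

# A dilute 2D gas at n = 1e13 m^-2 (= 1e9 cm^-2) gives r_s of order 20;
# crystallization would require going more dilute still.
print(rs_2d(1e13))
```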

Historically, observing Wigner crystals in experiments has been very challenging.  In ultraclean 2D electron gases in GaAs/AlGaAs structures, signatures include "pinning" of the insulating 2D electronic crystal on residual disorder, leading to nonlinear conduction at the onset of "sliding"; features in microwave absorption corresponding to melting of the crystal; changes in capacitance/screening; etc.  Large magnetic fields can be helpful in bringing about Wigner crystallization (tending to confine electronic wavefunctions, and quenching the kinetic energy through the formation of Landau levels).  

In recent years, 2D materials and advances in scanning tunneling microscopy (STM) have led to a lot of progress in imaging Wigner crystals.  One representative paper is this, in which the moirĂ© potential in a bilayer system helps by flattening the bands and therefore reducing the kinetic energy.  Another example is this paper from April, looking at Wigner crystals at high magnetic field in Bernal-stacked bilayer graphene.   One aspect of these experiments that I find amazing is that the STM doesn't melt the crystals, since it's either injecting or removing charge throughout the imaging process.  The crystals are somehow stable enough that any removed electron gets rapidly replaced without screwing up the spatial order.  Very cool.


Saturday, June 22, 2024

What is turbulence? (And why are helicopters never quiet?)

Fluid mechanics is very often left out of the undergraduate physics curriculum.  This is a shame, as it's very interesting and directly relevant to many broad topics (atmospheric science, climate, plasma physics, parts of astrophysics).  Fluid mechanics is a great example of how it is possible to have comparatively simple underlying equations and absurdly complex solutions, and that's probably part of the issue.  The space of solutions can be mapped out using dimensionless ratios, and two of the most important are the Mach number (\(\mathrm{Ma} \equiv u/c_{s}\), where \(u\) is the speed of some flow or object, and \(c_{s}\) is the speed of sound) and the Reynolds number (\(\mathrm{Re} \equiv \rho u d/\mu\), where \(\rho\) is the fluid's mass density, \(d\) is some length scale, and \(\mu\) is the viscosity of the fluid). 

From Laurence Kedward, wikimedia commons

There is a nice physical interpretation of the Reynolds number.  It can be rewritten as \(\mathrm{Re} = (\rho u^{2})/(\mu u/d)\).  The numerator is the "dynamic pressure" of a fluid, the force per unit area that would be transferred to some object if a fluid of density \(\rho\) moving at speed \(u\) ran into the object and was brought to a halt.  This is in a sense the consequence of the inertia of the moving fluid, so this is sometimes called an inertial force.  The denominator, the viscosity multiplied by a velocity gradient, is the viscous shear stress (force per unit area) caused by the frictional drag of the fluid.  So, the Reynolds number is a ratio of inertial forces to viscous forces.  

When \(\mathrm{Re}\ll 1\), viscous forces dominate.  That means that viscous friction between adjacent layers of fluid tend to smooth out velocity gradients, and the velocity field \(\mathbf{u}(\mathbf{r},t) \) tends to be simple and often analytically solvable.  This regime is called laminar flow.  Since \(d\) is just some characteristic size scale, for reasonable values of density and viscosity for, say, water, microfluidic devices tend to live in the laminar regime.  

When \(\mathrm{Re}\gg 1\), frictional effects are comparatively unimportant, and the fluid "pushes" its way along.  The result is a situation where the velocity field is unstable to small perturbations, and there is a transition to turbulent flow.  The local velocity field has big, chaotic variations as a function of space and time.  While the microscopic details of \(\mathbf{u}(\mathbf{r},t)\) are often not predictable, on a statistical level we can get pretty far since mass conservation and momentum conservation can be applied to a region of space (the control volume or Eulerian approach).
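To put numbers on these two regimes using the definition above, here is a quick sketch; the parameter values are representative guesses rather than measurements.

```python
def reynolds(rho, u, d, mu):
    """Re = rho*u*d/mu, the ratio of inertial to viscous force scales."""
    return rho * u * d / mu

# Water in a microfluidic channel: ~1 mm/s flow in a ~10 micron channel.
print(reynolds(rho=1000.0, u=1e-3, d=1e-5, mu=1e-3))   # ~0.01: laminar

# Air over a helicopter rotor blade: ~200 m/s over a ~0.5 m chord.
print(reynolds(rho=1.2, u=200.0, d=0.5, mu=1.8e-5))    # ~7e6: turbulent
```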

Turbulent flow involves a cascade of energy through eddies at progressively smaller length scales, down eventually to the mean free path of the fluid molecules.  This right here is why helicopters are never quiet.  Even if you started with a completely uniform downward flow of air below the rotor (enough of a momentum flux to support the weight of the helicopter), the air would quickly transition to turbulence, and there would be pressure fluctuations over a huge range of timescales that would translate into acoustic noise.  You might not be able to hear the turbine engine directly from a thousand feet away, but you can hear the resulting sound from the turbulent airflow.  

If you're interested in fluid mechanics, this site is fantastic, and their links page has some great stuff.

Friday, June 14, 2024

Artificial intelligence, extrapolation, and physical constraints

Disclaimer and disclosure:  The "arrogant physicist declaims about some topic far outside their domain expertise (like climate change or epidemiology or economics or geopolitics or....) like everyone actually in the field is clueless" trope is very overplayed at this point, and I've generally tried to avoid doing this.  Still, I read something related to AI earlier this week, and I wanted to write about it.  So, fair warning: I am not an expert about AI, machine learning, or computer science, but I wanted to pass this along and share some thoughts.  Feel even more free than usual to skip this and/or dismiss my views.

This is the series of essays, and here is a link to the whole thing in one pdf file.  The author works for OpenAI.  I learned about this from Scott Aaronson's blog (this post), which is always informative.

In a nutshell, the author basically says that he is one of a quite small group of people who really know the status of AI development; that we are within a couple of years of the development of artificial general intelligence; that this will lead essentially to an AI singularity as AGI writes ever-smarter versions of AGI; that the world at large is sleepwalking toward this and its inherent risks; and that it's essential that western democracies have the lead here, because it would be an unmitigated disaster if authoritarians in general and the Chinese government in particular should take the lead - if one believes in extrapolating exponential progressions, then losing the initiative rapidly translates into being hopelessly behind forever.

I am greatly skeptical of many aspects of this (in part because of the dangers of extrapolating exponentials), but it is certainly thought-provoking.  

I doubt that we are two years away from AGI.  Indeed, I wonder if our current approaches are somewhat analogous to Ptolemaic epicycles.  It is possible in principle to construct extraordinarily complex epicyclic systems that can reproduce predictions of the motions of the planets to high precision, but actual Newtonian orbital mechanics is radically more compact, efficient, and conceptually unified.  Current implementations of AI systems use enormous numbers of circuit elements that consume tens to hundreds of MW of electricity.  In contrast, your brain hosts a human-level intelligence, consumes about 20 W, and masses about 1.4 kg.  I just wonder if our current architectural approach is not the optimal one toward AGI.  (Of course, a lot of people are researching neuromorphic computing, so maybe that resolves itself.)

The author also seems to assume that whatever physical resources are needed for rapid exponential progress in AI will become available.  Huge numbers of GPUs will be made.  Electrical generating capacity and all associated resources will be there.  That's not obvious to me at all.  You can't just declare that vastly more generating capacity will be available in three years - siting and constructing GW-scale power plants takes years on its own.  TSMC is about as highly motivated as possible to build their new facilities in Arizona, and the first one has taken three years so far, with the second one likely delayed until 2028.  Actual construction and manufacturing at scale cannot be trivially waved away.

I do think that AI research has the potential to be enormously disruptive.  It also seems that if a big corporation or nation-state thought that they could gain a commanding advantage by deploying something even if it's half-baked and the long-term consequences are unknown, they will 100% do it.  I'd be shocked if the large financial companies aren't already doing this in some form.  I also agree that broadly speaking as a species we are unprepared for the consequences of this research, good and bad.  Hopefully we will stumble forward in a way where we don't do insanely stupid things (like putting the WOPR in charge of the missiles without humans in the loop).   

Ok, enough of my uninformed digression.  Back to physics soon.

Update:  this is a fun, contrasting view by someone who definitely disagrees with Aschenbrenner about the imminence of AGI.

Sunday, June 02, 2024

Materials families: Halide perovskites

Looking back, I realized that I haven't written much about halide perovskites, which is quite an oversight given how much research impact they're having.  I'm not an expert, and there are multiple extensive review articles out there (e.g. here, here, here, here, here), so this will only be a very broad strokes intro, trying to give some context to why these systems are important, remarkable, and may have plenty of additional tricks to play.

From ACS Energy Lett. 5, 2, 604–610 (2020).

Perovskites are a class of crystals based on a structural motif (an example is ABX3, originally identified in the mineral CaTiO3, though there are others) involving octahedrally coordinated metal atoms.  As shown in the figure, each B atom is at the center of an octahedron defined by six X atoms.  There are many flavors of purely inorganic perovskites, including the copper oxide superconductors and various piezoelectric and ferroelectric oxides.  

The big excitement in recent years, though, involves halide perovskites, in which the X atom = Cl, Br, or I, and the B atom is most often Pb or Sn.  These materials are quite ionic, in the sense that the B atom is in the 2+ oxidation state, the X atom is in the 1- oxidation state, and whatever is in the A site is in the 1+ oxidation state (whether it's Cs+ or a molecular ion like methylammonium (MA = [CH3NH3]+) or formamidinium (FA = [HC(NH2)2]+)).  

From Chem. Rev. 123, 13, 8154–8231 (2023).

There is an enormous zoo of materials based on these building blocks, made even more rich by the capability of organic chemists to toss in various small organic, covalent ligands to alter spacings between the components (and hence electronic overlap and bandwidths), tilt or rotate the octahedra, add in chirality, etc.  Forms that are 3D, effectively 2D (layers of corner-sharing octahedra), 1D, and "0D" (with isolated octahedra) all exist.  Remarkably:

  • These materials can be processed in solution form, and it's possible to cast highly crystalline films.
  • Despite the highly ionic character of much of the bonding, many of these materials are semiconductors, with bandgaps in the visible.
  • Despite the differences in what chemists and semiconductor physicists usually mean by "pure", these materials can be sufficiently clean and free of the wrong kinds of defects that it is possible to make solar cells with efficiencies greater than 26% (!) (and very bright light emitting diodes).  
These features make the halide perovskites extremely attractive for possible applications, especially in photovoltaics and potentially light sources (even quantum emitters).  They are seemingly much more forgiving than most organic semiconductors, in terms of high carrier mobility, tolerance of disorder, and a high dielectric polarizability (and hence a lower exciton binding energy and greater ease of charge extraction).  The halide perovskites do face some serious challenges (chemical stability under UV illumination and air/moisture exposure; the unpleasantness of Pb), but their promise is enormous.

Sometimes nature seems to provide materials with particularly convenient properties.  Examples include water and the fact that ordinary ice is less dense than the liquid form; silicon and its outstanding oxide; gallium arsenide and the fact that it can be grown with great purity and stoichiometry even in an extremely As rich environment; I'm sure commenters can provide many more.  The halide perovskites seem to be another addition to this catalog, and as material properties continue to improve, condensed matter physicists are going to be looking for interesting things to do in these systems. 

Wednesday, May 29, 2024

Interesting reading - resonators, quantum geometry w/ phonons, and fractional quantum anomalous Hall

 Real life continues to be busy, but I wanted to point out three recent articles that I found interesting:

  • Mechanical resonators are a topic with a long history, going back to the first bells and the tuning fork.  I've written about micromachined resonators before, and the quest to try to get very high quality resonators.  This recent publication is very impressive.  The authors have succeeded in fabricating suspended Si3N4 resonators that are 70 nm thick but 3 cm (!!) long.  In terms of aspect ratio, that'd be like a diving board 3 cm thick and 12.8 km long.  By varying the shape of the suspended "string" along its length, they create phononic band gaps, so that some vibrations are blocked from propagating along the resonator, leading to reduced losses.  They are able to make such resonators that work at acoustic frequencies at room temperature (in vacuum) and have quality factors as high as \(6.5 \times 10^{9}\), which is amazing.  
  • Speaking of vibrations, this paper in Nature Physics is a thought-provoking piece of work.  Electrons in solids are coupled to lattice vibrations (phonons), and that's not particularly surprising.  The electronic band structure depends on how the atoms are stacked in space, and a vibration like a phonon is a particular perturbation of that atomic arrangement.  The new insight here is to look at what is being called quantum geometry and how that affects the electron-phonon coupling.  As I wrote here, electrons in crystals can be described by Bloch waves which include a function \(u_{\mathbf{k}}(\mathbf{r})\) that has the real-space periodicity of the crystal lattice.  How that function varies over \(\mathbf{k}\)-space is called quantum geometry and has all kinds of consequences (e.g., here and here).  It turns out that this piece of the band structure can have a big and sometimes dominant influence on the coupling between mobile electrons and phonons.
  • Speaking of quantum geometry and all that, here is a nice article in Quanta about the observation of the fractional quantum anomalous Hall effect in different 2D material systems.  In the "ordinary" fractional quantum Hall effect, topology and interactions combine at low temperatures and (usually) high magnetic fields in clean 2D materials to give unusual electronic states with, e.g., fractionally charged low energy excitations.  Recent exciting advances have found related fractional Chern insulator states in various 2D materials at zero magnetic field.  The article does a nice job capturing the excitement of these recent works.