
Sunday, February 22, 2026

AI/ML, multiscale modeling, and emergence

I've been attending a lot of talks lately about AI/machine learning and multiscale modeling for materials design and control.  This is a vast, rapidly evolving research area, so here is a little background and a few disorganized thoughts.  

For a recent review article about AI and materials discovery, see here.  There is a ton of work being done pursuing the grand goal of inverse design - name some desired properties, and have AI/ML formulate a material that fits those requirements and is actually synthesizable.  Major companies with publicly known efforts include Google DeepMind (with GNoME), Microsoft, Meta (working on catalysts), Toyota Research Institute, and IBM, and I'm certain that I'm missing major players.  There are also a slew of startup companies on this topic (e.g. Periodic).  

In addition to materials design and discovery, there is enormous effort being put into using AI/ML to bridge across length and timescales.  Quantum chemistry methods can look at microscopic physics and chemistry, for example, but extending this to macroscopic system sizes with realistic disorder is often computationally intractable.  There are approaches like time-dependent DFT and DMFT to try to capture dynamics, but following dynamics even as long as picoseconds is hard.  Using microscopic methods and ML to try to compute and then parametrize force fields between atoms (for example), one can look at larger systems and longer timescales using molecular dynamics for atomic motions.  However, getting from there to, e.g., the Navier-Stokes equations or understanding phase boundaries, is very difficult.  (At the same time, there are approaches that use AI/ML to learn about the solutions of partial differential equations, so that one can, for example, compute good fluid flows quickly without actually having to solve the N-S equations - see here.) 

We want to keep coarse-graining (looking at larger scales), while maintaining the microscopic physics constraints so that the results are accurate.  There seems to be a lot of hope that either by design or by the action of the AI/ML tools themselves we can come up with descriptors that are good at capturing the essential physics as we move to larger and larger scales.  To use a fluids example, somehow we are hoping that these tools will naturally capture that at scales much larger than one water molecule, it makes sense to track density, temperature, velocity fields, surface tension, liquid-vapor interfaces, etc.  

From the always fun xkcd
One rough description of emergence is the idea that at larger scales and numbers of constituents, new properties appear for the collective system that are extremely difficult to predict from the microscopic rules governing the constituents.  For example, starting from the Schroedinger equation and basic quantum mechanics, it's very hard to determine that snowflakes tend to have 6-fold symmetry and ice will float in water, even though the latter are of course consequences of the former.  A nice article about emergence in physics is here.  

It feels to me like in some AI/ML endeavors, we are hoping that these tools will figure out how emergence works better than humans have been able to do.  This is certainly a worthy challenge, and it may well succeed in a lot of systems, but then we may have the added meta-challenge of trying to understand how our tools did that.  Physics-informed and structured ML will hopefully take us well beyond the situation in the xkcd comic shown here.  



Friday, February 13, 2026

Updates: The US government and STEM research

Now that we're 6 weeks into the new year, I think it's worth doing an incomplete roundup of where we are on US federal support of STEM research.  Feel free to skip this post if you don't want to read about this.  
  • Appropriators in Congress largely went against the FY26 presidential budget request, and various spending bills by and large slightly-less-than level-funded most US science agencies. A physics-oriented take is here. The devil is in the details.  The AAAS federal R&D dashboard lets you explore this at a finer level.  Nature has an interactive widget that visualizes what has been cut and what remains.
  • Bear in mind, that was just year 1 of the present administration.  All of the effort, all of the work pushing back against the absolutely draconian, agency-destroying cuts that were proffered?  That likely will have to be done again this year.  And in subsequent years, if the administration keeps putting enormously slashed budgets in its budget requests.
  • There is an issue of Science with the whole news section about how the past year has changed the science funding and pipeline in the US.
  • In NSF news, the rate of awards remains very low, though there is almost certainly a major delay because of the lateness of the budget, coping with reduced staffing levels, and restructuring now that Divisions no longer exist.  How greater emphasis on specific strategic priorities (beyond what is in the program calls) will affect operations remains unclear, at least to me.
  • Also, some NSF graduate research fellowship applications, especially in the life sciences, seem to be getting kicked back without review - see here (sorry about the paywall).  This seems to be an issue affecting broad research areas, with no explanation given to applicants (that lack of information flow is perhaps unsurprising).  
  • I'm not well-immersed in the world of NIH and the FDA, but I know things are bad.  Fifteen of the 27 NIH institutes have vacant or acting director positions.  The FDA declined to even accept the application for Moderna's mRNA flu vaccine, a move not popular even with the Wall Street Journal.  Moderna has also decided to shelve promising vaccines for a number of diseases because it no longer sees the US as a market for them, and it practically seems like someone wants to bring back polio.  (Note:   I will not have the comments become a back-and-forth about vaccines.)
  • The back and forth about indirect cost rates continues, along with the relevant court cases.  The recent appropriations have language to prevent sudden changes in rates.  The FAIR model has not yet been adopted.
  • Concerns still loom about impoundment.
  • There has been an exodus of technically trained PhDs from government service.
  • I could go on.  I know I've left out critical areas, and I haven't talked about DOE or NASA or DOD or EPA or NOAA explicitly.  
Honest people can have discussions about the right balance of federal vs state vs industrial vs philanthropic support for research.  There are no easy answers in the present time.  For those who think that robust public investment in science and engineering research is critical to societal good, economic competitiveness, and security, we need to keep pushing and not let fatigue or fatalism win the day.  


  

Sunday, February 08, 2026

Data centers in space make no sense to me

There seems to be a huge push lately in the tech world for the idea of placing data centers in space.  This is not just coming from Musk via the merging of SpaceX and xAI.  Google has some effort along these lines.  NVIDIA is thinking about it.  TED talks are being given by startup people in San Francisco on this topic, so you know we've reached some well-defined hype level.  Somehow the idea has enough traction that even the PRC is leaning in this direction.  The arguments seem to be that (1) there is abundant solar power in space; (2) environmental impact on the earth will be less, with no competition for local electricity, water, real estate; (3) space is "cold", so cooling these things should be do-able; (4) it's cool and sounds very sci-fi/high frontier.  

At present (or near-future) levels of technology, as far as I can tell this idea makes no sense.  I will talk about physics reasons here, though there are also pragmatic economic reasons why this seems crazy.  I've written before that I think some of the AI/data center evangelists are falling victim to magical thinking, because they come from the software world and don't in their heart of hearts appreciate that there are actual hardware constraints on things like chip manufacturing and energy production.  

Others have written about this - see here for example.  The biggest physics challenges with this idea (beyond lofting millions of kg of cargo into orbit):
  • While the cosmic microwave background is cold, cooling things in space is difficult, because vacuum is an excellent thermal insulator.  On the ground, you can use conduction and convection to get rid of waste heat.  In space, your only option (beyond throwing mass overboard, which is not readily replenishable) is radiative cooling.  The key physics here is the Stefan-Boltzmann law, which is a triumph of statistical physics (and one of my favorite derivations to discuss in class - you combine the Planck result for the energy density of a "gas" of photons in thermal equilibrium at some temperature \(T\) with a basic kinetic theory of gases result for the flux of particles out of a small hole).  It tells you that the best you can ever do is an ideal black body, for which the total power radiated away is proportional to the area of the radiator and \(T^{4}\), with fundamental constants making up the proportionality constant with zero adjustable parameters.  
A liquid droplet radiator, from this excellent site
Remember, data centers right now consume enormous amounts of power (and cooling water).  While you can use heat pumps to try to get the radiators up to well above the operating temperatures of the electronics, that increases mass and waste power, and realistically there is an upper limit on the radiator temperature below 1000 K.  An ideal black body radiator at 1000 K puts out about 57 kW per square meter, and you probably need to get rid of tens of megawatts, necessitating hundreds to thousands of square meters of radiator area.  There are clever ideas on how to try to do this.  For example, in the liquid droplet radiator, you could spray a bunch of hot droplets out into space, capitalizing on their large specific surface area.  Of course, you'd need to recapture the cooled droplets, and the hot liquid needs to have sufficiently low vapor pressure that you don't lose a lot of material.  Still, as far as I am aware, to date no one has actually deployed a large-scale (ten kW let alone MW level) droplet radiator in space.  

  • High end computational hardware is vulnerable to radiation damage.  There are no rad-hard GPUs.  Low earth orbit is a pretty serious radiation environment, with fluxes of energetic particles, including cosmic rays, quite a bit higher than on the ground.  While there are tests going on, and astronauts are going to bring smartphones on the next Artemis mission, it's rough.  Putting many thousands to millions of GPUs and huge quantities of memory in a harsh environment where they cannot be readily accessed or serviced seems unwise.  (There are also serious questions of vulnerability to attack.  Setting off a small nuclear warhead in LEO injects energetic electrons into the lower radiation belts and would be a huge mess.)
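To put numbers on the radiator argument above, here's a minimal sketch.  The 50 MW waste-heat figure is my own illustrative assumption, and I'm taking an ideal black body with emissivity 1:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power_per_area(T, emissivity=1.0):
    """Power per unit area radiated by a (gray) body at temperature T [K]."""
    return emissivity * SIGMA * T**4

def radiator_area(waste_power_W, T, emissivity=1.0):
    """Area needed to reject waste_power_W purely radiatively at temperature T."""
    return waste_power_W / radiated_power_per_area(T, emissivity)

# Ideal black body at 1000 K: about 57 kW per square meter, as in the text.
print(radiated_power_per_area(1000.0))   # ~5.67e4 W/m^2
# Rejecting an assumed 50 MW at 1000 K needs on the order of 900 m^2 of radiator.
print(radiator_area(50e6, 1000.0))       # ~880 m^2
```

Lower, more realistic radiator temperatures make this much worse, since the required area scales like \(1/T^{4}\).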
I think we will be faaaaaaar better off in the long run if we take a fraction of the money that people want to invest in space-based data centers, and instead plow those resources into developing energy-efficient computing.  Musk has popularized the engineering sentiment "The best part is no part".  The best way to solve the problem of supplying and radiating away many GW of power for data centers is to make data centers that don't consume many GW of power.  

Sunday, February 01, 2026

What is the Aharonov-Bohm effect?

After seeing this latest extremely good video from Veritasium, and looking back through my posts, I realized that while I've referenced it indirectly, I've never explicitly talked about the Aharonov-Bohm effect.  The video is excellent, and that wikipedia page is pretty good, but maybe some people will find another angle on this to be helpful.  

Still from this video.

The ultrabrief version:  The quantum interference of charged particles like electrons can be controllably altered by tuning a magnetic field in a region that the particles never pass through.  This is weird and spooky because it's an entirely quantum mechanical effect - classical physics, where motion is governed by local forces, says that zero field = unaffected trajectories.  

In quantum mechanics, we describe the spatial distribution of particles like electrons with a wavefunction, a complex-valued quantity that one can write as an amplitude and a phase \(\varphi\), where both depend on position \(\mathbf{r}\).  The phase is important because waves can interfere.  Crudely speaking, when the crests of one wave (say \(\varphi = 0\)) line up with the troughs of another wave (\(\varphi = \pi\)) at some location, the waves interfere destructively, so the total wave at that location is zero if the amplitudes of each contribution are identical.   As quantum particles propagate through space, their phase "winds" with distance \(\mathbf{r}\) like \(\mathbf{k}\cdot \mathbf{r}\), where \(\hbar \mathbf{k} = \mathbf{p}\) is the momentum.  Higher momentum = faster winding of phase = shorter wavelength.  This propagation, phase winding, and interference is the physics behind the famous two-slit experiment.  (In his great little popular book - read it if you haven't yet - Feynman described phase as a clockface attached to each particle.)  One important note:  The actual phase itself is arbitrary; it's phase differences that matter in interference experiments.  If you added an arbitrary amount \(\varphi_{0}\) to every phase, no physically measurable observables would change. 
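The interference and global-phase statements above can be made concrete with a toy calculation (the amplitudes and phases here are arbitrary illustrative choices):

```python
import cmath
from math import pi

def two_path_intensity(phi1, phi2, a1=1.0, a2=1.0):
    """Intensity |psi|^2 from adding two complex amplitudes with phases phi1, phi2."""
    psi = a1 * cmath.exp(1j * phi1) + a2 * cmath.exp(1j * phi2)
    return abs(psi) ** 2

print(two_path_intensity(0.0, 0.0))       # crests aligned: 4.0
print(two_path_intensity(0.0, pi))        # crest meets trough: ~0
# Shifting BOTH phases by the same arbitrary phi0 changes nothing observable:
print(two_path_intensity(0.7, 0.7 + pi))  # still ~0
```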

Things get trickier if the particles that move around are charged.  It was realized 150+ years ago that formal conservation of momentum gets tricky if we consider electric and magnetic fields.  The canonical momentum that shows up in the Lagrange and Hamilton equations is \(\mathbf{p}_{c} = \mathbf{p}_{kin} + q \mathbf{A}\), where \(\mathbf{p}_{kin}\) is the kinetic momentum (the part that actually has to do with the classical velocity and which shows up in the kinetic energy), \(q\) is the charge of the particle, and \(\mathbf{A}(\mathbf{r})\) is the vector potential.  

Background digression: The vector potential is very often a slippery concept for students.  We get used to the idea of a scalar potential \(\phi(\mathbf{r})\), such that the electrostatic potential energy is \(q\phi\) and the electric field is given by \(\mathbf{E} = -\nabla \phi\) if there are no magnetic fields.  Adding an arbitrary uniform offset to the scalar potential, \(\phi \rightarrow \phi + \phi_{0}\), doesn't change the electric field (and therefore forces on charged particles), because the zero that we define for energy is arbitrary (general relativity aside).  For the vector potential, \(\mathbf{B} = \nabla \times \mathbf{A}\).   This means we can add an arbitrary gradient of a scalar function to the vector potential, \(\mathbf{A} \rightarrow \mathbf{A}+ \nabla f(\mathbf{r})\), and the magnetic field won't change.  Maxwell's equations mean that \(\mathbf{E} = -\nabla \phi - \partial \mathbf{A}/\partial t\).  "Gauge freedom" means that there is more than one way to choose internally consistent definitions of \(\phi\) and \(\mathbf{A}\).

TL/DR main points: (1)  The vector potential can be nonzero in places where \(\mathbf{B}\) (and hence the classical Lorentz force) is zero.  (2) Because the canonical momentum becomes the operator \(-i \hbar \nabla\) in quantum mechanics and the kinetic momentum is what shows up in the kinetic energy, charged propagating particles pick up an extra phase winding given by \(\delta \varphi = (q/\hbar)\int \mathbf{A}\cdot d\mathbf{r}\) along a path.  

This is the source of the creepiness of the Aharonov-Bohm effect.  Think of two paths (see the still taken from the Veritasium video); threading magnetic flux through just the little region between them using a solenoid will tune the intensity detected on the screen on the far right.  That field region can be made arbitrarily small and positioned anywhere inside the diamond formed by the paths, and the effect still works.  Something not mentioned in the video:  The shifting of the interference pattern is periodic in the flux through the solenoid, with a period of \(h/e\), where \(h\) is Planck's constant and \(e\) is the electronic charge.  
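A quick numerical sketch of the key facts: outside an idealized thin solenoid at the origin, the standard gauge choice \(\mathbf{A} = (\Phi/2\pi r^{2})(-y, x)\) has zero curl (so \(\mathbf{B}=0\) out there), yet its line integral around any loop enclosing the solenoid returns the enclosed flux \(\Phi\), which sets the extra phase \((q/\hbar)\oint \mathbf{A}\cdot d\mathbf{r}\).  The flux value below is an arbitrary illustrative number:

```python
from math import pi, cos, sin

PHI = 3.5e-15  # flux through the solenoid, Wb (arbitrary illustrative value)

def A(x, y):
    """Vector potential outside an idealized thin solenoid at the origin."""
    r2 = x * x + y * y
    return (-PHI * y / (2 * pi * r2), PHI * x / (2 * pi * r2))

def curl_z(x, y, h=1e-6):
    """Finite-difference z-component of curl A at (x, y)."""
    dAy_dx = (A(x + h, y)[1] - A(x - h, y)[1]) / (2 * h)
    dAx_dy = (A(x, y + h)[0] - A(x, y - h)[0]) / (2 * h)
    return dAy_dx - dAx_dy

def loop_integral(R, n=100000):
    """Numerically integrate A . dl around a circle of radius R."""
    total = 0.0
    for i in range(n):
        th = 2 * pi * i / n
        x, y = R * cos(th), R * sin(th)
        ax, ay = A(x, y)
        dx, dy = -R * sin(th) * (2 * pi / n), R * cos(th) * (2 * pi / n)
        total += ax * dx + ay * dy
    return total

print(curl_z(1.0, 0.5))    # ~0: no magnetic field out here
print(loop_integral(2.0))  # ~PHI: the enclosed flux, independent of R
hbar = 1.054571817e-34
e = 1.602176634e-19
print((e / hbar) * loop_integral(2.0))  # extra electron phase winding, radians
```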

Why should you care about this?

  • As the video discusses, the A-B effect shows that the potentials are physically important quantities that affect motion, at least as much as the corresponding fields, and there are quantum consequences to this that are just absent in the classical world.
  • The A-B effect (though not with the super skinny field confinement) has been seen experimentally in many mesoscopic physics experiments (e.g., here, or here) and can be used as a means of quantifying coherence at these scales (e.g., here and here).
  • When dealing with emergent quasiparticles that might have unusual fractional charges (\(e^*\)), then A-B interferometers can have flux periodicities that are given by \(h/e^*\). (This can be subtle and tricky.)
  • Interferometry to detect potential-based phase shifts is well established.  Here's the paper mentioned in the video about a gravitational analog of the A-B effect.  (Quibblers can argue that there is no field-free region in this case, so it's not strictly speaking the A-B analog.)
Basically, the A-B effect has gone from an initially quite controversial prediction to an established piece of physics that can be used as a tool.  If you want to learn Aharonov's take on all this, please read this interesting oral history.   

Update: The always informative Steve Simon has pointed out to me a history of this that I had not known, that this effect had already been discovered a decade earlier by Ehrenberg and Siday.  Please see this arXiv paper about this.  Here is Ehrenberg and Siday's paper.  Aharonov and Bohm were unaware of it and arrived at their conclusions independently.  One lesson to take away:  Picking a revealing article title can really help your impact.

Sunday, January 25, 2026

What is superconductivity?

A friend pointed out that, while I've written many posts that have to do with superconductivity, I've never really done a concept post about it.  Here's a try, as I attempt to distract myself from so many things happening these days.

The superconducting state is a truly remarkable phase of matter that is hosted in many metals (though ironically not readily in the pure elements (Au, Ag, Cu) that are the best ordinary conductors of electricity - see here for some references).  First, some definitional/phenomenological points:

  • The superconducting state is a distinct thermodynamic phase.  In the language of phase transitions developed by Ginzburg and Landau back in the 1950s, the superconducting state has an order parameter that is nonzero, compared to the non-superconducting metal state.   When you cool down a metal and it becomes a superconductor, this really is analogous (in some ways) to when you cool down liquid water and it becomes ice, or (a better comparison) when you cool down very hot solid iron and it becomes a magnet below 770 °C.
  • In the superconducting state, at DC, current can flow with zero electrical resistance.  Experimentally, this can be checked by setting up a superconducting current loop and monitoring the current via the magnetic field it produces.  If you find that the current will decay over somewhere between \(10^5\) and \(\infty\) years, that's pretty convincing that the resistance is darn close to zero. 
  • This is not just "perfect" conduction.  If you placed a conductor in a magnetic field, turned on perfect conduction, and then tried to change the magnetic field, currents would develop that would preserve the amount of magnetic flux through the perfect conductor.  In contrast, a key signature of superconductivity is the Meissner-Ochsenfeld effect:  if superconductivity is turned on in the presence of a (sufficiently small) magnetic field, currents will develop spontaneously at the surface of the material to exclude all magnetic flux from the bulk of the superconductor.  (That is, the magnetic field from the currents will be oppositely directed to the external field and of just the right size and distribution to give \(\mathbf{B}=0\) in the bulk of the material.)  Observation of the bulk Meissner effect is among the strongest evidence for true superconductivity, much more robust than a measurement that seems to indicate zero voltage drop.  Indeed, as a friend of mine pointed out to me, a one-phrase description of a superconductor is "a perfect diamagnet".  
  • There are two main types of superconductors, uncreatively termed "Type I" and "Type II".  In Type I superconductors, an external \(\mathbf{H} = \mathbf{B}/\mu_{0}\) fails to penetrate the bulk of the material until it reaches a critical field \(H_{c}\), at which point the superconducting state is suppressed completely.  In a Type II superconductor, above some lower critical field \(H_{c,1}\) magnetic flux begins to penetrate the material in the form of vortices, each of which has a non-superconducting ("normal") core.  Above an upper critical field \(H_{c,2}\), superconductivity is suppressed. 
  • Interestingly, a lot of this can be "explained" by the London Equations, which were introduced in the 1930s despite a complete lack of a viable microscopic theory of superconductivity.
  • Magnetic flux through a conventional superconducting ring (or through a vortex core) is quantized precisely in units of \(h/2e\), where \(h\) is Planck's constant and \(e\) is the electronic charge.  
  • (It's worth noting that in magnetic fields and with AC currents, there are still electrical losses in superconductors, due in part to the motion of vortices.)
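As a small numerical aside on the flux quantum and the persistent-current measurement mentioned above (the loop inductance and observation time below are my own illustrative assumptions, not from actual measurements):

```python
import math

h = 6.62607015e-34       # Planck constant, J s (exact in SI since 2019)
e = 1.602176634e-19      # elementary charge, C (exact in SI since 2019)

phi_0 = h / (2 * e)      # superconducting flux quantum, Wb
print(phi_0)             # ~2.0678e-15 Wb - note the 2e from pairing

# Rough persistent-current bound: a loop with inductance L and residual
# resistance R decays as exp(-R t / L).  Seeing no decay over ~1e5 years
# in an assumed 1 uH loop bounds R below roughly L / t.
L = 1e-6                          # H, assumed loop inductance
t = 1e5 * 365.25 * 24 * 3600      # s, ~1e5 years
print(L / t)                      # R below ~3e-19 ohms
```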
Physically, what is the superconducting state?  Why does it happen and why does it have the weird properties described above as well as others?  There are literally entire textbooks and semester-long courses on this, so what follows is very brief and non-authoritative.  
  • In an ordinary metal at low temperatures, neglecting e-e interactions and other complications, the electrons fill up states (because of the Pauli Principle) starting from the lowest energy up to some highest value, the Fermi energy.  (See here for some mention of this.)   Empty electronic states are available at essentially no energy cost - excitations of electrons from filled states to empty states are "gapless".  
  • Electrical conduction takes place through the flow of these electronic quasiparticles.   (For more technical readers:  We can think of these quasiparticles like little wavepackets, and as each one propagates around the wavepacket accumulates a certain amount of phase.  The phases of different quasiparticles are arbitrary, but the change in the phase going around some trajectory is well defined.)
  • In a superconductor, there is some effective attractive interaction between electrons that we have thus far neglected.  In conventional superconductors, this involves lattice vibrations (as in this wikipedia description), though other attractive interactions are possible.  At sufficiently low temperatures, the ordinary metal state is unstable, and the system will spontaneously form pairs of electrons (or holes).  Those pairs then condense into a single coherent state described by an amplitude \(|\Psi|\) and a phase, \(\phi\), shared by all the pairs.  The conventional theory of this was formulated by Bardeen, Cooper, and Schrieffer in 1957.  A couple of nice lecture note presentations of this are here (courtesy Yuval Oreg) and here (courtesy Dan Arovas), if you want the technical details.  This leads to an energy gap that characterizes how much it costs to create individual quasiparticles.  Conduction in a superconductor takes place through the flow of pairs.  (A clue to this is the appearance of the \(2e\) in the flux quantization.)
  • This taking on of a global phase for the pairs of electrons is a spontaneous breaking of gauge symmetry - this is discussed pedagogically for physics students here.  Understanding this led to figuring out the Anderson-Higgs mechanism, btw. 
  • The result is a state with a kind of rigidity; precisely how this leads to the phenomenology of superconductivity is not immediately obvious, to me anyway.  If someone has a link to a great description of this, please put it in the comments.  (Interestingly google gemini is not too bad at discussing this.)
  • The existence of this global phase is hugely important, because it's the basis for the Josephson effect(s), which in turn has led to the basis of exquisite magnetic field sensing, all the superconducting approaches to quantum information, and the definition of the volt, etc.
  • The paired charge carriers are described by a pairing symmetry of their wave functions in real space.  In conventional BCS superconductors, each pair has no orbital angular momentum ("\(s\)-wave"), and the spins are in a singlet state.  In other superconductors, pairs can have \(l = 1\) orbital angular momentum ("\(p\)-wave", with spins in the triplet configuration), \(l = 2\) orbital angular momentum ("\(d\)-wave", with spins in a singlet again), etc.  The pairing state determines whether the energy gap is directionally uniform (\(s\)-wave) or whether there are directions ("nodes") along which the gap goes to zero.  
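A tiny sketch of the standard textbook gap functions from the last bullet, showing the directional uniformity of \(s\)-wave versus the nodes of \(d_{x^2-y^2}\)-wave (the overall gap scale \(\Delta_{0}\) is arbitrary here):

```python
from math import cos, pi

def gap_s(theta, delta0=1.0):
    """Isotropic s-wave gap: same magnitude in every direction."""
    return delta0

def gap_d(theta, delta0=1.0):
    """d_{x^2-y^2}-wave gap: sign changes, with nodes at theta = pi/4 + n*pi/2."""
    return delta0 * cos(2 * theta)

print(gap_s(pi / 4))               # 1.0 : fully gapped in every direction
print(abs(gap_d(pi / 4)) < 1e-12)  # True : node along the diagonal
print(gap_d(0.0))                  # 1.0 : full gap along the axis
```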
I have necessarily left out a ton here.  Superconductivity continues to be both technologically critical and scientifically fascinating.  One major challenge in understanding the microscopic mechanisms behind particular superconductors is that the superconducting state itself is in a sense generic - many of its properties (like phase rigidity) are emergent regardless of the underlying microscopic picture, which is amazing.

One other point, added after initial posting. In quantum computing approaches, a major challenge is how to build robust effective ("logical") qubits from individual physical qubits that are not perfect (meaning that they suffer from environmental decoherence among other issues).  The phase coherence of electronic quasiparticles in ordinary metals is generally quite fragile; inelastic interactions with each other, with phonons, with impurity spins, etc. can all lead to decoherence.  However, starting from those ingredients, superconductivity shows that it is possible to construct, spontaneously, a collective state with very long-lived coherence.  I'm certain I'm not the first to wonder about whether there are lessons to be drawn here in terms of the feasibility of and approaches to quantum error correction.

Sunday, January 11, 2026

What is the Kondo effect?

The Kondo effect is a neat piece of physics, an archetype of a problem involving strong electronic correlations and entanglement, with a long and interesting history and connections to bulk materials, nanostructures, and important open problems.  

First, some stage setting.  In the late 19th century, with the development of statistical physics and the kinetic theory of gases, and the subsequent discovery of the electron by J. J. Thomson, it was a natural idea to try modeling the electrons in solids as a gas, as done by Paul Drude in 1900.  Being classical, the Drude model misses a lot (If all solids contain electrons, why aren't all solids metals?  Why is the specific heat of metals orders of magnitude lower than what a classical electron gas would imply?), but it does introduce the idea of electrons as having an elastic mean free path, a typical distance traveled before scattering off something (an impurity? a defect?) into a random direction.  In the Drude picture, as \(T \rightarrow 0\), the only thing left to scatter charge carriers is disorder ("dirt"), and the resistivity of a conductor falls monotonically and approaches \(\rho_{0}\), the "residual resistivity", a constant set in part by the number of defects or impurities in the material.  In the semiclassical Sommerfeld model, and then later in the nearly free electron model, this idea survives.
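To make the mean free path idea concrete, here's a back-of-the-envelope Drude/Sommerfeld estimate for copper (the material parameters are rough textbook values, not precise data):

```python
# Back-of-envelope Drude numbers for copper (illustrative textbook values).
m_e = 9.109e-31      # electron mass, kg
e = 1.602e-19        # elementary charge, C
n = 8.5e28           # carrier density of Cu, m^-3
rho = 1.7e-8         # room-temperature resistivity of Cu, ohm m
v_F = 1.57e6         # Fermi velocity of Cu, m/s (a Sommerfeld, not classical, speed)

tau = m_e / (n * e**2 * rho)   # Drude: rho = m / (n e^2 tau)
ell = v_F * tau                # elastic mean free path

print(tau)   # ~2.5e-14 s between scattering events
print(ell)   # ~4e-8 m, i.e. tens of nanometers at room temperature
```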

Resistivity growing at low \(T\) for gold with iron impurities.
One small problem:  in the 1930s (once it was much easier to cool materials down to very low temperatures), it was noticed that in many experiments (here and here, for example) the electrical resistivity of metals did not seem to fall and then saturate at some \(\rho_{0}\).  Instead, as \(T \rightarrow 0\), \(\rho(T)\) would go through a minimum and then start increasing again, approximately like \(\delta \rho(T) \propto - \ln(T/T_{0})\), where \(T_{0}\) is some characteristic temperature scale.  This is weird and problematic, especially since the logarithm formally diverges as \(T \rightarrow 0\).   
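A toy model makes the resistivity minimum explicit: combine a residual term, a phonon contribution (\(\propto T^{5}\) at low \(T\)), and the Kondo \(-\ln T\) correction, then minimize.  All coefficients below are arbitrary illustrative numbers, not fits to data:

```python
import math

def rho(T, rho0=1.0, a=1e-4, b=0.02, T0=100.0):
    """Toy low-temperature resistivity: residual + phonon (T^5) + Kondo (-ln T).
    Coefficients are arbitrary illustrative numbers."""
    return rho0 + a * T**5 + b * math.log(T0 / T)

# Setting d(rho)/dT = 5 a T^4 - b / T = 0 gives the temperature of the minimum:
T_min = (0.02 / (5 * 1e-4)) ** 0.2
print(T_min)   # ~2.09 in these toy units
# Numerical check that rho really turns up on either side of T_min:
print(rho(T_min) < rho(0.8 * T_min) and rho(T_min) < rho(1.2 * T_min))  # True
```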

Over time, it became clear that this phenomenon was associated with magnetic impurities, atoms that have unpaired electrons typically in \(d\) orbitals, implying that somehow the spin of the electrons was playing an important role in the scattering process.  In 1964, Jun Kondo performed the definitive perturbative treatment of this problem, getting the \(\ln T\) divergence.  

[Side note: many students learning physics are at least initially deeply uncomfortable with the idea of approximations (that many problems can't be solved analytically and exactly, so we need to take limiting cases and make controlled approximations, like series expansions).  What if a series somehow doesn't converge?  This is that situation.]

The Kondo problem is a particular example of a "quantum impurity problem", and it is a particular limiting case of the Anderson impurity model.  Physically, what is going on here?  A conduction electron from the host metal could sit on the impurity atom, matching up with the unpaired impurity electron.  However (much as we can often get away with ignoring it) like charges repel, and it is energetically very expensive (modeled by some "on-site" repulsive energy \(U\)) to do that.  Parking that conduction electron long-term is not allowed, but a virtual process can take place, whereby a conduction electron with spin opposite to the localized moment can (in a sense) pop on there and back off, or swap places with the localized electron.  The Pauli principle enforces this opposed spin restriction, leading to entanglement between the local electron and the conduction electron as they form a singlet.  Moreover, this process generally involves conduction electrons at the Fermi surface of the metal, so it is a strongly interacting many-body problem.  As the temperature is reduced, this process becomes increasingly important, so that the impurity's scattering cross section of conduction electrons grows as \(T\) falls, causing the resistivity increase.  

Top: Cartoon of the Kondo scattering process. Bottom:
Ground state is a many-body singlet between the local
moment and the conduction electrons.

The eventual \(T = 0\) ground state of this system is a many-body singlet, with the localized spin entangled with a "Kondo cloud" of conduction electrons.  The roughly \(\ln T\) resistivity correction rolls over and saturates.   There ends up being a sharp peak (resonance) in the electronic density of states right at the Fermi energy.  Interestingly, this problem actually can be solved exactly and analytically (!), as was done by Natan Andrei in this paper in 1980 and reviewed here.  

This might seem to be the end of the story, but the Kondo problem has a long reach!  With the development of the scanning tunneling microscope, it became possible to see Kondo resonances associated with individual magnetic impurities (see here).  In semiconductor quantum dot devices, if the little dot has an odd number of electrons, then it can form a Kondo resonance that spans from the source electrode through the dot and into the drain electrode.  This leads to a peak in the conductance that grows and saturates as \(T \rightarrow 0\) because it involves forward scattering.  (See here and here).  The same can happen in single-molecule transistors (see here, here, here, and a review here).  Zero-bias peaks in the conductance from Kondo-ish physics can be a confounding effect when looking for other physics.

Of course, one can also have a material where there isn't a small sprinkling of magnetic impurities, but a regular lattice of spin-hosting atoms as well as conduction electrons.  This can lead to heavy fermion systems, or Kondo insulators, and more exotic situations.   

The depth of physics that can come out of such simple ingredients is one reason why the physics of materials is so interesting.  

Sunday, January 04, 2026

Updated: CM/nano primer - 2026 edition

This is a compilation of posts related to some basic concepts of the physics of materials and nanoscale physics.  I realized the other day that I hadn't updated this since 2019, and therefore a substantial audience may not have seen these.  Wikipedia's physics entries have improved greatly over the years, but hopefully these are a complement that's useful to students and maybe some science writers.  Please let me know if there are other topics that you think would be important to include.  

What is temperature?
What is chemical potential?
What is mass?
Fundamental units and condensed matter

What are quasiparticles?
Quasiparticles and what is "real"
What is effective mass?
What is a phonon?
What is a plasmon?
What are magnons?
What are skyrmions?
What are excitons?
What is quantum coherence?
What are universal conductance fluctuations?
What is a quantum point contact?  What is quantized conductance?
What is tunneling?

What are steric interactions?
(effectively) What is the normal force?
What is a flat band and why might you care? (example: Kagome lattice)
What is the Kondo effect?

What is a crystal?