Sunday, February 08, 2026

Data centers in space make no sense to me

There seems to be a huge push lately in the tech world for the idea of placing data centers in space.  This is not just coming from Musk via the merging of SpaceX and xAI.  Google has some effort along these lines.  NVIDIA is thinking about it.  TED talks are being given by startup people in San Francisco on this topic, so you know we've reached some well-defined hype level.  Somehow the idea has enough traction that even the PRC is leaning in this direction.  The arguments seem to be that (1) there is abundant solar power in space; (2) environmental impact on the earth will be less, with no competition for local electricity, water, or real estate; (3) space is "cold", so cooling these things should be do-able; (4) it's cool and sounds very sci-fi/high frontier.

At present (or near-future) levels of technology, as far as I can tell this idea makes no sense.  I will talk about physics reasons here, though there are also pragmatic economic reasons why this seems crazy.  I've written before that I think some of the AI/data center evangelists are falling victim to magical thinking, because they come from the software world and don't in their heart of hearts appreciate that there are actual hardware constraints on things like chip manufacturing and energy production.  

Others have written about this - see here for example.  The biggest physics challenges with this idea (beyond lofting millions of kg of cargo into orbit):
  • While the cosmic microwave background is cold, cooling things in space is difficult, because vacuum is an excellent thermal insulator.  On the ground, you can use conduction and convection to get rid of waste heat.  In space, your only option (beyond throwing mass overboard, which is not readily replenishable) is radiative cooling.  The key physics here is the Stefan-Boltzmann law, which is a triumph of statistical physics (and one of my favorite derivations to discuss in class - you combine the Planck result for the energy density of a "gas" of photons in thermal equilibrium at some temperature \(T\) with a basic kinetic theory of gases result for the flux of particles out of a small hole).  It tells you that the best you can ever do is that of an ideal black body: the total power radiated away is proportional to the area of the radiator and to \(T^{4}\), with fundamental constants making up the proportionality constant and zero adjustable parameters.
A liquid droplet radiator, from this excellent site
Remember, data centers right now consume enormous amounts of power (and cooling water).  While you can use heat pumps to try to get the radiators up to well above the operating temperatures of the electronics, that increases mass and waste power, and realistically there is an upper limit on the radiator temperature below 1000 K.  An ideal black body radiator at 1000 K puts out about 57 kW per square meter, and you probably need to get rid of tens of megawatts, necessitating hundreds to thousands of square meters of radiator area.  (A quick numerical check of these radiator areas appears after this list.)  There are clever ideas on how to try to do this.  For example, in the liquid droplet radiator, you could spray a bunch of hot droplets out into space, capitalizing on their large specific surface area.  Of course, you'd need to recapture the cooled droplets, and the hot liquid needs to have a sufficiently low vapor pressure that you don't lose a lot of material.  Still, as far as I am aware, to date no one has actually deployed a large-scale (tens of kW, let alone MW, level) droplet radiator in space.

  • High end computational hardware is vulnerable to radiation damage.  There are no rad-hard GPUs.  Low earth orbit is a pretty serious radiation environment, with fluxes of high energy cosmic rays quite a bit higher than on the ground.  While there are tests going on, and astronauts are going to bring smartphones on the next Artemis mission, it's a rough environment.  Putting many thousands to millions of GPUs and huge quantities of memory in a harsh environment where they cannot be readily accessed or serviced seems unwise.  (There are also serious questions of vulnerability to attack.  Setting off a small nuclear warhead in LEO injects energetic electrons into the lower radiation belts and would be a huge mess.)
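As promised above, here's a quick numerical check of the radiator-area estimate, straight from the Stefan-Boltzmann law.  This is a minimal sketch: the 20 MW heat load is an illustrative assumption, and the radiator is treated as an ideal (emissivity 1) black body radiating from one side, so real hardware would need even more area.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w, temp_k, emissivity=1.0):
    """Area an ideal radiator needs to reject power_w watts at temp_k.

    Stefan-Boltzmann: P = emissivity * SIGMA * A * T^4,
    so A = P / (emissivity * SIGMA * T^4).
    """
    return power_w / (emissivity * SIGMA * temp_k**4)

# Illustrative 20 MW heat load at a few radiator temperatures
for T in (300, 600, 1000):
    print(f"T = {T:4d} K -> {radiator_area(20e6, T):8.0f} m^2 to reject 20 MW")
# At 1000 K an ideal black body emits SIGMA * 1000**4 ~ 57 kW/m^2,
# consistent with the number quoted above.
```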
I think we will be faaaaaaar better off in the long run if we take a fraction of the money that people want to invest in space-based data centers, and instead plow those resources into developing energy-efficient computing.  Musk has popularized the engineering sentiment "The best part is no part".  The best way to solve the problem of supplying and radiating away many GW of power for data centers is to make data centers that don't consume many GW of power.  

Sunday, February 01, 2026

What is the Aharonov-Bohm effect?

After seeing this latest extremely good video from Veritasium, and looking back through my posts, I realized that while I've referenced it indirectly, I've never explicitly talked about the Aharonov-Bohm effect.  The video is excellent, and that wikipedia page is pretty good, but maybe some people will find another angle on this to be helpful.  

Still from this video.

The ultrabrief version:  The quantum interference of charged particles like electrons can be controllably altered by tuning a magnetic field in a region that the particles never pass through.  This is weird and spooky because it's an entirely quantum mechanical effect - classical physics, where motion is governed by local forces, says that zero field = unaffected trajectories.  

In quantum mechanics, we describe the spatial distribution of particles like electrons with a wavefunction, a complex-valued quantity that one can write as an amplitude and a phase \(\varphi\), where both depend on position \(\mathbf{r}\).  The phase is important because waves can interfere.  Crudely speaking, when the crests of one wave (say \(\varphi = 0\)) line up with the troughs of another wave (\(\varphi = \pi\)) at some location, the waves interfere destructively, so the total wave at that location is zero if the amplitudes of each contribution are identical.  As quantum particles propagate through space, their phase "winds" with position \(\mathbf{r}\) like \(\mathbf{k}\cdot \mathbf{r}\), where \(\hbar \mathbf{k} = \mathbf{p}\) is the momentum.  Higher momentum = faster winding of phase = shorter wavelength.  This propagation, phase winding, and interference is the physics behind the famous two-slit experiment.  (In his great little popular book - read it if you haven't yet - Feynman described phase as a clockface attached to each particle.)  One important note:  The actual phase itself is arbitrary; it's phase differences that matter in interference experiments.  If you added an arbitrary amount \(\varphi_{0}\) to every phase, no physically measurable observables would change.

Things get trickier if the particles that move around are charged.  It was realized 150+ years ago that formal conservation of momentum becomes subtle when electric and magnetic fields are involved.  The canonical momentum that shows up in the Lagrange and Hamilton equations is \(\mathbf{p}_{c} = \mathbf{p}_{kin} + q \mathbf{A}\), where \(\mathbf{p}_{kin}\) is the kinetic momentum (the part that actually has to do with the classical velocity and which shows up in the kinetic energy), \(q\) is the charge of the particle, and \(\mathbf{A}(\mathbf{r})\) is the vector potential.

Background digression: The vector potential is very often a slippery concept for students.  We get used to the idea of a scalar potential \(\phi(\mathbf{r})\), such that the electrostatic potential energy is \(q\phi\) and the electric field is given by \(\mathbf{E} = -\nabla \phi\) if there are no magnetic fields.  Adding an arbitrary uniform offset to the scalar potential, \(\phi \rightarrow \phi + \phi_{0}\), doesn't change the electric field (and therefore forces on charged particles), because the zero that we define for energy is arbitrary (general relativity aside).  For the vector potential, \(\mathbf{B} = \nabla \times \mathbf{A}\).   This means we can add an arbitrary gradient of a scalar function to the vector potential, \(\mathbf{A} \rightarrow \mathbf{A}+ \nabla f(\mathbf{r})\), and the magnetic field won't change.  Maxwell's equations mean that \(\mathbf{E} = -\nabla \phi - \partial \mathbf{A}/\partial t\).  "Gauge freedom" means that there is more than one way to choose internally consistent definitions of \(\phi\) and \(\mathbf{A}\).
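If the gauge freedom feels slippery, a quick numerical check can make it concrete.  Below is a minimal sketch (the particular \(\mathbf{A}\) and gauge function \(f\) are arbitrary test choices, nothing canonical) verifying that \(\mathbf{B} = \nabla \times \mathbf{A}\) is untouched when you add \(\nabla f\):

```python
H = 1e-3  # finite-difference step

def curl_z(Ax, Ay, x, y):
    """z-component of curl A at (x, y), via central differences."""
    dAy_dx = (Ay(x + H, y) - Ay(x - H, y)) / (2 * H)
    dAx_dy = (Ax(x, y + H) - Ax(x, y - H)) / (2 * H)
    return dAy_dx - dAx_dy

# Symmetric gauge for a uniform field B_z = 1: A = (-y/2, x/2, 0)
Ax = lambda x, y: -y / 2
Ay = lambda x, y: x / 2

# Gauge transformation with f(x, y) = x^2 * y, so grad f = (2xy, x^2)
Ax_new = lambda x, y: Ax(x, y) + 2 * x * y
Ay_new = lambda x, y: Ay(x, y) + x**2

print(curl_z(Ax, Ay, 0.3, -0.7))          # ~1.0
print(curl_z(Ax_new, Ay_new, 0.3, -0.7))  # ~1.0: B is unchanged
```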

TL/DR main points: (1)  The vector potential can be nonzero in places where \(\mathbf{B}\) (and hence the classical Lorentz force) is zero.  (2) Because the canonical momentum becomes the operator \(-i \hbar \nabla\) in quantum mechanics and the kinetic momentum is what shows up in the kinetic energy, charged propagating particles pick up an extra phase winding given by \(\delta \varphi = (q/\hbar)\int \mathbf{A}\cdot d\mathbf{r}\) along a path.  

This is the source of the creepiness of the Aharonov-Bohm effect.  Think of two paths (see the still taken from the Veritasium video); threading magnetic flux through the small enclosed region using a solenoid will tune the intensity detected on the screen at the far right.  That field region can be made arbitrarily small and positioned anywhere inside the diamond formed by the paths, and the effect still works.  Something not mentioned in the video:  The shifting of the interference pattern is periodic in the flux through the solenoid, with a period of \(h/e\), where \(h\) is Planck's constant and \(e\) is the electronic charge.
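To make the \(h/e\) periodicity concrete, here's a toy two-path interference calculation.  It's just a sketch: equal-amplitude paths, an arbitrary overall normalization, and any flux-independent phase offset set to zero.

```python
import numpy as np

H_PLANCK = 6.62607015e-34   # Planck's constant, J s
E_CHARGE = 1.602176634e-19  # electron charge, C

FLUX_QUANTUM = H_PLANCK / E_CHARGE  # A-B period, h/e ~ 4.14e-15 Wb

def intensity(flux_wb):
    """Two-path interference intensity vs enclosed flux (relative units).

    The A-B phase difference between the two paths is 2*pi*flux/(h/e),
    so the pattern repeats every h/e of enclosed flux.
    """
    dphi = 2 * np.pi * flux_wb / FLUX_QUANTUM
    return np.cos(dphi / 2) ** 2

for n_quanta in (0.0, 0.25, 0.5, 1.0, 1.5, 2.0):
    f = n_quanta * FLUX_QUANTUM
    print(f"flux = {n_quanta:4.2f} h/e -> I = {intensity(f):.2f}")
# The intensity repeats exactly when the flux advances by one h/e.
```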

Why should you care about this?

  • As the video discusses, the A-B effect shows that the potentials are physically important quantities that affect motion, at least as much as the corresponding fields, and there are quantum consequences to this that are just absent in the classical world.
  • The A-B effect (though not with the super skinny field confinement) has been seen experimentally in many mesoscopic physics experiments (e.g., here, or here) and can be used as a means of quantifying coherence at these scales (e.g., here and here).
  • When dealing with emergent quasiparticles that might have unusual fractional charges (\(e^*\)), then A-B interferometers can have flux periodicities that are given by \(h/e^*\). (This can be subtle and tricky.)
  • Interferometry to detect potential-based phase shifts is well established.  Here's the paper mentioned in the video about a gravitational analog of the A-B effect.  (Quibblers can argue that there is no field-free region in this case, so it's not strictly speaking the A-B analog.)
Basically, the A-B effect has gone from an initially quite controversial prediction to an established piece of physics that can be used as a tool.  If you want to learn Aharonov's take on all this, please read this interesting oral history.   

Update: The always informative Steve Simon has pointed out to me a piece of history that I had not known: this effect had already been discovered a decade earlier by Ehrenberg and Siday.  Please see this arXiv paper about this.  Here is Ehrenberg and Siday's paper.  Aharonov and Bohm were unaware of it and arrived at their conclusions independently.  One lesson to take away:  Picking a revealing article title can really help your impact.

Sunday, January 25, 2026

What is superconductivity?

A friend pointed out that, while I've written many posts that have to do with superconductivity, I've never really done a concept post about it.  Here's a try, as I attempt to distract myself from so many things happening these days.

The superconducting state is a truly remarkable phase of matter that is hosted in many metals (though ironically not readily in the pure elements (Au, Ag, Cu) that are the best ordinary conductors of electricity - see here for some references).  First, some definitional/phenomenological points:

  • The superconducting state is a distinct thermodynamic phase.  In the language of phase transitions developed by Ginzburg and Landau back in the 1950s, the superconducting state has an order parameter that is nonzero, compared to the non-superconducting metal state.   When you cool down a metal and it becomes a superconductor, this really is analogous (in some ways) to when you cool down liquid water and it becomes ice, or (a better comparison) when you cool down very hot solid iron and it becomes a magnet below 770 °C.
  • In the superconducting state, at DC, current can flow with zero electrical resistance.  Experimentally, this can be checked by setting up a superconducting current loop and monitoring the current via the magnetic field it produces.  If you find that the current will decay over somewhere between \(10^5\) and \(\infty\) years, that's pretty convincing that the resistance is darn close to zero. 
  • This is not just "perfect" conduction.  If you placed a conductor in a magnetic field, turned on perfect conduction, and then tried to change the magnetic field, currents would develop that would preserve the amount of magnetic flux through the perfect conductor.  In contrast, a key signature of superconductivity is the Meissner-Ochsenfeld effect:  if superconductivity is turned on in the presence of a (sufficiently small) magnetic field, currents will develop spontaneously at the surface of the material to exclude all magnetic flux from the bulk of the superconductor.  (That is, the magnetic field from the currents will be oppositely directed to the external field and of just the right size and distribution to give \(\mathbf{B}=0\) in the bulk of the material.)  Observation of the bulk Meissner effect is among the strongest evidence for true superconductivity, much more robust than a measurement that seems to indicate zero voltage drop.  Indeed, as a friend of mine pointed out to me, a one-phrase description of a superconductor is "a perfect diamagnet".
  • There are two main types of superconductors, uncreatively termed "Type I" and "Type II".  In Type I superconductors, an external \(\mathbf{H} = \mathbf{B}/\mu_{0}\) fails to penetrate the bulk of the material until it reaches a critical field \(H_{c}\), at which point the superconducting state is suppressed completely.  In a Type II superconductor, above some lower critical field \(H_{c,1}\) magnetic flux begins to penetrate the material in the form of vortices, each of which has a non-superconducting ("normal") core.  Above an upper critical field \(H_{c,2}\), superconductivity is suppressed. 
  • Interestingly, a lot of this can be "explained" by the London Equations, which were introduced in the 1930s despite a complete lack of a viable microscopic theory of superconductivity.
  • Magnetic flux through a conventional superconducting ring (or through a vortex core) is quantized precisely in units of \(h/2e\), where \(h\) is Planck's constant and \(e\) is the electronic charge.  (Some quick numbers on this follow the list below.)
  • (It's worth noting that in magnetic fields and with AC currents, there are still electrical losses in superconductors, due in part to the motion of vortices.)
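As promised above, some quick numbers on flux quantization.  This is a back-of-the-envelope sketch; the 1 μm² loop area is just an illustrative choice.

```python
H_PLANCK = 6.62607015e-34   # Planck's constant, J s
E_CHARGE = 1.602176634e-19  # electron charge, C

PHI_0 = H_PLANCK / (2 * E_CHARGE)  # superconducting flux quantum, h/2e
print(f"h/2e = {PHI_0:.3e} Wb")    # ~2.07e-15 Wb

# Field that threads one flux quantum through an illustrative 1 um^2 loop
area_m2 = 1e-12
print(f"B for one quantum: {PHI_0 / area_m2 * 1e3:.2f} mT")  # ~2.07 mT
```

Note the factor of 2 in the denominator - that \(2e\) is the charge of a pair, a point that comes up again below.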
Physically, what is the superconducting state?  Why does it happen and why does it have the weird properties described above as well as others?  There are literally entire textbooks and semester-long courses on this, so what follows is very brief and non-authoritative.  
  • In an ordinary metal at low temperatures, neglecting e-e interactions and other complications, the electrons fill up states (because of the Pauli Principle) starting from the lowest energy up to some highest value, the Fermi energy.  (See here for some mention of this.)  Empty electronic states are available at essentially no energy cost - excitations of electrons from filled states to empty states are "gapless".
  • Electrical conduction takes place through the flow of these electronic quasiparticles.   (For more technical readers:  We can think of these quasiparticles like little wavepackets, and as each one propagates around the wavepacket accumulates a certain amount of phase.  The phases of different quasiparticles are arbitrary, but the change in the phase going around some trajectory is well defined.)
  • In a superconductor, there is some effective attractive interaction between electrons that we have thus far neglected.  In conventional superconductors, this involves lattice vibrations (as in this wikipedia description), though other attractive interactions are possible.  At sufficiently low temperatures, the ordinary metal state is unstable, and the system will spontaneously form pairs of electrons (or holes).  Those pairs then condense into a single coherent state described by an amplitude \(|\Psi|\) and a phase, \(\phi\), shared by all the pairs.  The conventional theory of this was formulated by Bardeen, Cooper, and Schrieffer in 1957.  A couple of nice lecture note presentations of this are here (courtesy Yuval Oreg) and here (courtesy Dan Arovas), if you want the technical details.  This leads to an energy gap that characterizes how much it costs to create individual quasiparticles; a quick numerical feel for the gap's size follows this list.  Conduction in a superconductor takes place through the flow of pairs.  (A clue to this is the appearance of the \(2e\) in the flux quantization.)
  • This taking on of a global phase for the pairs of electrons is a spontaneous breaking of gauge symmetry - this is discussed pedagogically for physics students here.  Understanding this led to figuring out the Anderson-Higgs mechanism, btw. 
  • The result is a state with a kind of rigidity; precisely how this leads to the phenomenology of superconductivity is not immediately obvious, to me anyway.  If someone has a link to a great description of this, please put it in the comments.  (Interestingly, Google Gemini is not too bad at discussing this.)
  • The existence of this global phase is hugely important, because it's the basis for the Josephson effect(s), which in turn has led to the basis of exquisite magnetic field sensing, all the superconducting approaches to quantum information, and the definition of the volt, etc.
  • The paired charge carriers are described by a pairing symmetry of their wave functions in real space.  In conventional BCS superconductors, each pair has no orbital angular momentum ("\(s\)-wave"), and the spins are in a singlet state.  In other superconductors, pairs can have \(l = 1\) orbital angular momentum ("\(p\)-wave", with spins in the triplet configuration), \(l = 2\) orbital angular momentum ("\(d\)-wave", with spins in a singlet again), etc.  The pairing state determines whether the energy gap is directionally uniform (\(s\)-wave) or whether there are directions ("nodes") along which the gap goes to zero.  
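As mentioned in the list above, here's a quick numerical feel for the energy gap.  This sketch uses the weak-coupling BCS result \(\Delta(0) \approx 1.764\, k_{B} T_{c}\); the \(T_{c}\) values below are approximate, and the relation shouldn't be taken seriously for strongly coupled or unconventional superconductors.

```python
KB = 1.380649e-23     # Boltzmann constant, J/K
EV = 1.602176634e-19  # joules per electron-volt

def bcs_gap_mev(tc_kelvin):
    """Weak-coupling BCS zero-temperature gap, Delta(0) ~ 1.764 kB Tc, in meV."""
    return 1.764 * KB * tc_kelvin / EV * 1e3

# Approximate transition temperatures for a few conventional superconductors
for name, tc in [("Al", 1.2), ("Nb", 9.3), ("MgB2", 39.0)]:
    print(f"{name:4s} Tc ~ {tc:5.1f} K -> Delta(0) ~ {bcs_gap_mev(tc):.2f} meV")
```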
I have necessarily left out a ton here.  Superconductivity continues to be both technologically critical and scientifically fascinating.  One major challenge in understanding the microscopic mechanisms behind particular superconductors is that the superconducting state itself is in a sense generic - many of its properties (like phase rigidity) are emergent regardless of the underlying microscopic picture, which is amazing.

One other point, added after initial posting. In quantum computing approaches, a major challenge is how to build robust effective ("logical") qubits from individual physical qubits that are not perfect (meaning that they suffer from environmental decoherence among other issues).  The phase coherence of electronic quasiparticles in ordinary metals is generally quite fragile; inelastic interactions with each other, with phonons, with impurity spins, etc. can all lead to decoherence.  However, starting from those ingredients, superconductivity shows that it is possible to construct, spontaneously, a collective state with very long-lived coherence.  I'm certain I'm not the first to wonder about whether there are lessons to be drawn here in terms of the feasibility of and approaches to quantum error correction.

Sunday, January 11, 2026

What is the Kondo effect?

The Kondo effect is a neat piece of physics, an archetype of a problem involving strong electronic correlations and entanglement, with a long and interesting history and connections to bulk materials, nanostructures, and important open problems.  

First, some stage setting.  In the late 19th century, with the development of statistical physics and the kinetic theory of gases, and the subsequent discovery of the electron by JJ Thomson, it was a natural idea to try modeling the electrons in solids as a gas, as done by Paul Drude in 1900.  Being classical, the Drude model misses a lot (If all solids contain electrons, why aren't all solids metals?  Why is the specific heat of metals orders of magnitude lower than what a classical electron gas would imply?), but it does introduce the idea of electrons as having an elastic mean free path, a typical distance traveled before scattering off something (an impurity? a defect?) into a random direction.  In the Drude picture, as \(T \rightarrow 0\), the only thing left to scatter charge carriers is disorder ("dirt"), and the resistivity of a conductor falls monotonically and approaches \(\rho_{0}\), the "residual resistivity", a constant set in part by the number of defects or impurities in the material.  In the semiclassical Sommerfeld model, and then later in the nearly free electron model, this idea survives.
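For a rough feel of the Drude picture, here's a minimal sketch.  The copper-like carrier density and room-temperature scattering time below are textbook-approximate values, not fits to data.

```python
M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # electron charge, C

def drude_resistivity(n_per_m3, tau_s, m_kg=M_E):
    """Drude resistivity, rho = m / (n e^2 tau), in ohm-meters."""
    return m_kg / (n_per_m3 * E_CHARGE**2 * tau_s)

# Copper-ish numbers: n ~ 8.5e28 m^-3, tau ~ 2.5e-14 s at room temperature
print(f"rho ~ {drude_resistivity(8.5e28, 2.5e-14):.2e} ohm-m")
# ~1.7e-8 ohm-m, close to copper's measured room-temperature resistivity
```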

Resistivity growing at low \(T\) for gold with iron impurities.
One small problem:  in the 1930s (once it was much easier to cool materials down to very low temperatures), it was noticed that in many experiments (here and here, for example) the electrical resistivity of metals did not seem to fall and then saturate at some \(\rho_{0}\).  Instead, as \(T \rightarrow 0\), \(\rho(T)\) would go through a minimum and then start increasing again, approximately like \(\delta \rho(T) \propto - \ln(T/T_{0})\), where \(T_{0}\) is some characteristic temperature scale.  This is weird and problematic, especially since the logarithm formally diverges as \(T \rightarrow 0\).   

Over time, it became clear that this phenomenon was associated with magnetic impurities, atoms that have unpaired electrons typically in \(d\) orbitals, implying that somehow the spin of the electrons was playing an important role in the scattering process.  In 1964, Jun Kondo performed the definitive perturbative treatment of this problem, getting the \(\ln T\) divergence.  
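To see how a resistance minimum emerges from this competition, here's a toy sketch: a residual term, a low-temperature phonon contribution rising as \(T^{5}\), and a Kondo term going as \(-\ln T\).  The coefficients are completely made up for illustration, not fit to any material.

```python
import numpy as np

def rho(T, rho0=1.0, a=1e-9, b=0.02, T0=300.0):
    """Toy resistivity: residual + phonon (T^5 at low T) + Kondo (-ln T)."""
    return rho0 + a * T**5 + b * np.log(T0 / T)

T = np.linspace(1.0, 40.0, 400)
T_min = T[np.argmin(rho(T))]
print(f"resistivity minimum near T ~ {T_min:.1f} K")
# Below T_min the -ln(T) term wins and rho climbs again as T -> 0.
```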

[Side note: many students learning physics are at least initially deeply uncomfortable with the idea of approximations (that many problems can't be solved analytically and exactly, so we need to take limiting cases and make controlled approximations, like series expansions).  What if a series somehow doesn't converge?  This is that situation.]

The Kondo problem is a particular example of a "quantum impurity problem", and it is a limiting case of the Anderson impurity model.  Physically, what is going on here?  A conduction electron from the host metal could sit on the impurity atom, matching up with the unpaired impurity electron.  However (much as we can often get away with ignoring it) like charges repel, and it is energetically very expensive (modeled by some "on-site" repulsive energy \(U\)) to do that.  Parking that conduction electron long-term is not allowed, but a virtual process can take place, whereby a conduction electron with spin opposite to the localized moment can (in a sense) pop on there and back off, or swap places with the localized electron.  The Pauli principle enforces this opposed spin restriction, leading to entanglement between the local electron and the conduction electron as they form a singlet.  Moreover, this process generally involves conduction electrons at the Fermi surface of the metal, so it is a strongly interacting many-body problem.  As the temperature is reduced, this process becomes increasingly important, so that the impurity's scattering cross section for conduction electrons grows as \(T\) falls, causing the resistivity increase.

Top: Cartoon of the Kondo scattering process. Bottom:
Ground state is a many-body singlet between the local
moment and the conduction electrons.

The eventual \(T = 0\) ground state of this system is a many-body singlet, with the localized spin entangled with a "Kondo cloud" of conduction electrons.  The roughly \(\ln T\) resistivity correction rolls over and saturates.   There ends up being a sharp peak (resonance) in the electronic density of states right at the Fermi energy.  Interestingly, this problem actually can be solved exactly and analytically (!), as was done by Natan Andrei in this paper in 1980 and reviewed here.  

This might seem to be the end of the story, but the Kondo problem has a long reach!  With the development of the scanning tunneling microscope, it became possible to see Kondo resonances associated with individual magnetic impurities (see here).  In semiconductor quantum dot devices, if the little dot has an odd number of electrons, then it can form a Kondo resonance that spans from the source electrode through the dot and into the drain electrode.  This leads to a peak in the conductance that grows and saturates as \(T \rightarrow 0\) because it involves forward scattering.  (See here and here).  The same can happen in single-molecule transistors (see here, here, here, and a review here).  Zero-bias peaks in the conductance from Kondo-ish physics can be a confounding effect when looking for other physics.

Of course, one can also have a material where there isn't a small sprinkling of magnetic impurities, but a regular lattice of spin-hosting atoms as well as conduction electrons.  This can lead to heavy fermion systems, or Kondo insulators, and more exotic situations.   

The depth of physics that can come out of such simple ingredients is one reason why the physics of materials is so interesting.  

Sunday, January 04, 2026

Updated: CM/nano primer - 2026 edition

This is a compilation of posts related to some basic concepts of the physics of materials and nanoscale physics.  I realized the other day that I hadn't updated this since 2019, and therefore a substantial audience may not have seen these.  Wikipedia's physics entries have improved greatly over the years, but hopefully these are a complement that's useful to students and maybe some science writers.  Please let me know if there are other topics that you think would be important to include.  

What is temperature?
What is chemical potential?
What is mass?
Fundamental units and condensed matter

What are quasiparticles?
Quasiparticles and what is "real"
What is effective mass?
What is a phonon?
What is a plasmon?
What are magnons?
What are skyrmions?
What are excitons?
What is quantum coherence?
What are universal conductance fluctuations?
What is a quantum point contact?  What is quantized conductance?
What is tunneling?

What are steric interactions?
(effectively) What is the normal force?
What is a flat band and why might you care? (example: Kagome lattice)
What is the Kondo effect?

What is a crystal?

Saturday, January 03, 2026

What are dislocations?

How do crystalline materials deform?  When you try to shear or stretch a crystalline solid, in the elastic regime the atoms just slightly readjust their positions (at right).  The "spring constant" that determines the amount of deformation originates from the chemical bonds - how and to what extent the electrons are shared between the neighboring atoms.  In this elastic regime, if the applied stress is removed, the atoms return to their original positions.  Now imagine cranking up the applied stress.  In the "brittle" limit, eventually bonds rupture and the material fractures abruptly in a runaway process.  (You may never have thought about this, but crack propagation is a form of mechanochemistry, in that bonds are broken and other chemical processes then have to take place to make up for those changes.) 

In many materials, especially metals, rather than abruptly ripping apart, the material can deform plastically, so that even when the external stress is removed, the atoms remain displaced.  The material has been deformed "irreversibly", meaning that the microscopic bonding of at least some of the atoms has been modified.  The mechanism here is the presence and propagation of defects in the crystal stacking called dislocations, the existence of which was deduced back in the 1930s, when people first came to appreciate that metals are generally far easier to deform than expected from a simple calculation assuming perfect bonding.

(a) Edge dislocation, where the copper-colored spheres are an "extra" plane of atoms.  (b) A (red) path enclosing the edge dislocation; the Burgers vector is shown with the black arrow.  (c) A screw dislocation.

Dislocations are topological line defects (as opposed to point defects like vacancies, impurities, or interstitials), characterized by a vector along the line of the defect, and a Burgers vector.  Imagine taking some number of lattice site steps going around a closed loop in a crystal plane of the material.   For example, in the \(x-y\) plane, you go 4 sites in the \(+x\) direction, 4 sites in the \(+y\) direction, 4 sites in the \(-x\) direction, and 4 sites in the \(-y\) direction.  If you ended up back where you started, then you have not enclosed a dislocation.  If you end up shifted sideways in the plane relative to your starting point, your path has enclosed an edge dislocation (see (a) and (b) to the right).  The Burgers vector connects the endpoint of the path with the beginning point of the path.  An edge dislocation is the end of an "extra" plane of atoms in a crystal (the orange atoms in (a)).  If you go around the path in the \(x-y\) plane and end up shifted out of the initial plane (so that the Burgers vector is pointing along \(z\), parallel to the dislocation line), your path enclosed a screw dislocation (see (c) in the figure).   Edge and screw dislocations are the two major classes of mobile dislocations.  There are also mixed dislocations, in which the dislocation line meanders around, so that displacements can look screw-like along some orientations of the line and edge-like along others.  (Here is some nice educational material on this, albeit dated in its web presentation.)  
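Here's a toy bit of bookkeeping for the Burgers-circuit idea just described.  It's only a sketch, not a lattice simulation: each circuit is a list of site-to-site steps (in lattice units) as actually executed in the possibly defected crystal, and the closure failure gives the Burgers vector.

```python
import numpy as np

def burgers_vector(steps):
    """Closure failure of a lattice circuit, pointing from the path's
    endpoint back to its start: zero in a perfect region, equal to the
    Burgers vector if the circuit encloses a dislocation line."""
    return -np.sum(np.asarray(steps), axis=0)

# Perfect crystal: 4 steps +x, 4 steps +y, 4 steps -x, 4 steps -y closes.
perfect = [(1, 0)] * 4 + [(0, 1)] * 4 + [(-1, 0)] * 4 + [(0, -1)] * 4

# Around an edge dislocation, the "extra" half plane means the same
# site-counting recipe effectively takes one extra step on one leg
# (illustrative numbers, not derived from a real lattice).
around_edge = [(1, 0)] * 4 + [(0, 1)] * 4 + [(-1, 0)] * 5 + [(0, -1)] * 4

print(burgers_vector(perfect))      # [0 0]: no dislocation enclosed
print(burgers_vector(around_edge))  # [1 0]: Burgers vector along +x
```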

A few key points:
  • Mobile dislocations are the key to plastic deformation and the "low" yield strength of ductile materials compared to the ideal situation.  Edge dislocations propagate sideways along their Burgers vectors when shear stresses are applied to the plane in which the dislocation lies.  This is analogous to moving a rug across the floor by propagating a lump rather than trying to shift the entire rug at once.  Shearing the material by propagating an edge dislocation involves breaking and reforming bonds along the line, which is much cheaper energetically than breaking all the bonds in the shear plane at once.  To picture how a screw dislocation propagates in the presence of shear, imagine trying to tear a stack of paper.  (I was taught to picture tearing a phone book, which shows how ancient I am.)
  • A dislocation is a great example of an emergent object.  Materials scientists and mechanical engineers interested in this talk about dislocations as entities that have positions, can move, and can interact.  One could describe everything in terms of the positions of the individual atoms in the solid, but it is often much more compact and helpful to think about dislocations as objects unto themselves. 
  • Dislocations can multiply under deformation.  Here is a low-tech but very clear video about one way this can happen, the Frank-Read source (more discussion here, and here is the original theory paper by Frank and Read).  In case you think this is just some hand-wavy theoretical idea, here is a video from a transmission electron microscope showing one of these sources in action.
  • Dislocations are associated with local strain (and therefore stress). This is easiest for me to see in the end-on look at the edge dislocation in (a), where clearly there is compressive strain below where the "extra" orange plane of atoms starts, and tensile strain above there where the lattice is spreading to make room for that plane.   Because of these strain fields and the topological nature of dislocations, they can tangle with each other and hinder their propagation.  When this happens, a material becomes more difficult to deform plastically, a phenomenon called work hardening that you have seen if you've ever tried to break a paperclip by bending the metal back and forth.
  • Controlling the nucleation and pinning of dislocations is key to the engineering of tough, strong materials.  This paper is an example of this, where in a particular alloy, crystal rotation makes it possible to accommodate a lot of strain from dislocations in "kink bands". 

Friday, January 02, 2026

EUV lithography - a couple of quick links

Welcome to the new year!

I've written previously (see here, item #3) about the extreme ultraviolet lithography tools used in modern computer chip fabrication.   These machines are incredible, the size of a railway car, and cost hundreds of millions of dollars each.  Veritasium has put out a new video about these, which I will try to embed here.  Characteristically, it's excellent, and I wanted to bring it to your attention.


It remains an interesting question whether there could be a way of achieving this kind of EUV performance through an alternative path.  As I'd said a year ago, if you could do this for only $50M per machine, it would be hugely impactful.  

A related news item:  There are claims that a Chinese effort in Shenzhen has a prototype EUV machine now (that fills an entire factory floor, so not exactly compact or cheap).  It will be a fascinating industrial race if multiple players are able to make the capital investments needed to compete in this area.