
Friday, February 13, 2026

Updates: The US government and STEM research

Now that we're 6 weeks into the new year, I think it's worth doing an incomplete roundup of where we stand on US federal support of STEM research.  Feel free to skip this post if you don't want to read about this.  
  • Appropriators in Congress largely went against the FY26 presidential budget request, and the various spending bills by and large funded most US science agencies at slightly below level funding. A physics-oriented take is here. The devil is in the details.  The AAAS federal R&D dashboard lets you explore this at a finer level.  Nature has an interactive widget that visualizes what has been cut and what remains.
  • Bear in mind, that was just year 1 of the present administration.  All of the effort, all of the work pushing back against the proposed, absolutely draconian, agency-destroying cuts?  That will likely have to be done again this year, and in subsequent years, if the administration keeps putting forward enormously slashed budget requests.
  • There is an issue of Science whose entire news section is devoted to how the past year has changed science funding and the research pipeline in the US.
  • In NSF news, the rate of awards remains very low, though there is almost certainly a major delay because of the lateness of the budget, coping with reduced staffing levels, and restructuring now that Divisions no longer exist.  How greater emphasis on specific strategic priorities (beyond what is in the program calls) will affect operations remains unclear, at least to me.
  • Also, some NSF graduate research fellowship applications, especially in the life sciences, seem to be getting kicked back without review - see here (sorry about the paywall).  This appears to affect whole research areas, though applicants have been given no information about it (that lack of information flow is perhaps unsurprising).  
  • I'm not well-immersed in the world of NIH and the FDA, but I know things are bad.  Fifteen of the 27 NIH institutes have vacant or acting director positions.  The FDA declined to even take the application for Moderna's mRNA flu vaccine, a move unpopular even with the Wall Street Journal.  Moderna has also decided to shelve promising vaccines for a number of diseases because they no longer think the US will be a market for them, and it practically seems like someone wants to bring back polio.  (Note:   I will not have the comments become a back-and-forth about vaccines.)
  • The back and forth about indirect cost rates continues, along with the relevant court cases.  The recent appropriations include language to prevent sudden changes in rates.  The FAIR model has not yet been adopted.
  • Concerns still loom about impoundment.
  • There has been an exodus of technically trained PhDs from government service.
  • I could go on.  I know I've left out critical areas, and I haven't talked about DOE or NASA or DOD or EPA or NOAA explicitly.  
Honest people can have discussions about the right balance of federal vs state vs industrial vs philanthropic support for research.  There are no easy answers at present.  For those who think that robust public investment in science and engineering research is critical to societal good, economic competitiveness, and security, we need to keep pushing and not let fatigue or fatalism win the day.  


  

Sunday, February 08, 2026

Data centers in space make no sense to me

There seems to be a huge push lately in the tech world for the idea of placing data centers in space.  This is not just coming from Musk via the merging of SpaceX and xAI.  Google has some effort along these lines.  NVIDIA is thinking about it.  TED talks are being given by startup people in San Francisco on this topic, so you know we've reached some well-defined hype level.  Somehow the idea has enough traction that even the PRC is leaning in this direction.  The arguments seem to be that (1) there is abundant solar power in space; (2) environmental impact on the earth will be less, with no competition for local electricity, water, or real estate; (3) space is "cold", so cooling these things should be doable; (4) it's cool and sounds very sci-fi/high frontier.  

At present (or near-future) levels of technology, as far as I can tell this idea makes no sense.  I will talk about physics reasons here, though there are also pragmatic economic reasons why this seems crazy.  I've written before that I think some of the AI/data center evangelists are falling victim to magical thinking, because they come from the software world and don't in their heart of hearts appreciate that there are actual hardware constraints on things like chip manufacturing and energy production.  

Others have written about this - see here for example.  The biggest physics challenges with this idea (beyond lofting millions of kg of cargo into orbit) are:
  • While the cosmic microwave background is cold, cooling things in space is difficult, because vacuum is an excellent thermal insulator.  On the ground, you can use conduction and convection to get rid of waste heat.  In space, your only option (beyond throwing mass overboard, which is not readily replenishable) is radiative cooling.  The key physics here is the Stefan-Boltzmann law, which is a triumph of statistical physics (and one of my favorite derivations to discuss in class - you combine the Planck result for the energy density of a "gas" of photons in thermal equilibrium at some temperature \(T\) with a basic kinetic theory of gases result for the flux of particles out of a small hole).  It tells you that the best you can ever do is an ideal black body, for which the total power radiated away is \(P = \sigma A T^{4}\), proportional to the radiator area \(A\) and to \(T^{4}\), with the proportionality constant \(\sigma \approx 5.67 \times 10^{-8}\) W m\(^{-2}\) K\(^{-4}\) made up entirely of fundamental constants, with zero adjustable parameters.  
A liquid droplet radiator, from this excellent site
Remember, data centers right now consume enormous amounts of power (and cooling water).  While you can use heat pumps to try to get the radiators up to well above the operating temperature of the electronics, that increases mass and waste power, and realistically there is an upper limit on the radiator temperature below 1000 K.  An ideal black body radiator at 1000 K puts out about 57 kW per square meter, and you probably need to get rid of tens of megawatts, necessitating hundreds to thousands of square meters of radiator area (a minimal numerical sketch of this estimate follows after this list).  There are clever ideas about how to try to do this.  For example, in the liquid droplet radiator, you spray a stream of hot droplets out into space, capitalizing on their large specific surface area.  Of course, you'd need to recapture the cooled droplets, and the hot liquid needs a sufficiently low vapor pressure that you don't lose a lot of material.  Still, as far as I am aware, no one has yet deployed a large-scale (ten kW, let alone MW, level) droplet radiator in space.  

  • High end computational hardware is vulnerable to radiation damage.  There are no rad-hard GPUs.  Low earth orbit is a pretty serious radiation environment, with fluxes of high energy cosmic rays quite a bit higher than on the ground.  While there are tests going on, and astronauts are going to bring smartphones on the next Artemis mission, it's rough.  Putting many thousands to millions of GPUs and huge quantities of memory in a harsh environment where they cannot be readily accessed or serviced seems unwise.  (There are also serious questions of vulnerability to attack.  Setting off a small nuclear warhead in LEO injects energetic electrons into the lower radiation belts and would be a huge mess.)
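Here is the back-of-the-envelope radiator sizing behind those numbers, as a minimal sketch using the Stefan-Boltzmann law (the waste-heat loads and the 1000 K radiator temperature are illustrative assumptions, not figures from any actual proposal):

```python
# Back-of-the-envelope radiator sizing via the Stefan-Boltzmann law.
# All inputs below are illustrative assumptions, not numbers from any real design.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def ideal_radiator_area(waste_heat_w, radiator_temp_k, emissivity=1.0):
    """Area (m^2) an ideal gray-body radiator needs to reject waste_heat_w watts."""
    flux_w_per_m2 = emissivity * SIGMA * radiator_temp_k**4  # radiated power per unit area
    return waste_heat_w / flux_w_per_m2

# An ideal black body at 1000 K radiates about 57 kW per square meter:
print(f"{SIGMA * 1000.0**4 / 1e3:.1f} kW/m^2 at 1000 K")

# Hypothetical data-center-scale waste heat loads:
for power_mw in (10, 30, 100):
    area_m2 = ideal_radiator_area(power_mw * 1e6, radiator_temp_k=1000.0)
    print(f"{power_mw:4d} MW of waste heat -> about {area_m2:6.0f} m^2 of radiator at 1000 K")
```

Real radiators fall short of this ideal limit (emissivity below one, view-factor losses, sunlight falling on the radiator), so these areas are optimistic lower bounds.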
I think we will be faaaaaaar better off in the long run if we take a fraction of the money that people want to invest in space-based data centers, and instead plow those resources into developing energy-efficient computing.  Musk has popularized the engineering sentiment "The best part is no part".  The best way to solve the problem of supplying and radiating away many GW of power for data centers is to make data centers that don't consume many GW of power.  

Sunday, February 01, 2026

What is the Aharonov-Bohm effect?

After seeing this latest extremely good video from Veritasium, and looking back through my posts, I realized that while I've referenced it indirectly, I've never explicitly talked about the Aharonov-Bohm effect.  The video is excellent, and that wikipedia page is pretty good, but maybe some people will find another angle on this to be helpful.  

Still from this video.

The ultrabrief version:  The quantum interference of charged particles like electrons can be controllably altered by tuning a magnetic field in a region that the particles never pass through.  This is weird and spooky because it's an entirely quantum mechanical effect - classical physics, where motion is governed by local forces, says that zero field = unaffected trajectories.  

In quantum mechanics, we describe the spatial distribution of particles like electrons with a wavefunction, a complex-valued quantity that one can write as an amplitude and a phase \(\varphi\), where both depend on position \(\mathbf{r}\).  The phase is important because waves can interfere.  Crudely speaking, when the crests of one wave (say \(\varphi = 0\)) line up with the troughs of another wave (\(\varphi = \pi\)) at some location, the waves interfere destructively, so the total wave at that location is zero if the amplitudes of each contribution are identical.   As quantum particles propagate through space, their phase "winds" with distance \(\mathbf{r}\) like \(\mathbf{k}\cdot \mathbf{r}\), where \(\hbar \mathbf{k} = \mathbf{p}\) is the momentum.  Higher momentum = faster winding of phase = shorter wavelength.  This propagation, phase winding, and interference is the physics behind the famous two-slit experiment.  (In his great little popular book - read it if you haven't yet - Feynman described phase as a clockface attached to each particle.)  One important note:  The actual phase itself is arbitrary; it's phase differences that matter in interference experiments.  If you added an arbitrary amount \(\varphi_{0}\) to every phase, no physically measurable observables would change. 
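As a concrete illustration of that phase arithmetic, here is a minimal Python sketch of two-path interference (the amplitudes and phase values are arbitrary choices for illustration):

```python
import numpy as np

# Two equal-amplitude paths arriving at the same point on a screen; the only
# physically meaningful knob is the phase difference between them.
def two_path_intensity(delta_phi):
    """Detected intensity |psi_1 + psi_2|^2 for two unit-amplitude contributions."""
    psi_1 = np.exp(1j * 0.0)        # reference path; setting its phase to 0 is an arbitrary choice
    psi_2 = np.exp(1j * delta_phi)  # second path, offset in phase by delta_phi
    return np.abs(psi_1 + psi_2) ** 2

print(two_path_intensity(0.0))     # crests line up: constructive interference, intensity 4
print(two_path_intensity(np.pi))   # crest meets trough: destructive interference, intensity 0

# Adding the same overall phase phi_0 to both paths changes nothing observable:
phi_0 = 1.234
shifted = np.exp(1j * phi_0) * (np.exp(1j * 0.0) + np.exp(1j * np.pi))
print(np.abs(shifted) ** 2)        # still 0 -- only phase *differences* matter
```

The last line is the point made above: shifting every phase by the same amount leaves every measurable intensity untouched.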

Things get trickier if the particles that move around are charged.  It was realized 150+ years ago that formal conservation of momentum becomes subtle if we consider electric and magnetic fields.  The canonical momentum that shows up in the Lagrange and Hamilton equations is \(\mathbf{p}_{c} = \mathbf{p}_{kin} + q \mathbf{A}\), where \(\mathbf{p}_{kin}\) is the kinetic momentum (the part that actually has to do with the classical velocity and which shows up in the kinetic energy), \(q\) is the charge of the particle, and \(\mathbf{A}(\mathbf{r})\) is the vector potential.  

Background digression: The vector potential is very often a slippery concept for students.  We get used to the idea of a scalar potential \(\phi(\mathbf{r})\), such that the electrostatic potential energy is \(q\phi\) and the electric field is given by \(\mathbf{E} = -\nabla \phi\) if there are no magnetic fields.  Adding an arbitrary uniform offset to the scalar potential, \(\phi \rightarrow \phi + \phi_{0}\), doesn't change the electric field (and therefore forces on charged particles), because the zero that we define for energy is arbitrary (general relativity aside).  For the vector potential, \(\mathbf{B} = \nabla \times \mathbf{A}\).   This means we can add an arbitrary gradient of a scalar function to the vector potential, \(\mathbf{A} \rightarrow \mathbf{A}+ \nabla f(\mathbf{r})\), and the magnetic field won't change, because the curl of a gradient is identically zero.  Maxwell's equations mean that \(\mathbf{E} = -\nabla \phi - \partial \mathbf{A}/\partial t\).  "Gauge freedom" means that there is more than one way to choose internally consistent definitions of \(\phi\) and \(\mathbf{A}\).
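A quick symbolic check of that gauge freedom, as a minimal sketch (the particular \(\mathbf{A}\) and \(f\) below are arbitrary illustrative choices): adding \(\nabla f\) to the vector potential leaves \(\mathbf{B} = \nabla \times \mathbf{A}\) completely unchanged.

```python
# Symbolic check that B = curl(A) is unchanged by A -> A + grad(f).
# The specific A (symmetric gauge for a uniform field) and f are arbitrary choices.
from sympy import symbols
from sympy.vector import CoordSys3D, curl, gradient

R = CoordSys3D('R')
B0 = symbols('B0')

# Vector potential giving a uniform field B0 along z: A = (B0/2)(-y, x, 0)
A = (B0 / 2) * (-R.y * R.i + R.x * R.j)

# An arbitrary gauge function f(r); any smooth scalar works here
f = R.x**2 * R.y + R.z * R.y**3

print(curl(A))                 # B0*R.k  -- the uniform field
print(curl(A + gradient(f)))   # B0*R.k  -- identical, since curl(grad f) = 0
```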

TL/DR main points: (1)  The vector potential can be nonzero in places where \(\mathbf{B}\) (and hence the classical Lorentz force) is zero.  (2) Because the canonical momentum becomes the operator \(-i \hbar \nabla\) in quantum mechanics and the kinetic momentum is what shows up in the kinetic energy, charged propagating particles pick up an extra phase winding given by \(\delta \varphi = (q/\hbar)\int \mathbf{A}\cdot d\mathbf{r}\) along a path.  

This is the source of the creepiness of the Aharonov-Bohm effect.  Think of two paths (see the still taken from the Veritasium video above); threading magnetic flux through just the little region between the paths using a solenoid will tune the intensity detected on the screen on the far right.  That field region can be made arbitrarily small and positioned anywhere inside the diamond formed by the paths, and the effect still works.  Something not mentioned in the video:  The shifting of the interference pattern is periodic in the flux through the solenoid, with a period of \(h/e\), where \(h\) is Planck's constant and \(e\) is the electronic charge.  (By Stokes' theorem, the phase difference between the two paths is \((e/\hbar)\oint \mathbf{A}\cdot d\mathbf{r} = e\Phi/\hbar\), where \(\Phi\) is the enclosed flux, and that advances by \(2\pi\) each time \(\Phi\) changes by \(h/e\).)  
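To make that periodicity concrete, here is a minimal numerical sketch (unit-amplitude paths, no attempt to model any particular experiment): the two-path intensity repeats every time the enclosed flux changes by one flux quantum \(h/e\).

```python
import numpy as np

H = 6.62607015e-34    # Planck's constant, J*s
E = 1.602176634e-19   # magnitude of the electron charge, C
FLUX_QUANTUM = H / E  # h/e, the Aharonov-Bohm period, ~4.14e-15 Wb

def ab_intensity(flux_wb, phi_0=0.0):
    """Two-path interference intensity vs. magnetic flux enclosed between the paths.

    phi_0 is any flux-independent phase difference (path length mismatch, etc.).
    """
    delta_phi = 2.0 * np.pi * flux_wb / FLUX_QUANTUM + phi_0
    return np.abs(1.0 + np.exp(1j * delta_phi)) ** 2

for n_quanta in (0.0, 0.5, 1.0, 1.5, 2.0):
    intensity = ab_intensity(n_quanta * FLUX_QUANTUM)
    print(f"enclosed flux = {n_quanta:3.1f} h/e  ->  intensity {intensity:4.2f}")
# The pattern repeats every h/e of enclosed flux: 4.00, 0.00, 4.00, 0.00, 4.00
```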

Why should you care about this?

  • As the video discusses, the A-B effect shows that the potentials are physically important quantities that affect motion, at least as much as the corresponding fields, and there are quantum consequences to this that are just absent in the classical world.
  • The A-B effect (though not with the super skinny field confinement) has been seen experimentally in many mesoscopic physics experiments (e.g., here, or here) and can be used as a means of quantifying coherence at these scales (e.g., here and here).
  • When dealing with emergent quasiparticles that might have unusual fractional charges (\(e^*\)), then A-B interferometers can have flux periodicities that are given by \(h/e^*\). (This can be subtle and tricky.)
  • Interferometry to detect potential-based phase shifts is well established.  Here's the paper mentioned in the video about a gravitational analog of the A-B effect.  (Quibblers can argue that there is no field-free region in this case, so it's not strictly speaking the A-B analog.)
Basically, the A-B effect has gone from an initially quite controversial prediction to an established piece of physics that can be used as a tool.  If you want to learn Aharonov's take on all this, please read this interesting oral history.   

Update: The always informative Steve Simon has pointed out to me a piece of history that I had not known: this effect had already been discovered a decade earlier by Ehrenberg and Siday.  Please see this arXiv paper about this.  Here is Ehrenberg and Siday's paper.  Aharonov and Bohm were unaware of it and arrived at their conclusions independently.  One lesson to take away:  Picking a revealing article title can really help your impact.