Saturday, July 20, 2024

The physics of squeaky shoes

In these unsettling and trying times, I wanted to write about the physics of a challenge I'm facing in my professional life: super squeaky shoes.  When I wear a particularly comfortable pair of shoes at work and walk down some hallways in my building (but not all of them), my shoes squeak very loudly with every step.  How and why does this happen, physically?  

The shoes in question.

To understand this, we need to talk a bit about friction, the sideways interfacial force between two surfaces when one surface is sheared (or one tries to shear it) with respect to the other.  (Tribology is the study of friction, btw.)  In introductory physics we teach some (empirical) "laws" of friction, described in detail on the wikipedia page linked above as well as here:

  1.  For static friction (no actual sliding of the surfaces relative to each other), the frictional force \(F_{f} \le \mu_{s}N\), where \(\mu_{s}\) is the "coefficient of static friction" and \(N\) is the normal force (pushing the two surfaces together).  The force is directed in the plane and takes on the magnitude needed so that no sliding happens, up to its maximum value, at which point the surfaces start slipping relative to each other.
  2. For sliding or kinetic friction, \(F_{f} = \mu_{k}N\), where \(\mu_{k}\) is the coefficient of kinetic or sliding friction, and the force is directed in the plane to oppose the relative sliding motion.  The friction coefficients depend on the particular materials and their surface conditions.
  3. The friction forces are independent of the apparent contact area between the surfaces.  
  4. The kinetic friction force is independent of the relative sliding speed between the surfaces.
These "laws", especially (3) and (4), are truly weird once we know a bit more about physics, and I discuss this a little in my textbook.  The macroscopic friction force is emergent, meaning that it is a consequence of the materials being made up of many constituent particles interacting.  It's not a conservative force, in that energy dissipated through the sliding friction force doing work is "lost" from the macroscopic movement of the sliding objects and ends up in the microscopic vibrational motion (and electronic distributions, if the objects are metals).  See here for more discussion of friction laws.

Shoe squeaking happens because of what is called "stick-slip" motion.  When I put my weight on my right shoe, the rubber sole of the shoe deforms, and elastic forces (like a compressed spring) push the rubber to spread out, favoring sliding of the rubber at the rubber-floor interface.  At some point, the local maximum static friction force is exceeded and the rubber begins to slide relative to the floor.  That lets the rubber "uncompress" some, so that the spring-like elastic forces are reduced, and if they fall back below \(\mu_{s}N\), that bit of sole will stick on the surface again.  A similar situation is shown in this model from Wolfram, looking at a mass (attached to an anchored spring) interacting with a conveyor belt.  If this start/stop cyclic motion happens at acoustic frequencies (in the kHz range), it sounds like a squeak, because the start-stop motion excites sound waves in the air (and the solid surfaces).  This stick-slip phenomenon is also why brakes on cars and bikes squeal, why hinges on doors in spooky houses creak, and why that one board in your floor makes that weird noise.  It's also used in various piezoelectric actuators.
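If you want to play with this, below is a minimal numerical sketch of that belt-and-spring toy model (not of the actual shoe/floor system); all the parameter values are invented for illustration.

```python
# Stick-slip of a mass on a moving belt, tethered to a wall by a spring.
import math

m, k = 0.1, 200.0            # mass (kg) and spring constant (N/m)
mu_s, mu_k = 0.6, 0.4        # static > kinetic friction: the essential ingredient
N = m * 9.8                  # normal force (N)
v_belt = 0.05                # belt speed (m/s)

dt, t_total = 1e-5, 4.0
x, v, sticking = 0.0, v_belt, True
slip_times = []

t = 0.0
while t < t_total:
    spring = -k * x
    if sticking:
        v = v_belt                    # static friction pins the mass to the belt...
        if abs(spring) > mu_s * N:    # ...until the spring force exceeds mu_s N
            sticking = False
            slip_times.append(t)
    else:
        # Kinetic friction: fixed magnitude mu_k N, directed against the sliding.
        friction = math.copysign(mu_k * N, v_belt - v) if v != v_belt else 0.0
        v += (spring + friction) / m * dt
        # Re-stick when the mass catches the belt and the spring has relaxed enough.
        if abs(v - v_belt) < 1e-4 and abs(spring) <= mu_s * N:
            sticking = True
    x += v * dt
    t += dt

if len(slip_times) > 1:
    rate = (len(slip_times) - 1) / (slip_times[-1] - slip_times[0])
    print(f"{len(slip_times)} slip events, repeating at ~{rate:.1f} Hz")
```

With these toy numbers the cycle repeats at a few Hz; a stiffer contact and a lighter bit of rubber push the same sawtooth motion up into the audible kHz range.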

Macroscopic friction emerges from a zillion microscopic interactions and is affected by the chemical makeup of the surfaces, their morphology and roughness, any adsorbed layers of moisture or contaminants (remember: every surface around you right now is coated in a few molecular layers of water and hydrocarbon contamination), and van der Waals forces, among other things.  The reason my shoes squeak in some hallways but not others has to do with how the floors have been cleaned.  I could stop the squeaking by altering the bottom surface of my soles, though I wouldn't want to use a lubricant so effective that it seriously lowers \(\mu_{s}\) and makes me slip.  

Friction is another example of an emergent phenomenon that is everywhere around us, is of enormous technological and practical importance, and shows some remarkable universality of response.  This kind of emergence is at the heart of the physics of materials, and trying to predict friction and squeaky shoes starting from elementary particle physics is just not doable. 


Sunday, July 14, 2024

Brief items - light-driven diamagnetism, nuclear recoil, spin transport in VO2

Real life continues to make itself felt in various ways this summer (and that's not even an allusion to political madness), but here are three papers (two from others and a self-indulgent plug for our work) you might find interesting.

  • There has been a lot of work in recent years particularly by the group of Andrea Cavalleri, in which they use infrared light to pump particular vibrational modes in copper oxide superconductors (and other materials) (e.g. here).  There are long-standing correlations between the critical temperature for superconductivity, \(T_{c}\), and certain bond angles in the cuprates.  Broadly speaking, using time-resolved spectroscopy, measurements of the optical conductivity in these pumped systems show superconductor-like forms as a function of energy even well above the equilibrium \(T_{c}\), making it tempting to argue that the driven systems are showing nonequilibrium superconductivity.  At the same time, there has been a lot of interest in looking for other signatures, such as signs of the way superconductors expel magnetic flux through the famous Meissner effect.  In this recent result (arXiv here, Nature here), magneto-optic measurements in this same driven regime show signs of field build-up around the perimeter of the driven cuprate material in a magnetic field, as would be expected from Meissner-like flux expulsion.  I haven't had time to read this in detail, but it looks quite exciting.  
  • Optical trapping of nanoparticles is a very useful tool, and with modern techniques it is possible to measure the position and response of individual trapped particles to high precision (see here and here).  In this recent paper, the group of David Moore at Yale has been able to observe the recoil of such a particle due to the decay of a single atomic nucleus (which spits out an energetic alpha particle).  As an experimentalist, I find this extremely impressive, in that they are measuring the kick given to a nanoparticle a trillion times more massive than the ejected helium nucleus.  
  • From our group, we have published a lengthy study (arXiv here, Phys Rev B here) of local/longitudinal spin Seebeck response in VO2, a material with an insulating state that is thought to be magnetically inert.  This corroborates our earlier work, discussed here.  In brief, in ideal low-T VO2, the vanadium atoms are paired up into dimers, and the expectation is that the unpaired 3d electrons on those atoms form singlets with zero net angular momentum.  The resulting material would then not be magnetically interesting (though it could support triplet excitations called triplons).  Surprisingly, at low temperatures we find a robust spin Seebeck response, comparable to what is observed in ordered insulating magnets like yttrium iron garnet.  It seems to have the wrong sign to be from triplons, and it doesn't seem possible to explain the details using a purely interfacial model.  I think this is intriguing, and I hope other people take notice.
Hoping for more time to write as the summer progresses.  Suggestions for topics are always welcome, though I may not be able to get to everything.

Saturday, July 06, 2024

What is a Wigner crystal?

Last week I was at the every-2-years Gordon Research Conference on Correlated Electron Systems at lovely Mt. Holyoke.  It was very fun, but one key aspect of the culture of the GRCs is that attendees are not supposed to post about them on social media, thus encouraging presenters to show results that have not yet been published.  So, no roundup from me, except to say that I think I learned a lot.

The topic of Wigner crystals came up, and I realized that (at least according to google) I have not really written about these, and now seems to be a good time.

First, let's talk about crystals in general.  If you bring together an ensemble of objects (let's assume they're identical for now) and throw in either some long-range attraction or an overall confining constraint, plus a repulsive interaction that is effective at short range, you tend to get formation of a crystal, if an object's kinetic energy is sufficiently small compared to the interactions.  A couple of my favorite examples of this are crystals from drought balls and bubble rafts.  As the kinetic energy (usually parametrized by a temperature when we're talking about atoms and molecules as the objects) is reduced, the system crystallizes, spontaneously breaking continuous translational and rotational symmetry, leading to configurations with discrete translational and rotational symmetry.  With charged colloidal particles as building blocks, the attractive interaction is electrostatic, because the particles have different charges, and they have the usual "hard core repulsion".  The result can be all kinds of cool colloidal crystal structures.

In 1934, Eugene Wigner considered whether electrons themselves could form a crystal, if the electron-electron repulsion is sufficiently large compared to their kinetic energy.  For a cold quantum mechanical electron gas, where the kinetic energy is related to the Fermi energy of the electrons, the essential dimensionless parameter here is \(r_{s}\), the Wigner-Seitz radius: the typical interelectron spacing measured in units of the effective Bohr radius.  Serious calculations have shown that you should get a Wigner crystal for electrons in 2D if \(r_{s} \gtrsim 31\).  (You can also have a "classical" Wigner crystal, when the electron kinetic energy is set by the temperature rather than quantum degeneracy; an example of this situation is electrons floating on the surface of liquid helium.)
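For a sense of scale, here is a back-of-the-envelope sketch of the 2D electron density at which \(r_{s}\) reaches 31; the GaAs-like dielectric constant and effective mass are assumptions for illustration.

```python
# Critical density for r_s ~ 31 in 2D, where n = 1/(pi (r_s a_B*)^2).
import math

a_B = 0.0529e-9                   # hydrogen Bohr radius (m)
eps_r, m_ratio = 12.9, 0.067      # GaAs-like dielectric constant and m*/m_e (assumed)
a_B_eff = a_B * eps_r / m_ratio   # effective Bohr radius, ~10 nm

r_s_crit = 31.0
n_crit = 1.0 / (math.pi * (r_s_crit * a_B_eff) ** 2)   # electrons per m^2

print(f"effective Bohr radius: {a_B_eff * 1e9:.1f} nm")
print(f"critical 2D density: {n_crit * 1e-4:.1e} per cm^2")
# ~3e8 cm^-2, orders of magnitude more dilute than a typical 2DEG (~1e11 cm^-2),
# which is part of why Wigner crystallization is so hard to reach.
```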

Historically, observing Wigner crystals in experiments has been very challenging.  In ultraclean 2D electron gases in GaAs/AlGaAs structures, signatures include "pinning" of the insulating 2D electronic crystal on residual disorder, leading to nonlinear conduction at the onset of "sliding"; features in microwave absorption corresponding to melting of the crystal; changes in capacitance/screening, etc.  Large magnetic fields can be helpful in bringing about Wigner crystallization (tending to confine electronic wavefunctions, and quenching the kinetic energy by putting electrons into Landau levels).  

In recent years, 2D materials and advances in scanning tunneling microscopy (STM) have led to a lot of progress in imaging Wigner crystals.  One representative paper is this, in which the moiré potential in a bilayer system helps by flattening the bands and therefore reducing the kinetic energy.  Another example is this paper from April, looking at Wigner crystals at high magnetic field in Bernal-stacked bilayer graphene.  One aspect of these experiments that I find amazing is that the STM doesn't melt the crystals, since it's either injecting or removing charge throughout the imaging process.  The crystals are somehow stable enough that any removed electron gets rapidly replaced without screwing up the spatial order.  Very cool.

Two additional notes:

Saturday, June 22, 2024

What is turbulence? (And why are helicopters never quiet?)

Fluid mechanics is very often left out of the undergraduate physics curriculum.  This is a shame, as it's very interesting and directly relevant to many broad topics (atmospheric science, climate, plasma physics, parts of astrophysics).  Fluid mechanics is a great example of how it is possible to have comparatively simple underlying equations and absurdly complex solutions, and that's probably part of the issue.  The space of solutions can be mapped out using dimensionless ratios, and two of the most important are the Mach number (\(\mathrm{Ma} \equiv u/c_{s}\), where \(u\) is the speed of some flow or object, and \(c_{s}\) is the speed of sound) and the Reynolds number (\(\mathrm{Re} \equiv \rho u d/\mu\), where \(\rho\) is the fluid's mass density, \(d\) is some length scale, and \(\mu\) is the viscosity of the fluid). 

From Laurence Kedward, wikimedia commons

There is a nice physical interpretation of the Reynolds number.  It can be rewritten as \(\mathrm{Re} = (\rho u^{2})/(\mu u/d)\).  The numerator is the "dynamic pressure" of a fluid, the force per unit area that would be transferred to some object if a fluid of density \(\rho\) moving at speed \(u\) ran into the object and was brought to a halt.  This is in a sense the consequence of the inertia of the moving fluid, so this is sometimes called an inertial force.  The denominator, the viscosity multiplied by a velocity gradient, is the viscous shear stress (force per unit area) caused by the frictional drag of the fluid.  So, the Reynolds number is a ratio of inertial forces to viscous forces.  
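To get a feel for the numbers, here is a quick sketch for two everyday cases; the fluid properties are standard textbook values, and the geometries are rough assumptions on my part.

```python
# Reynolds number Re = rho u d / mu for two illustrative flows.
def reynolds(rho, u, d, mu):
    """Ratio of inertial to viscous forces (all SI units)."""
    return rho * u * d / mu

# Water creeping through a ~100 micron microfluidic channel at ~1 mm/s:
print(f"microchannel: Re ~ {reynolds(1000.0, 1e-3, 100e-6, 1e-3):.2g}")

# Air pushed downward by a helicopter rotor: ~10 m/s over a ~10 m scale:
print(f"rotor downwash: Re ~ {reynolds(1.2, 10.0, 10.0, 1.8e-5):.2g}")
```

The first lands deep in the laminar regime discussed next; the second is millions of times past the transition to turbulence.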

When \(\mathrm{Re}\ll 1\), viscous forces dominate.  That means that viscous friction between adjacent layers of fluid tend to smooth out velocity gradients, and the velocity field \(\mathbf{u}(\mathbf{r},t) \) tends to be simple and often analytically solvable.  This regime is called laminar flow.  Since \(d\) is just some characteristic size scale, for reasonable values of density and viscosity for, say, water, microfluidic devices tend to live in the laminar regime.  

When \(\mathrm{Re}\gg 1\), frictional effects are comparatively unimportant, and the fluid "pushes" its way along.  The result is a situation where the velocity field is unstable to small perturbations, and there is a transition to turbulent flow.  The local velocity field has big, chaotic variations as a function of space and time.  While the microscopic details of \(\mathbf{u}(\mathbf{r},t)\) are often not predictable, on a statistical level we can get pretty far since mass conservation and momentum conservation can be applied to a region of space (the control volume or Eulerian approach).

Turbulent flow involves a cascade of energy through eddies at successively smaller length scales, eventually all the way down to the mean free path of the fluid molecules.  This right here is why helicopters are never quiet.  Even if you started with a completely uniform downward flow of air below the rotor (enough of a momentum flux to support the weight of the helicopter), the air would quickly transition to turbulence, and there would be pressure fluctuations over a huge range of timescales that would translate into acoustic noise.  You might not be able to hear the turbine engine directly from a thousand feet away, but you can hear the resulting sound from the turbulent airflow.  

If you're interested in fluid mechanics, this site is fantastic, and their links page has some great stuff.

Friday, June 14, 2024

Artificial intelligence, extrapolation, and physical constraints

Disclaimer and disclosure:  The "arrogant physicist declaims about some topic far outside their domain expertise (like climate change or epidemiology or economics or geopolitics or....) like everyone actually in the field is clueless" trope is very overplayed at this point, and I've generally tried to avoid doing this.  Still, I read something related to AI earlier this week, and I wanted to write about it.  So, fair warning: I am not an expert about AI, machine learning, or computer science, but I wanted to pass this along and share some thoughts.  Feel even more free than usual to skip this and/or dismiss my views.

This is the series of essays, and here is a link to the whole thing in one pdf file.  The author works for OpenAI.  I learned about this from Scott Aaronson's blog (this post), which is always informative.

In a nutshell, the author basically says that he is one of a quite small group of people who really know the status of AI development; that we are within a couple of years of the development of artificial general intelligence; that this will lead essentially to an AI singularity as AGI writes ever-smarter versions of AGI; that the world at large is sleepwalking toward this and its inherent risks; and that it's essential that western democracies have the lead here, because it would be an unmitigated disaster if authoritarians in general and the Chinese government in particular should take the lead - if one believes in extrapolating exponential progressions, then losing the initiative rapidly translates into being hopelessly behind forever.

I am greatly skeptical of many aspects of this (in part because of the dangers of extrapolating exponentials), but it is certainly thought-provoking.  

I doubt that we are two years away from AGI.  Indeed, I wonder if our current approaches are somewhat analogous to Ptolemaic epicycles.  It is possible in principle to construct extraordinarily complex epicyclic systems that can reproduce predictions of the motions of the planets to high precision, but actual Newtonian orbital mechanics is radically more compact, efficient, and conceptually unified.  Current implementations of AI systems use enormous numbers of circuit elements that consume tens to hundreds of MW of electricity.  In contrast, your brain hosts a human-level intelligence, consumes about 20 W, and masses about 1.4 kg.  I just wonder if our current architectural approach is not the optimal one toward AGI.  (Of course, a lot of people are researching neuromorphic computing, so maybe that resolves itself.)

The author also seems to assume that whatever physical resources are needed for rapid exponential progress in AI will become available.  Huge numbers of GPUs will be made.  Electrical generating capacity and all associated resources will be there.  That's not obvious to me at all.  You can't just declare that vastly more generating capacity will be available in three years - siting and constructing GW-scale power plants takes years alone.  TSMC is about as highly motivated as possible to build their new facilities in Arizona, and the first one has taken three years so far, with the second one likely delayed until 2028.  Actual construction and manufacturing at scale cannot be trivially waved away.

I do think that AI research has the potential to be enormously disruptive.  It also seems that if a big corporation or nation-state thought that they could gain a commanding advantage by deploying something even if it's half-baked and the long-term consequences are unknown, they will 100% do it.  I'd be shocked if the large financial companies aren't already doing this in some form.  I also agree that broadly speaking as a species we are unprepared for the consequences of this research, good and bad.  Hopefully we will stumble forward in a way where we don't do insanely stupid things (like putting the WOPR in charge of the missiles without humans in the loop).   

Ok, enough of my uninformed digression.  Back to physics soon.

Update:  this is a fun, contrasting view by someone who definitely disagrees with Aschenbrenner about the imminence of AGI.

Sunday, June 02, 2024

Materials families: Halide perovskites

Looking back, I realized that I haven't written much about halide perovskites, which is quite an oversight given how much research impact they're having.  I'm not an expert, and there are multiple extensive review articles out there (e.g. here, here, here, here, here), so this will only be a very broad strokes intro, trying to give some context to why these systems are important, remarkable, and may have plenty of additional tricks to play.

From ACS Energy Lett. 5, 2, 604–610 (2020).

Perovskites are a class of crystals based on a structural motif (an example is ABX3, originally identified in the mineral CaTiO3, though there are others) involving octahedrally coordinated metal atoms.  As shown in the figure, each B atom is in the center of an octahedron defined by six X atoms.  There are many flavors of purely inorganic perovskites, including the copper oxide superconductors and various piezo and ferroelectric oxides.  

The big excitement in recent years, though, involves halide perovskites, in which the X atom = Cl, Br, I, and the B atom is most often Pb or Sn.  These materials are quite ionic, in the sense that the B atom is in the 2+ oxidation state, the X atom is in the 1- oxidation state, and whatever is in the A site is in the 1+ oxidation state (whether it's Cs+ or a molecular ion like methylammonium (MA = [CH3NH3]+) or formamidinium (FA = [HC(NH2)2]+)).  
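As a trivial bit of bookkeeping, those oxidation states balance for any of these combinations; here is a quick sanity check, with the composition list limited to the examples just mentioned.

```python
# Charge neutrality of ABX3 formula units from the oxidation states above.
charges = {"Cs": +1, "MA": +1, "FA": +1, "Pb": +2, "Sn": +2,
           "Cl": -1, "Br": -1, "I": -1}

def net_charge(A, B, X):
    """Net charge of one A B X3 formula unit."""
    return charges[A] + charges[B] + 3 * charges[X]

for combo in (("MA", "Pb", "I"), ("FA", "Sn", "Br"), ("Cs", "Pb", "Cl")):
    print(combo, "->", net_charge(*combo))   # all zero: neutral formula units
```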

From Chem. Rev. 123, 13, 8154–8231 (2023).

There is an enormous zoo of materials based on these building blocks, made even richer by the capability of organic chemists to toss in various small organic, covalent ligands to alter spacings between the components (and hence electronic overlap and bandwidths), tilt or rotate the octahedra, add in chirality, etc.  Forms that are 3D, effectively 2D (layers of corner-sharing octahedra), 1D, and "0D" (with isolated octahedra) exist.  Remarkably:

  • These materials can be processed in solution form, and it's possible to cast highly crystalline films.
  • Despite the highly ionic character of much of the bonding, many of these materials are semiconductors, with bandgaps in the visible.
  • Despite the differences in what chemists and semiconductor physicists usually mean by "pure", these materials can be sufficiently clean and free of the wrong kinds of defects that it is possible to make solar cells with efficiencies greater than 26% (!) (and very bright light emitting diodes).  
These features make the halide perovskites extremely attractive for possible applications, especially in photovoltaics and potentially light sources (even quantum emitters).  They are seemingly much more forgiving than most organic semiconductors, in terms of high carrier mobility, tolerance of disorder, and a large dielectric polarizability (and hence a lower exciton binding energy and greater ease of charge extraction).  The halide perovskites do face some serious challenges (chemical stability under UV illumination and air/moisture exposure; the unpleasantness of Pb), but their promise is enormous.

Sometimes nature seems to provide materials with particularly convenient properties.  Examples include water and the fact that ordinary ice is less dense than the liquid form; silicon and its outstanding oxide; gallium arsenide and the fact that it can be grown with great purity and stoichiometry even in an extremely As rich environment; I'm sure commenters can provide many more.  The halide perovskites seem to be another addition to this catalog, and as material properties continue to improve, condensed matter physicists are going to be looking for interesting things to do in these systems. 

Wednesday, May 29, 2024

Interesting reading - resonators, quantum geometry w/ phonons, and fractional quantum anomalous Hall

 Real life continues to be busy, but I wanted to point out three recent articles that I found interesting:

  • Mechanical resonators are a topic with a long history, going back to the first bells and the tuning fork.  I've written about micromachined resonators before, and the quest to try to get very high quality resonators.  This recent publication is very impressive.  The authors have succeeded in fabricating suspended Si3N4 resonators that are 70 nm thick but 3 cm (!!) long.  In terms of aspect ratio, that'd be like a diving board 3 cm thick and 12.8 km long (see the quick check after this list).  By varying the shape of the suspended "string" along its length, they create phononic band gaps, so that some vibrations are blocked from propagating along the resonator, leading to reduced losses.  They are able to make such resonators that work at acoustic frequencies at room temperature (in vacuum) and have quality factors as high as \(6.5 \times 10^{9}\), which is amazing.  
  • Speaking of vibrations, this paper in Nature Physics is a thought-provoking piece of work.  Electrons in solids are coupled to lattice vibrations (phonons), and that's not particularly surprising.  The electronic band structure depends on how the atoms are stacked in space, and a vibration like a phonon is a particular perturbation of that atomic arrangement.  The new insight here is to look at what is being called quantum geometry and how that affects the electron-phonon coupling.  As I wrote here, electrons in crystals can be described by Bloch waves which include a function \(u_{\mathbf{k}}(\mathbf{r})\) that has the real-space periodicity of the crystal lattice.  How that function varies over \(\mathbf{k}\)-space is called quantum geometry and has all kinds of consequences (e.g., here and here).  It turns out that this piece of the band structure can have a big and sometimes dominant influence on the coupling between mobile electrons and phonons.
  • Speaking of quantum geometry and all that, here is a nice article in Quanta about the observation of the fractional quantum anomalous Hall effect in different 2D material systems.  In the "ordinary" fractional quantum Hall effect, topology and interactions combine at low temperatures and (usually) high magnetic fields in clean 2D materials to give unusual electronic states with, e.g., fractionally charged low energy excitations.  Recent exciting advances have found related fractional Chern insulator states in various 2D materials at zero magnetic field.  The article does a nice job capturing the excitement of these recent works.
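As promised in the first item above, here is the quick check of the diving-board analogy; the resonator dimensions are the ones quoted, and the 3 cm board thickness is just the stated scaling.

```python
# Aspect ratio of the Si3N4 resonator and the equivalent diving board.
thickness, length = 70e-9, 3e-2     # resonator dimensions (m), as quoted above
aspect = length / thickness         # dimensionless, ~4.3e5
board = 3e-2                        # a 3 cm thick diving board
print(f"aspect ratio: {aspect:.2e}")
print(f"equivalent board length: {aspect * board / 1e3:.1f} km")  # ~12.9 km
```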

Saturday, May 18, 2024

Power and computing

The Wall Street Journal last week had an article (sorry about the paywall) titled "There’s Not Enough Power for America’s High-Tech Ambitions", about how there is enormous demand for more data centers (think Amazon Web Services and the like), and electricity production can't readily keep up.  I've written about this before, and this is part of the motivation for programs like FuSE (NSF's Future of Semiconductors call).  It seems that we are going to be faced with a choice: slow down the growth of computing demand (which seems unlikely, particularly with the rise of AI-related computing, to say nothing of cryptocurrencies); develop massive new electrical generating capacity (much as I like nuclear power, it's hard for me to believe that small modular reactors will really be installed at scale at data centers); develop approaches to computing that are far more energy efficient; or some combination.  

The standard computing architecture that's been employed since the 1940s is attributed to von Neumann.  Binary numbers (1, 0) are represented by two different voltage levels (say some \(V\) for a 1 and \(V \approx 0\) for a 0); memory functions and logical operations happen in two different places (e.g., your DRAM and your CPU), with information shuttled back and forth as needed.  The key ingredient in conventional computers is the field-effect transistor (FET), a voltage-activated switch, in which a third (gate) electrode can switch the current flow between a source electrode and a drain electrode.  

The idea that we should try to lower power consumption of computing hardware is far from new.  Indeed, NSF ran a science and technology center for a decade at Berkeley about exploring more energy-efficient approaches.  The simplest approach, as Moore's Law cooked along in the 1970s, 80s, and 90s, was to steadily try to reduce the magnitude of the operating voltages on chips.  Very roughly speaking, power consumption goes as \(V^{2}\).  The losses in the wiring and transistors scale like \(I \cdot V\); the losses in the capacitors that are parts of the transistors scale like some fraction of the stored energy, which is also like \(V^{2}\).  For FETs to still work, one wants to keep the same amount of gated charge density when switching, meaning that the capacitance per area has to stay the same, so dropping \(V\) means reducing the thickness of the gate dielectric layer.  This went on for a while with SiO2 as the insulator, and eventually in the early 2000s the switch was made to a higher dielectric constant material because SiO2 could not be made any thinner.  Since the 1970s, the operating voltage \(V\) has fallen from 5 V to around 1 V.  There are also clever schemes now to try to vary the voltage dynamically.  For example, one might be willing to live with higher error rates in the least significant bits of some calculations (like video or audio playback) if it means lower power consumption.  With conventional architectures, voltage scaling has been taken about as far as it can go.
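To see why voltage scaling was such a powerful knob, here is a minimal sketch of the standard dynamic-power estimate \(P \approx \alpha C V^{2} f\); all of the numbers are illustrative assumptions, not the specs of any real chip.

```python
# Dynamic (switching) power of CMOS logic: activity * capacitance * V^2 * clock.
def dynamic_power(alpha, C, V, f):
    """Average switching power (W) for activity factor alpha, switched
    capacitance C (F), supply voltage V (V), and clock frequency f (Hz)."""
    return alpha * C * V**2 * f

alpha, C, f = 0.1, 1e-9, 3e9    # illustrative activity factor, capacitance, clock
for V in (5.0, 1.0):
    print(f"V = {V:.0f} V: P ~ {dynamic_power(alpha, C, V, f):.2f} W")
# Dropping the supply from 5 V to 1 V buys a factor of 25 in this term alone.
```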

Way back in 2006, I went to a conference and Eli Yablonovitch talked at me over dinner about how we needed to be thinking about far lower voltage operations.  Basically, his argument was that if we are using voltages that are far greater than the thermal voltage noise in our wires and devices, we are wasting energy.  With conventional transistors, though, we're kind of stuck because of issues like subthreshold swing.  
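For concreteness, here is the thermal-voltage arithmetic behind that argument and the subthreshold swing limit; the six-decade on/off requirement at the end is my illustrative assumption.

```python
# Thermal voltage and the ideal (Boltzmann-limited) subthreshold swing.
import math

k_B, e, T = 1.380649e-23, 1.602176634e-19, 300.0
V_T = k_B * T / e                 # thermal voltage, ~26 mV at room temperature
ss = math.log(10) * V_T           # ideal subthreshold swing, volts per decade

print(f"thermal voltage: {V_T * 1e3:.1f} mV")
print(f"subthreshold swing limit: {ss * 1e3:.1f} mV/decade")
# Turning a conventional FET off by ~6 decades of current then takes at least
# ~0.36 V of gate swing, which is why ~1 V supplies are near the practical floor.
```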

So what are the options?  There are many ideas out there. 
  • Change materials.  There are materials that have metal-insulator transitions, for example, such that it might be possible to trigger dramatic changes in conduction (for switching purposes) with small stimuli, evading the device physics responsible for the subthreshold slope argument.  
  • Change architectures.  Having memory and logic physically separated isn't the only way to do digital computing.  The idea of "logic-in-memory" computing goes back to before I was born.  
  • Radically change architectures.  As I've written before, there is great interest in neuromorphic computing, trying to make devices with connectivity and function designed to mimic the way neurons work in biological brains.  This would likely mean analog rather than digital logic and memory, complex history-dependent responses, and trying to get vastly improved connectivity.  As was published last week in Science, 1 cubic millimeter of brain tissue contains 57,000 cells and 150,000,000 synapses.  Trying to duplicate that level of 3D integration at scale is going to be very hard.  The approach of just making something that starts with crazy but uncontrolled connectivity and training it somehow (e.g., this idea from 2002) may reappear.
  • Update: A user on twitter pointed out that the time may finally be right for superconducting electronics.  Here is a recent article in IEEE Spectrum about this, and here is a youtube video of a pretty good intro.  The technology of interest is "rapid single-flux quantum" (RSFQ) logic, where information is stored in circulating current loops in devices based on Josephson junctions.  The compelling aspects include intrinsically ultralow power dissipation b/c of superconductivity, and intrinsically fast timescales (clock speeds of hundreds of GHz) because of the frequency scales associated with the Josephson effect.  I'm a bit skeptical, because these ideas have been around for 30+ years and the integration challenges are still significant, but maybe now the economic motivation is finally sufficient.
A huge driving constraint on everything is economics.  We are not going to decide that computing is so important that we will sacrifice refrigeration, for example; basic societal needs will limit what fraction of total generating capacity we devote to computing, and that includes concerns about impact of power generation on climate.  Likewise, switching materials or architectures is going to be very expensive at least initially, and is unlikely to be quick.  It will be interesting to see where we are in another decade.... 

Tuesday, May 07, 2024

Wind-up nanotechnology

When I was a kid, I used to take allowance money and occasionally buy rubber-band-powered balsa wood airplanes at a local store.  Maybe you've seen these.  You wind up the rubber band, which stretches the elastomer and stores energy in the elastic strain of the polymer, as in Hooke's Law (though I suspect the rubber band goes well beyond the linear regime when it's really wound up, because of the higher order twisting that happens).  Rhett Allain wrote about how well you can store energy like this.  It turns out that the stored energy per mass of the rubber band can get pretty substantial. 

Carbon nanotubes are one of the most elastically strong materials out there.  A bit over a decade ago, a group at Michigan State did a serious theoretical analysis of how much energy you could store in a twisted yarn made from single-walled carbon nanotubes.  They found that the specific energy storage could get as large as several MJ/kg, as much as four times what you get with lithium ion batteries!

Now, a group in Japan has actually put this to the test, in this Nature Nano paper.  They get up to 2.1 MJ/kg, over the lithium ion battery mark, and the specific power (when they release the energy) at about \(10^{6}\) W/kg is not too far away from "non-cyclable" energy storage media, like TNT.  Very cool!  
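Here is a quick sketch of that comparison; the CNT-yarn figure is the one quoted above, while the Li-ion and TNT numbers are typical textbook values (assumptions on my part, not from the paper).

```python
# Specific energy comparison (MJ/kg).
specific_energy = {
    "CNT yarn (value quoted above)": 2.1,
    "Li-ion cell (~180 Wh/kg, assumed)": 180 * 3600 / 1e6,   # ~0.65 MJ/kg
    "TNT (typical value)": 4.6,
}
for name, E in specific_energy.items():
    print(f"{name}: {E:.2f} MJ/kg")
# Note that TNT's real advantage is specific *power* (release rate), not stored
# energy: a good battery holds more energy per kg than TNT, just released slowly.
```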

Monday, April 29, 2024

Moiré and making superlattices

One of the biggest condensed matter trends in recent years has been the stacking of 2D materials and the development of moiré lattices.  The idea is, take a layer of 2D material and stack it either (1) on itself but with a twist angle, or (2) on another material with a slightly different lattice constant.  Because of interactions between the layers, the electrons in the material have an effective potential energy that has a spatial periodicity associated with the moiré pattern that results.  Twisted stacking of hexagonal lattice materials (like graphene or many of the transition metal dichalcogenides) results in a triangular moiré lattice with a moiré lattice constant that depends on twist angle.  Some of the most interesting physics in these systems seems to pop out when the moiré lattice constant is on the order of a few nm to 10 nm or so.  The upside of the moiré approach is that it can produce such an effective lattice over large areas with really good precision and uniformity (provided that the twist angle can really be controlled - see here and here, for example).  You might imagine using lithography to make designer superlattices, but getting the kind of cleanliness and homogeneity at these very small length scales is very challenging.
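For twisted stacking of identical hexagonal lattices, the small-angle moiré lattice constant is \(\lambda \approx a/(2 \sin(\theta/2))\).  Here is a quick sketch of the numbers; graphene's lattice constant is the standard value, and the angles are just examples.

```python
# Moire lattice constant vs. twist angle for twisted identical lattices.
import math

def moire_constant(a_nm, theta_deg):
    """lambda = a / (2 sin(theta/2)), in nm, for twist angle theta in degrees."""
    return a_nm / (2 * math.sin(math.radians(theta_deg) / 2))

a_graphene = 0.246   # nm
for theta in (5.0, 2.0, 1.1):
    lam = moire_constant(a_graphene, theta)
    print(f"theta = {theta:>4.1f} deg  ->  moire constant ~ {lam:.1f} nm")
# 1.1 degrees (the 'magic angle' of twisted bilayer graphene) gives ~13 nm,
# right around the few-nm-to-10-nm window mentioned above.
```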

It's not surprising, then, that people are interested in somehow applying superlattice potentials to nearby monolayer systems.  Earlier this year, Nature Materials ran three papers published sequentially in one issue on this topic, and this is the accompanying News and Views article.

  • In one approach, a MoSe2/WS2 bilayer is made and the charge in the bilayer is tuned so that the bilayer system is a Mott insulator, with charges localized in exactly the moiré lattice sites.  That results in an electrostatic potential that varies on the moiré lattice scale that can then influence a nearby monolayer, which then shows cool moiré/flat band physics itself.
  • Closely related, investigators used a small-angle twisted bilayer of graphene.  That provides a moiré periodic dielectric environment for a nearby single layer of WSe2.  They can optically excite Rydberg excitons in the WSe2, excitons that are comparatively big and puffy and thus quite sensitive to their dielectric environment.  
  • Similarly, twisted bilayer WS2 can be used to apply a periodic Coulomb potential to a nearby bilayer of graphene, resulting in correlated insulating states in the graphene that otherwise wouldn't be there.

Clearly this is a growth industry.  Clever, creative ways to introduce highly ordered superlattice potentials on very small lengthscales with other symmetries besides triangular lattices would be very interesting.

Monday, April 15, 2024

The future of the semiconductor industry, + The Mechanical Universe

 Three items of interest:

  • This article is a nice review of present semiconductor memory technology.  The electron micrographs in Fig. 1 and the scaling history in Fig. 3 are impressive.
  • This article in IEEE Spectrum is a very interesting look at how some people think we will get to chips for AI applications that contain a trillion (\(10^{12}\)) transistors.  For perspective, the processor in the laptop used to write this post has about 40 billion transistors.  (The article is nice, though the first figure commits the terrible sin of having no y-axis numbers or label; clearly it's supposed to represent exponential growth as a function of time in several different parameters.)
  • Caltech announced the passing of David Goodstein, renowned author of States of Matter and several books about the energy transition.  I'd written about my encounter with him, and I wanted to take this opportunity to pass along a working link to the youtube playlist for The Mechanical Universe.  While the animation can look a little dated, it's worth noting that when this was made in the 1980s, the CGI was cutting-edge stuff that was presented at SIGGRAPH.

Friday, April 12, 2024

Electronic structure and a couple of fun links

Real life has been very busy recently.  Posting will hopefully pick up soon.  

One brief item.  Earlier this week, Rice hosted Gabi Kotliar for a distinguished lecture, and he gave a very nice, pedagogical talk about different approaches to electronic structure calculations.  When we teach undergraduate chemistry on the one hand and solid state physics on the other, we largely neglect electron-electron interactions (except for very particular issues, like Hund's Rules).  Trying to solve the many-electron problem fully is extremely difficult.  Often, approximating by solving the single-electron problem (e.g. finding the allowed single-electron states for a spatially periodic potential as in a crystal) and then "filling up"* those states gives decent results.  As we see in introductory courses, one can try different types of single-electron states.  We can start with atomic-like orbitals localized to each site, and end up doing tight binding / LCAO / Hückel (when applied to molecules).  Alternatively, we can do the nearly-free electron approach and think about Bloch waves.  Density functional theory, discussed here, is more sophisticated but can struggle with situations when electron-electron interactions are strong.
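As a concrete toy example of the single-electron approach, here is a minimal sketch of a 1D tight-binding chain, one orbital per site with hopping \(t\) on a periodic ring; the parameter values are arbitrary.

```python
# Diagonalize a 1D tight-binding ring and compare with the analytic Bloch band
# E(k) = eps0 - 2 t cos(k a).  N, t, eps0 are arbitrary illustrative choices.
import numpy as np

N, t, eps0 = 50, 1.0, 0.0
H = np.zeros((N, N))
for i in range(N):
    H[i, i] = eps0               # on-site energy
    H[i, (i + 1) % N] = -t       # hopping to the neighbor (periodic boundary)
    H[(i + 1) % N, i] = -t

levels = np.sort(np.linalg.eigvalsh(H))
k = 2 * np.pi * np.arange(N) / N            # allowed k values (lattice constant a = 1)
band = np.sort(eps0 - 2 * t * np.cos(k))    # analytic Bloch-wave result

print("max deviation from analytic band:", np.max(np.abs(levels - band)))  # ~1e-14
```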

One of Prof. Kotliar's big contributions is something called dynamical mean field theory, an approach to strongly interacting problems.  In a "mean field" theory, the idea is to reduce a many-particle interacting problem to an effective single-particle problem, where that single particle feels an interaction based on the averaged response of the other particles.  Arguably the most famous example is in models of magnetism.  We know how to write the energy of a spin \(\mathbf{s}_{i}\) in terms of its interactions \(J\) with other spins \(\mathbf{s}_{j}\) as \(\sum_{j} J \mathbf{s}_{i}\cdot \mathbf{s}_{j}\).  If there are \(z\) such neighbors that interact with spin \(i\), then we can try instead writing that energy as \(zJ \mathbf{s}_{i} \cdot \langle \mathbf{s}_{i}\rangle\), where the angle brackets signify the average.  From there, we can get a self-consistent equation for \(\langle \mathbf{s}_{i}\rangle\).  
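Here is a minimal numerical sketch of that self-consistency loop for Ising spins (\(s = \pm 1\)) with \(z\) neighbors, taking the ferromagnetic sign convention so that \(\langle s \rangle = \tanh(zJ \langle s \rangle / k_{B}T)\); working in units where \(zJ = 1\) (so the mean-field \(T_{c} = 1\)) is my choice for illustration.

```python
# Solve the mean-field self-consistency m = tanh(zJ m / k_B T) by fixed-point
# iteration.  Ferromagnetic convention and zJ = 1 are illustrative assumptions.
import math

def mean_field_m(T, zJ=1.0, tol=1e-10, max_iter=10_000):
    """Self-consistent <s> for Ising spins at temperature T (units where k_B = 1)."""
    m = 1.0                       # seed with full polarization
    for _ in range(max_iter):
        m_new = math.tanh(zJ * m / T)
        if abs(m_new - m) < tol:  # converged to the fixed point
            break
        m = m_new
    return m_new

for T in (0.5, 0.9, 0.99, 1.01, 1.5):
    print(f"T/Tc = {T:>4}: <s> = {mean_field_m(T):.4f}")
# Below Tc = zJ/k_B the iteration lands on a nonzero <s> (spontaneous
# magnetization); above Tc it collapses to zero.
```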

Dynamical mean field theory is rather similar in spirit.  There are non-perturbative ways to solve some strong-interaction "quantum impurity" problems, and DMFT is a way of approximating a whole lattice of strongly interacting sites as a self-consistent quantum impurity problem for one site.  The solutions are not for wave functions but for the spectral function.  We still can't solve every strongly interacting problem, but Prof. Kotliar makes a good case that we have made real progress in how to think about many systems, and about when the atomic details matter.

*Here, "filling up" means writing the many-electron wave function as a totally antisymmetric linear combination of single-electron states, including the spin states.

PS - two fun links:

Friday, March 29, 2024

Thoughts on undergrad solid-state content

Figuring out what to include in an undergraduate introduction to solid-state physics course is always a challenge.   Books like the present incarnation of Kittel are overstuffed with more content than can readily fit in a one-semester course, and because that book has grown organically from edition to edition, it's organizationally not the most pedagogical.  I'm a big fan of and have been teaching from my friend Steve Simon's Oxford Solid State Basics, which is great but a bit short for a (US) one-semester class.  Prof. Simon is interested in collecting opinions on what other topics would be good to include in a hypothetical second edition or second volume, and we thought that crowdsourcing it to this blog's readership could be fun.  As food for thought, some possibilities that occurred to me were:

  • A slightly longer discussion of field-effect transistors, since they're the basis for so much modern technology
  • A chapter or two on materials of reduced dimensionality (2D electron gas, 1D quantum wires, quantum point contacts, quantum dots; graphene and other 2D materials)
  • A discussion of fermiology (Shubnikov-de Haas, de Haas-van Alphen) - this is in Kittel, but it's difficult to explain in an accessible way
  • An introduction to the quantum Hall effect
  • Some mention of topology (anomalous velocity?  Berry connection?)
  • An intro to superconductivity (though without second quantization and the gap equation, this ends up being phenomenology)
  • Some discussion of Ginzburg-Landau treatment of phase transitions (though I tend to think of that as a topic for a statistical/thermal physics course)
  • An intro to Fermi liquid theory
  • Some additional discussion of electronic structure methods beyond the tight binding and nearly-free electron approaches in the present book (Wannier functions, an intro to density functional theory)
What do people think about this?

Sunday, March 24, 2024

Items of interest

The time since the APS meeting has been very busy, hence the lack of posting.  A few items of interest:

  • The present issue of Nature Physics has several articles about physics education that I really want to read. 
  • This past week we hosted N. Peter Armitage for a really fun colloquium "On Ising's Model of Magnetism" (a title that he acknowledged borrowing from Peierls).  In addition to some excellent science about spin chains, the talk included a lot of history of science about Ising that I hadn't known.  An interesting yet trivial tidbit: when he was in Germany and later Luxembourg, the pronunciation was "eeesing", while after emigrating to the US, he changed it to "eye-sing", so however you've been saying it to yourself, you're not wrong.  The fact that the Isings survived the war in Europe is amazing, given that he was a Jew in an occupied country.  Someone should write a biography....
  • When I participated in a DOD-related program 13 years ago, I had the privilege to meet General Al Gray, former commandant of the US Marine Corps.  He just passed away this week, and people had collected Grayisms (pdf), his takes on leadership and management.  I'm generally not a big fan of leadership guides and advice books, but this is good stuff, told concisely.
  • It took a while, but a Scientific American article that I wrote is now out in the April issue.
  • Integrating nitrogen-vacancy centers for magnetic field sensing directly into the diamond anvils seems like a great way to make progress on characterizing possible superconductivity in hydrides at high pressures.
  • Congratulations to Peter Woit on 20 (!!) years of blogging at Not Even Wrong.  

Thursday, March 07, 2024

APS March Meeting 2024, Day 4 and wrap-up

Because of the timing of my flight back to Houston, I really only went to one session today, in which my student spoke as did some collaborators.  It was a pretty interesting collection of contributed talks.  

  • The work that's been done on spin transport in multiferroic insulators is particularly interesting to me.  A relevant preprint is this one, in which electric fields are used to reorient \(\mathbf{P}\) in BiFeO3, which correspondingly switches the magnetization in this system (which is described by a complicated spin cycloid order) and therefore modulates the transmission of spin currents (as seen in ferromagnetic resonance).  
  • Similarly adding a bit of La to BiFeO3 to favor single ferroelectric domain formation was a neat complement to this.
  • There were also multiple talks showing the utility of the spin Hall magnetoresistance as a way to characterize spin transport between magnetic insulators and strong spin-orbit coupled metals.
Some wrap-up thoughts:
  • This meeting venue and environment were superior in essentially every way to last year's mess in Las Vegas.  Nice facilities, broadly good rooms, room sizes, projectors, and climate control.  Lots of hotels.  Lots of restaurants that are not absurdly expensive.  I'd be very happy to have the meeting in Minneapolis again at some point.  There was even a puppy-visiting booth at the exhibit hall on Tuesday and Thursday.
  • Speaking of the exhibit hall, I think this is the first time I've been at a meeting where a vendor was actually running a dilution refrigerator on the premises.  
  • Only one room that I was in had what I would describe as a bad projector (poor color balance, loud fan, not really able to be focused crisply).  I also did not see any session chair this year blow it by letting speakers run past their allotted times.
  • We really lucked out on the weather.  
  • Does anyone know what happens if someone ignores the "Warning: Do Not Drive Over Plate" label on the 30 cm by 40 cm yellow floor plate in the main lobby?  Like, does it trigger a self-destruct mechanism, or the apocalypse or something?
  • Next year's combined March/April meeting in Anaheim should be interesting - hopefully the venue is up to the task, and likewise I hope there are good, close housing and food options.

Wednesday, March 06, 2024

APS March Meeting 2024, Day 3

My highlights today are a bit thin, because I was fortunate enough to spend time catching up with collaborators and old friends, but here goes:
  • Pedram Roushan from Google gave an interesting talk about noisy intermediate-scale quantum experiments for simulation.  He showed some impressive data looking at the propagation of (simulated) magnons in the 1D Heisenberg spin chain.
  • In the same session, Lieven Vandersypen from Delft presented their recent results using gate-defined Ge/SiGe quantum dot arrays to simulate a small-scale version of the Hubbard model.  Looking at exciton formation and propagation in a Hubbard ladder while being able to tune many parameters, the data are pretty neat, though I have to say it seems like scaling this up to large arrays will be extremely challenging in terms of layout and tuning.  He also showed some in-preparation work on spin propagation in similar arrays - neat.
  • In a completely different session, Jacques Prost, recipient of this year's Onsager Prize, gave an interesting talk about broken symmetries and dynamics of living tissue.  This included cell motion driven by nematicity (living tissue as liquid crystal....) and how in a cylindrical environment this can lead to rotation of growing tissue.  These sorts of interactions in "active matter" can be related to how tissue grows and differentiates in living systems.
  • My colleague Gustavo Scuseria is this year's recipient of the Aneesur Rahman Prize, and he gave a good explanation of his group's recent work on using dualities to map strongly correlated models onto more tractable (polynomial-growth rather than exponential growth in problem size) equivalent weakly correlated models.
  • In a session on quantum spin liquids, Tyrel McQueen of Johns Hopkins spoke about two examples of his group's recent work.  Chemical substitution can help tune interactions in a Kitaev spin liquid candidate, and they've also examined the controlled interplay of charge density waves and magnetic order.  The talk did a great job of conveying a taste of the breadth and depth of the space of quantum magnets.
  • Lastly, Chih-Yuan Lu, recipient of this year's George E. Pake Prize, gave a very nice historical overview of the development of semiconductor electronics from the integrated circuit to the present frontiers (of gate-all-around transistors and 3D integrated NAND memory).
Two other notes not directly germane to the APS meeting:
  • The AAAS appropriations tracker shows how outlays for the coming year are shaping up for NSF and the other agencies.  <begin rant>Can someone explain to me why the conference NSF budget allocation for research ends up -8.5%, when the House pushed +0.3% and the Senate pushed -2.9%?  Also, cutting the STEM education budget (which includes GRFP) by 28% seems terrible.  Griping about US STEM competitiveness and the need for developing the next-generation technical workforce, while simultaneously cutting research training resources:  Congress in action.  Once again, they feel good about supporting the authorization of doubling the NSF budget over five years, but don't actually want to appropriate the funds to do it.  </end rant>
  • Purely by random chance (ahem), I want to point to this column.

Tuesday, March 05, 2024

APS March Meeting 2024, Day 2

A decent part of today was spent in conversation with friends and colleagues, but here are some high points of scientific talks:

More tomorrow....

Monday, March 04, 2024

APS March Meeting 2024, Day 1

There is no question that the meeting venue in Minneapolis is superior in multiple ways to last year's meeting in Las Vegas.  The convention center doesn't feel scarily confining, and it also doesn't smell like a combination of cigarettes and desperation.

Here are a few highlights from the day:

  • There was an interesting session about "polar materials", systems that have the same kind of broken inversion symmetry within a unit cell as ferroelectrics; this includes "polar metals" which host mobile charge carriers.  One polar material family involving multiferroic insulators was presented by Daniel Flavián, in which dielectric (capacitance) measurements can show magnetic quantum critical phenomena, as in here and here.  Both sets of materials examined, Rb2Cu2Mo3O12 and Cs2Cu2Mo3O12, show remarkable dielectric effects due to fluctuating electric dipoles, connected to quantum critical points at B-field driven transitions between magnetic ordered states.
  • Natalia Drichko from Johns Hopkins showed Raman spectroscopy data on an organic Mott insulator, in which melting charge order is connected to spin fluctuations.
  • Pavel Volkov from U Conn discussed doped strontium titanate (STO), an example of an incipient polar metal, looking at how polar fluctuations might be connected with the mechanism behind the unusual superconductivity of STO. 
  • The last talk of that session that I saw was Pablo Jarillo-Herrero giving a characteristically clear presentation about sliding ferroelectricity.  Taking a material like hBN and trying to stack a bilayer with perfect A-A alignment is not energetically favored - it's lower in energy if the two layers shift relative to each other by a third of a lattice parameter, resulting in an out-of-plane electric dipole moment, pointing either up or down depending on the direction of the shift.  Applying a sufficiently large electric field perpendicular to the plane can switch the system - this works on TMDs as well.  Put a moiré bilayer in the mix, and you can get some neat charge ratcheting effects.
  • The session on transport in non-Fermi liquids was fun and informative.  I thought the discussion of possible intrinsic nonlinear transport in strange metals was intriguing.
  • I also saw a couple of interesting invited talks (here and here) about experiments that try to use electronic transport in adjacent layers to probe the nontrivial magnetic properties of spin ices.  Very cool.
More tomorrow....

Sunday, March 03, 2024

APS March Meeting 2024 - coming soon

This week I'm going to be at the APS March Meeting in Minneapolis.  As I've done in past years, I will try to write up some highlights of talks that I am able to see, though it may be hit-or-miss.  If readers have suggestions for sessions or talks that they think will be particularly interesting, please put them in the comments.

Sunday, February 25, 2024

2024 version: Advice on choosing a graduate school

It's been four years since I posted the previous version of this, so it feels like the time is right for an update.

This is written on the assumption that you have already decided, after careful consideration, that you want to get an advanced degree (in physics, though much of this applies to any other science or engineering discipline).  This might mean that you are thinking about going into academia, or it might mean that you realize such a degree will help prepare you for a higher paying technical job outside academia.  Either way,  I'm not trying to argue the merits of a graduate degree - let's take it as given that this is what you want to do.

  • It's ok at the applicant stage not to know exactly what research area you want to be your focus.  While some prospective grad students are completely sure of their interests, that's more the exception than the rule.  I do think it's good to have narrowed things down a bit, though.  If a school asks for your area of interest from among some palette of choices, try to pick one (rather than going with "undecided").  We all know that this represents a best estimate, not a rigid commitment.
  • If you get the opportunity to visit a school, you should go.  A visit gives you a chance to see a place, get a subconscious sense of the environment (a "gut" reaction), and most importantly, an opportunity to talk to current graduate students.  Always talk to current graduate students if you get the chance - they're the ones who really know the score.  A professor should always be able to make their work sound interesting, but grad students can tell you what a place is really like.
  • International students may have a very challenging time being able to visit schools in the US, between the expense (many schools can help defray costs a little but cannot afford to pay for airfare for trans-oceanic travel) and visa challenges.  Trying to arrange zoom discussions with people at the school is a possibility, but that can also be challenging.  I understand that this constraint tends to push international students toward making decisions based heavily on reputation rather than up-close information.  
  • Picking an advisor and thesis area are major decisions, but it's important to realize that those decisions do not define you for the whole rest of your career.  I would guess (and if someone had real numbers on this, please post a comment) that the very large majority of science and engineering PhDs end up spending most of their careers working on topics and problems distinct from their theses.  Your eventual employer is most likely going to be paying for your ability to think critically, structure big problems into manageable smaller ones, and knowing how to do research, rather than the particular detailed technical knowledge from your doctoral thesis.  A personal anecdote:  I did my graduate work on the ultralow temperature properties of amorphous insulators.  I no longer work at ultralow temperatures, and I don't study glasses either; nonetheless, I learned a huge amount in grad school about the process of research that I apply all the time.
  • Always go someplace where there is more than one faculty member with whom you might want to work.  Even if you are 100% certain that you want to work with Prof. Smith, and that the feeling is mutual, you never know what could happen, in terms of money, circumstances, etc.  Moreover, in grad school you will learn a lot from your fellow students and other faculty.  An institution with many interesting things happening will be a more stimulating intellectual environment, and that's not a small issue.
  • You should not go to grad school because you're not sure what else to do with yourself.  You should not go into research if you will only be satisfied by a Nobel Prize.  In both of those cases, you are likely to be unhappy during grad school.  
  • I know grad student stipends are low, believe me.  However, it's a bad idea to make a grad school decision based purely on a financial difference of a few hundred or a thousand dollars a year.  Different places have vastly different costs of living - look into this.  Stanford's stipends are profoundly affected by the cost of housing near Palo Alto and are not an expression of generosity.  Pick a place for the right reasons.
  • Likewise, while everyone wants a pleasant environment, picking a grad school largely based on the weather is silly.  
  • Pursue external fellowships if given the opportunity.  It's always nice to have your own money and not be tied strongly to the funding constraints of the faculty, if possible.  (It's been brought to my attention that at some public institutions the kind of health insurance you get can be complicated by such fellowships.  In general, I still think fellowships are very good if you can get them.)
  • Be mindful of how departments and programs are run.  Is the program well organized?  What is a reasonable timetable for progress?  How are advisors selected, and when does that happen?  Who sets the stipends?  What are TA duties and expectations like?  Are there qualifying exams?  Where have graduates of that department gone after the degree?  Are external internships possible/unusual/routine?  Know what you're getting into!  Very often, information like this is now available in downloadable graduate program handbooks linked from program webpages.   
  • When talking with a potential advisor, it's good to find out where their previous students have gone and how long a degree typically takes in their group.  What are their work style and expectations?  How is the group structured, in terms of balancing teamwork to accomplish shared goals vs. students having individual projects over which they can have some ownership? 
  • Some advice on what faculty look for in grad students:  Be organized and on time with things.  Be someone who completes projects (as opposed to getting most of the way there and wanting to move on).  Doctoral research should be a collaboration.  If your advisor suggests trying something and it doesn't work (shocking how that happens sometimes), rather than just coming to group meeting and saying "It didn't work", it's much better all around to be able to say "It didn't work, but I think we should try this instead", or "It didn't work, but I think I might know why", even if you're not sure. 
  • It's fine to try to communicate with professors at all stages of the process.  We'd much rather have you ask questions than the alternative.  If you don't get a quick response to an email, it's almost certainly because the faculty member is busy, not a deeply meaningful decision on their part.  For a sense of perspective: I get 50+ emails per day of various kinds, not counting all the obvious spam that gets filtered.  

There is no question that far more information is now available to would-be graduate students than at any time in the past.  Use it.  Look at departmental web pages, look at individual faculty member web pages.  Make an informed decision.  Good luck!

Tuesday, February 13, 2024

Continuing Studies course, take 2

A year and a half ago, I mentioned that I was going to teach a course through Rice's Glasscock School of Continuing Studies, trying to give a general audience introduction to some central ideas in condensed matter physics.  Starting in mid-March, I'm doing this again.  Here is a link to the course registration for this synchronous online class.  This course is also intended as a potential continuing education/professional development offering for high school teachers, community college instructors, and other educators, and thanks to the generous support of the NSF, the Glasscock School is able to offer a limited number of full scholarships for educators - apply here by February 27 for consideration.   

(I am aware that the cost of the course is not trivial; at some point in the future I will make the course materials available broadly, and I will be sure to call attention to that at the time.)

Wednesday, February 07, 2024

A couple of links + a thought experiment about spin

A couple of interesting things to read:

  • As someone interested in lost ancient literature and also science, I really liked this news article from Nature about progress in reading scrolls excavated from Herculaneum.  The area around the Bay of Naples was quite the spot for posh Roman families, and when Vesuvius erupted in 79 CE, whole villas, complete with their libraries of books on papyrus scrolls, were buried and flash-cooked under pyroclastic flows.  Those scrolls now look like lump charcoal, but with modern x-ray techniques (CT scanning using the beam from a synchrotron) plus machine learning, it is now possible to virtually unroll the scrolls and decipher the writing, because the ink has enough x-ray contrast with the carbonized papyrus to be detected.  There is reason to believe that more scrolls remain buried, and many other books and scrolls are too delicate or damaged to be handled and read the normal way.  It's great to see this approach starting to succeed.
  • I've written about metalenses before - using nanostructured surfaces for precise control of optical wavefronts to make ultrathin optical elements with special properties.  This extended news item from Harvard about this paper is a nice piece of writing.  With techniques now developed to make dielectric metalenses over considerably larger areas (100 mm silica wafers), these funky lenses can now start to be applied to astronomy.  Nifty.
And now the gedanken experiment that I've been noodling on for a bit.  I know what the correct answer must be, but this has been a good reminder that what constitutes a measurement is a very subtle issue in quantum mechanics.

Suppose I have a single electron roughly localized at the origin.  It has spin-1/2, meaning that, absent other constraints, if I choose to measure the electron spin along some particular axis, I will find with 50/50 probability that the component of the electron's angular momentum along that axis is \(\pm \hbar/2\).  Suppose that I pick a \(z\) axis and do the measurement, finding that the electron is "spin-up" along \(z\).  Because the electron has a magnetic dipole moment, the magnetic field at some distance \(r\) away from the origin should be the field of a magnetic dipole along \(z\).  
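(For concreteness, the field pattern in question is just the standard point-dipole expression from magnetostatics, valid away from the origin; nothing in the formula below is specific to this thought experiment:
\[
\mathbf{B}(\mathbf{r}) = \frac{\mu_{0}}{4\pi}\,\frac{3\left(\mathbf{m}\cdot\hat{\mathbf{r}}\right)\hat{\mathbf{r}} - \mathbf{m}}{r^{3}}, \qquad \mathbf{m} = -g\mu_{B}\frac{\mathbf{S}}{\hbar},
\]
with \(g \approx 2\) for the electron and \(\mu_{B}\) the Bohr magneton.  The minus sign, from the electron's negative charge, means the moment points opposite the spin, so "spin-up along \(z\)" corresponds to \(\mathbf{m}\) along \(-z\).)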

Now suppose I make another measurement of the spin, this time along the \(x\) axis.  I have a 50/50 chance of finding the electron spin up/down along \(x\).  After that measurement, the magnetic field at the same location \(r\) away from the origin should be the field from a magnetic dipole along \(x\).  It makes physical sense that the magnetic field at location \(r\) can only "know" that a measurement was done at the origin on a timescale \(r/c\).  (Note:  A truly correct treatment of this situation would seem to require QED, because the spin is entangled with the electromagnetic field via its magnetic moment; likewise one would really need to discuss in detail what it means to measure the spin state at the origin and what it means to measure the magnetic field locally.  Proper descriptions of detectors and measurements are really necessary.)
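(Just to make that 50/50 statement explicit with standard textbook quantum mechanics: the \(z\)-up state is an equal superposition of the \(x\)-axis eigenstates,
\[
|\uparrow_{z}\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow_{x}\rangle + |\downarrow_{x}\rangle\right),
\]
so the probabilities are \(|\langle \uparrow_{x}|\uparrow_{z}\rangle|^{2} = |\langle \downarrow_{x}|\uparrow_{z}\rangle|^{2} = 1/2\).)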

To highlight how subtle the situation is, suppose the spin at the origin is initially half of an EPR pair, in a spin singlet with a second spin near Alpha Centauri, so that the total spin of the two is zero.  Now a measurement of \(s_{z}\) at the origin determines the state of \(s_{z}\) at Alpha Centauri, and the magnetic field near that dipole at Alpha Centauri should be consistent with that.  Thinking about all of the subtleties here has been a good exercise for me in remembering how the seemingly simple statements we make when we teach this stuff can be implicitly very complicated.
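(For reference, the singlet being invoked here is the standard one,
\[
|\psi\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow_{z}\rangle_{\mathrm{O}}\,|\downarrow_{z}\rangle_{\mathrm{AC}} - |\downarrow_{z}\rangle_{\mathrm{O}}\,|\uparrow_{z}\rangle_{\mathrm{AC}}\right),
\]
where the subscripts - my notation - label the spin at the origin and the one at Alpha Centauri.  This state has the same antisymmetric form for any choice of quantization axis, which is why a measurement along any direction at the origin pins down the same-axis result at Alpha Centauri.)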