Monday, September 29, 2014

Penny-wise, pound-foolish: Krulwich blog

I just read that NPR is getting rid of Robert Krulwich's excellent science blog, allegedly as part of cost-cutting.  Cost-cutting?  Really?  Does anyone actually think that it costs a huge sum of money to run that blog?  Surely the most expensive part of the blog is Robert Krulwich's time, which he seems more than willing to give.  Seriously, we should find out what the costs are, and have a Kickstarter project to finance it.  Come on, NPR.

Thursday, September 25, 2014

The persistent regional nature of physics

In the 21st century, with the prevalence of air travel, global near-instantaneous communications, and active cultures of well-financed scientific research on several continents, you would think that the physics enterprise would be thoroughly homogenized, at least across places with similar levels of resources.  Sure, really expensive endeavors would be localized to a few places (e.g., CERN), but the comparatively cheap subfields like condensed matter physics would be rather uniformly spread out.

Strangely, in my (anecdotal, by necessity) experience, that doesn't seem to be the case.  One area of my research, looking at electronic/optical/thermal properties of atomic and molecular-scale junctions, has a very small number of experimental practitioners in the US (I can think of a handful), though there are several more groups in Europe and Asia.  Similarly, the relevant theory community for this work, with a few notable exceptions, is largely in Europe.   This imbalance has become clear in terms of both who I talk with about this work, and where I'm asked to speak.  Interestingly, there are also strong regional tendencies in some of the citation patterns (e.g., European theorists tend to cite European experimentalists), and I'm told this is true in other areas of physics (and presumably chemistry and biology).  I'm sure this has a lot to do with proximity and familiarity - it's much more likely for me to see talks by geographically proximal people, even if it's equally easy for me to read papers from people all over the world.

Basically, subfields of physics show a surprisingly (to me) large degree of regional specialization.  There's been a major emphasis historically on new materials growth and discovery in, e.g., Germany, China, and Japan compared to the US (though this is being rectified, in part thanks to reports like this one).  Atomic physics with cold atoms has historically been dominated by the US and Europe.  I'm sure some of these trends are the result of funding decisions by governments.  Others are due to particularly influential, talented individuals, whose imprint can be long-lasting because the natural timescale for change at universities is measured in decades.  It will be interesting to see whether these inhomogeneities smooth out or persist over the long term.

Tuesday, September 23, 2014

Hype, BICEP2, and all that.

It's been a few years since I've written a post slamming some piece of hype about nanoscience.  In part, I decided that all these this-is-hype posts start to sound the same and therefore aren't worth making unless the situation is truly egregious or somehow otherwise special.  In part, I also felt like I was preaching to the choir, so to speak.  That being said, I think the recent dustup over the BICEP2 experiment is worth mentioning, as an object lesson.
  • If the BICEP2 collaboration had only posted their paper on the arxiv and said that the validity of their interpretation depended on further checks of the background by, e.g., the PLANCK collaboration, no one would have batted an eye.  They could have said that they were excited but cautious, and that, too, would have been fine.  
  • Where they (in my view) crossed the line is when they orchestrated a major media extravaganza around their results, including showing up at Andrei Linde's house and filming his reaction on being told about the data.  Sure, they were excited, but it seems pretty clear that they went well beyond the norm in terms of trying to whip up attention and recognition.
  • While not catastrophic for science or anything hyperbolic like that by itself, this is just another of the death-by-1000-cuts events that erode public confidence in science.  "Why believe what scientists say?  They drum up attention all the time, and then turn out to be wrong!  That's why low fat diets were good for me before they were bad for me!"
  • Bottom line:  If you are thinking of staging a press conference and a big announcement before your paper has even been sent out to referees, please do us all a favor and think again. 

Thursday, September 18, 2014

When freshman physics models fail

When we teach basic ac circuits in second semester freshman physics, or for that matter in intro to electrical engineering, we introduce the idea of an impedance, \(Z\), so that we can make ac circuit problems look like a generalization of Ohm's law.  For dc currents, we say that \(V = I R\), the voltage dropped across a resistor is linearly proportional to the current.  For reactive circuit elements and ac currents, we use complex numbers to keep track of phase shifts between the current and voltage.  Calling \(j \equiv \sqrt{-1}\), we assume that the ac current has a time dependence \(\exp(j \omega t)\).  Then we can say that the impedance \(Z\) of an inductor is \(j \omega L\), and write \(V = Z I\) for the case of an ac voltage across the inductor.
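
To make the bookkeeping concrete, here is a minimal numerical sketch (my own illustration, not from the post, with arbitrary component values) comparing the Faraday-law expression \(V = L\,dI/dt\) with the phasor expression \(V = Z I\) for an ideal inductor:

```python
# Compare the Faraday-law voltage V = L dI/dt with the phasor expression
# V = Z I, Z = j*omega*L, for a sinusoidal current through an ideal inductor.
# Component values are arbitrary illustrative choices.
import numpy as np

L = 1e-3                      # inductance (H)
omega = 2 * np.pi * 1e3       # angular frequency of a 1 kHz drive (rad/s)
I0 = 10e-3                    # current amplitude (A)

t = np.linspace(0.0, 2e-3, 2001)          # two drive periods
I = I0 * np.exp(1j * omega * t)           # complex (phasor) current

V_faraday = L * np.gradient(I, t)         # V = L dI/dt, done numerically
V_phasor = (1j * omega * L) * I           # V = Z I with Z = j*omega*L

# The two agree up to numerical-differentiation error; the factor of j encodes
# the 90-degree phase lead of voltage over current in an inductor.
print(np.max(np.abs(V_faraday - V_phasor)) / np.max(np.abs(V_phasor)))
```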

Where does that come from, though?  Well, it's really Faraday's law.  The magnetic flux through an inductor is given by \(\Phi = LI\).  The voltage induced between the ends of such a coil has magnitude \(d\Phi/dt = L\,(dI/dt) + (dL/dt)\,I\), and in an ordinary inductor, \(dL/dt\) is simply zero.  But not always!

Last fall and into the spring, two undergrads in my lab (aided by two grad students) were doing some measurements of inductors filled with vanadium dioxide powder, a material that goes through a sharp first-order phase transition at about 65 \(^{\circ}\)C from a low-temperature insulator to a high-temperature poor metal.  At the transition, there is also a change in the magnetic susceptibility of the material.  What I rather expected to see was a step-like change in the inductive response going across the transition, and an accompanying step-like change in the loss (due to resistive heating in the metal).  Both of these effects should be small (just at the edge of detection in our scheme).  Instead, the students found something very different - a big peak in the lossy response on warming, and an accompanying dip on cooling.  We stared at this data for weeks, and I asked them to run a whole variety of checks and control experiments to make sure we didn't have something wrong with the setup.  We also found that if we held the temperature fixed in the middle of the peak/dip, the response would drop off to what you'd expect in the absence of any peak/dip.  No, this was clearly a real effect, requiring a time-varying temperature to be apparent, and eventually it dawned on me what was going on:  we were seeing the other contribution to \(d\Phi/dt\)!  As each grain flicks into the new phase, it makes a nearly singular contribution to \(dL/dt\), because the transition for each grain is so rapid.
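
As a rough toy model (my own sketch with made-up parameters, not the analysis in our paper), one can mimic this by letting the inductance step up a tiny amount each time a grain switches during a slow temperature ramp, and then looking at the extra \((dL/dt)\,I\) contribution to the coil voltage:

```python
# Toy model: grains switch abruptly at random times during a slow ramp, each
# giving a small step in L(t); the resulting spikes in dL/dt add an extra
# (dL/dt)*I term to the voltage on top of the usual L*(dI/dt) response.
# All numerical values below are made up purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.0, 100.0, 200_001)      # time (s) during the temperature ramp
L0 = 1e-3                                  # background inductance (H)
dL_grain = 1e-8                            # inductance step per switching grain (H)
n_grains = 500

# Grain switching times, bunched around the middle of the ramp.
t_switch = np.sort(rng.normal(50.0, 10.0, n_grains))

# L(t) = L0 plus one step for every grain that has already switched.
L_t = L0 + dL_grain * np.searchsorted(t_switch, t)

omega = 2 * np.pi * 10.0                   # slow ac drive, 10 Hz
I_t = 1e-3 * np.cos(omega * t)             # drive current (A)

dLdt = np.gradient(L_t, t)
dIdt = np.gradient(I_t, t)

V_ordinary = L_t * dIdt                    # usual inductive response
V_extra = dLdt * I_t                       # Barkhausen-like bursts from dL/dt

# The extra term appears only while the temperature (and hence L) is changing;
# hold the ramp fixed and it vanishes, as in the experiment described above.
print(np.abs(V_extra).max(), np.abs(V_ordinary).max())
```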

This is analogous to the Barkhausen effect, where a pickup coil wrapped around a piece of, e.g., iron and wired into speakers produces pops and crackling sounds as an external magnetic field is swept.  In the Barkhausen case, individual magnetic domains reorient or domain walls propagate suddenly, also giving a big \(d\Phi/dt\).  In our version, temperature is causing sudden changes in susceptibility, but it's the same basic idea.

This was great fun to figure out, and I really enjoy that it shows how the simple model of the impedance of an inductor can fail dramatically if the material in the coil does interesting things.  The paper is available here.


Monday, September 15, 2014

What is a "bad metal"? What is a "strange metal"?

Way back in the mists of time, I wrote about what physicists mean when they say that some material is a metal.  In brief, a metal is a material that has an electrical resistivity that decreases with decreasing temperature, and in bulk has low energy excitations of the electron system down to arbitrarily low energies (no energy gap in the spectrum).  In a conventional or good metal, it makes sense to think about the electrons in terms of a classical picture often called the Drude model or a semiclassical (more quantum mechanical) picture called the Sommerfeld model.  In the former, you can think of the electrons as a gas, with the idea that the electrons travel some typical distance scale, \(\ell\), the mean free path, between scattering events that randomize the direction of the electron motion.  In the latter, you can think of a typical electronic state as a plane-wave-like object with some characteristic wavelength (of the highest occupied state) \(\lambda_{\mathrm{F}}\) that propagates effortlessly through the lattice, until it comes to a defect (a break in the lattice symmetry) that causes it to scatter.  In a good metal, \(\ell \gg \lambda_{\mathrm{F}}\), or equivalently \( (2\pi/\lambda_{\mathrm{F}})\ell \gg 1\).  Electrons propagate many wavelengths between scattering events.  Moreover, it also follows (given how many valence electrons come from each atom in the lattice) that \(\ell \gg a\), where \(a\) is the lattice constant, the atomic-scale distance between adjacent atoms.

Another property of a conventional metal:  At low temperatures, the temperature-dependent part of the resistivity is dominated by electron-electron scattering, which in turn is limited by the number of empty electronic states that are accessible (e.g., not already filled and thus forbidden as final states by the Pauli principle).  The number of thermally excited electrons (which, in a conventional metal called a Fermi liquid, act roughly like ordinary electrons, with charge \(-e\) and spin 1/2) is proportional to \(T\), and therefore the number of empty states available at low energies as "targets" for scattering is also proportional to \(T\), leading to a temperature-varying contribution to the resistivity proportional to \(T^{2}\).
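
In rough symbols (a standard textbook phase-space estimate, not anything specific to this post), one factor of \(k_{B}T/E_{F}\) comes from the thermally excited electrons and one from the available final states:

\[
\frac{1}{\tau_{ee}} \propto \left(\frac{k_{B}T}{E_{F}}\right)^{2}
\qquad \Longrightarrow \qquad
\rho(T) \approx \rho_{0} + A\,T^{2}.
\]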

A bad metal is one in which some or all of these assumptions fail, empirically.  That is, a bad metal has gapless excitations, but if you analyzed its electrical properties and tried to model them conventionally, you might find that the \(\ell\) inferred from the data is small compared to a lattice spacing.  This is called violating the Ioffe-Mott-Regel limit, and it can happen in metals like rutile VO2 or LaSrCuO4 at high temperatures.
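
To see how such an inference works, here is a back-of-the-envelope free-electron Drude estimate (my own sketch, with illustrative numbers rather than data for any particular material): given a resistivity and a carrier density, extract \(\ell\) and the dimensionless product \(k_{\mathrm{F}}\ell\).

```python
# Back-of-the-envelope free-electron Drude estimate (illustrative numbers only,
# not data for any particular material): infer the mean free path l from a
# resistivity and carrier density, and form the product k_F * l.
import numpy as np

hbar = 1.054571817e-34    # J*s
e = 1.602176634e-19       # C
m_e = 9.1093837015e-31    # free-electron mass (kg) - a crude approximation

def kf_ell(resistivity_ohm_m, n_per_m3):
    """Return (mean free path in meters, dimensionless k_F * l)."""
    sigma = 1.0 / resistivity_ohm_m
    k_F = (3.0 * np.pi**2 * n_per_m3) ** (1.0 / 3.0)   # Fermi wavevector
    tau = m_e * sigma / (n_per_m3 * e**2)              # Drude scattering time
    v_F = hbar * k_F / m_e                             # Fermi velocity
    ell = v_F * tau
    return ell, k_F * ell

# Copper-like numbers (rho ~ 1.7e-8 ohm*m, n ~ 8.5e28 /m^3): k_F * l of order
# several hundred, i.e., a good metal.
print(kf_ell(1.7e-8, 8.5e28))

# A resistivity a few hundred times larger with a similar carrier density pushes
# k_F * l down toward order 1 - the Ioffe-Mott-Regel limit discussed above.
print(kf_ell(1e-5, 8.5e28))
```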

A strange metal is a more specific term.  In a variety of systems, instead of having the resistivity scale like \(T^{2}\) at low temperatures, the resistivity scales like \(T\).  This happens in the copper oxide superconductors near optimal doping.  This happens in the related ruthenium oxides.  This happens in some heavy fermion metals right in the "quantum critical" regime.  This happens in some of the iron pnictide superconductors.  In some of these materials, when a technique like photoemission is applied, instead of finding ordinary electron-like quasiparticles, a big, smeared-out "incoherent" signal is detected.  The idea is that in these systems there are no well-defined (in the sense of long-lived) electron-like quasiparticles, and these systems are not Fermi liquids.

There are many open questions remaining - what is the best way to think about such systems?  If an electron is injected from a boring metal into one of these, does it "fractionalize", in the sense of producing a huge number of complicated many-body excitations of the strange metal?  Are all strange metals the same deep down?  Can one really connect these systems with quantum gravity?  Fun stuff.

Saturday, September 06, 2014

What is the Casimir effect?

This is another in an occasional series of posts where I try to explain some physical phenomena and concepts in a comparatively accessible way.  I'm going to try hard to lean toward a lay audience here, with the very real possibility that this will fail.

You may have heard of the Casimir effect, or the Casimir force - it's usually presented in language that refers to "quantum fluctuations of the electromagnetic field", and phrases like "zero point energy" waft around.  The traditional idea is that two electrically neutral, perfectly conducting plates, parallel to each other, will experience an attractive force per unit area given by \( \hbar c \pi^{2}/(240 a^{4})\), where \(a \) is the distance between the plates.  For realistic conductors (and even dielectrics) it is possible to derive analogous expressions.  For a recent, serious scientific review, see here (though I think it's behind a paywall).
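
To get a feel for the numbers, here is a minimal sketch (my own, not from the post) that simply evaluates the ideal-plate expression \(P = \hbar c \pi^{2}/(240 a^{4})\) for a few separations:

```python
# Evaluate the ideal-plate Casimir pressure P = hbar*c*pi^2 / (240 a^4) for a
# few plate separations, just to get a feel for the magnitudes involved.
import numpy as np

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s

def casimir_pressure(a_m):
    """Attractive force per unit area (Pa) between ideal parallel plates a_m apart."""
    return hbar * c * np.pi**2 / (240.0 * a_m**4)

for a_nm in (10, 100, 1000):
    a = a_nm * 1e-9
    print(f"a = {a_nm:4d} nm  ->  P ~ {casimir_pressure(a):.2e} Pa")

# Roughly 1e5 Pa (about an atmosphere) at 10 nm, ~13 Pa at 100 nm, and ~1e-3 Pa
# at 1 micron - which is why the effect only matters for very closely spaced objects.
```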

To get some sense of where these forces come from, we need to think about van der Waals forces.  It turns out that there is an attractive force between neutral atoms, say helium atoms for simplicity.  We are taught to think about the electrons in helium as "looking" like puffy, spherical clouds - that's one way to visualize the electron's quantum wave function, related to the probability of finding the electron in a given spot if you decided to look through some experimental means.  If you imagine using some scattering experiment to "take a snapshot" of the helium atom, you'd find the two electrons located at particular locations, probably away from the nucleus.  In that sense, the helium atom would have an "instantaneous electric dipole moment".  To use an analogy with magnetic dipoles, imagine that there are little bar magnets pointing from the nucleus to each electron.  The influence (electric field in the real atom; magnetic field in the bar magnet analogy) of those dipoles drops off with distance like \(1/r^{3}\).  Now, if there were a second nearby atom, its electrons would experience the fields from the first atom.  This would tend to influence its own dipole (in the magnet analogy, instead of the bar magnets pointing on average in all directions, they would tend to align with the field from the first atom, rather like how a compass needle is influenced by a nearby bar magnet).  The result would be an attractive interaction, with an energy that falls off like \(1/r^{6}\).
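
Schematically (a standard back-of-the-envelope scaling argument, not a derivation from the post): the field of the first atom's instantaneous dipole \(p_{1}\) falls off like \(1/r^{3}\), it induces a dipole \(p_{2} = \alpha E_{1}\) in the second atom with polarizability \(\alpha\), and the energy of that induced dipole in the original field is

\[
E_{1} \sim \frac{p_{1}}{4\pi\epsilon_{0}r^{3}}, \qquad
U \sim -p_{2}E_{1} = -\alpha E_{1}^{2} \sim -\frac{\alpha\,p_{1}^{2}}{(4\pi\epsilon_{0})^{2}\,r^{6}},
\]

which is the familiar \(1/r^{6}\) van der Waals attraction (the corresponding force falls off like \(1/r^{7}\)).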

In this description, we ignored that it takes time for the fields from the first atom to propagate to the second atom.  This is called retardation, and it's one key difference between the van der Waals interaction (when retardation is basically assumed to be unimportant) and so-called Casimir-Polder forces.   

Now we can ask, what about having more than two atoms?  What happens to the forces then?  Is it enough just to think of them as a bunch of pairs and add up the contributions?  The short answer is, no, you can't just think about pair-wise interactions (interference effects and retardation make it necessary to treat extended objects carefully).

What about exotic quantum vacuum fluctuations, you might ask.  Well, in some sense, you can think about those fluctuations and interactions with them as helping to set the randomized flipping dipole orientations in the first place, though that's not necessary.  It has been shown that you can do full, relativistic, retarded calculations of these fluctuating dipole effects and reproduce the Casimir results (and with greater generality) without saying much of anything about zero point stuff.  That is why, while it is fun to speculate about zero point energy and so forth (see here for an entertaining and informative article - again, sorry about the paywall), there really doesn't seem to be any way to get net energy "out of the vacuum".