Monday, September 15, 2014

What is a "bad metal"? What is a "strange metal"?

Way back in the mists of time, I wrote about what physicists mean when they say that some material is a metal.  In brief, a metal is a material whose electrical resistivity decreases with decreasing temperature, and which in bulk has low-energy excitations of the electron system down to arbitrarily low energies (no energy gap in the spectrum).  In a conventional or good metal, it makes sense to think about the electrons in terms of a classical picture often called the Drude model, or a semiclassical (more quantum mechanical) picture called the Sommerfeld model.  In the former, you can think of the electrons as a gas, with the idea that the electrons travel some typical distance, \(\ell\), the mean free path, between scattering events that randomize the direction of the electron motion.  In the latter, you can think of a typical electronic state as a plane-wave-like object with some characteristic wavelength (that of the highest occupied state), \(\lambda_{\mathrm{F}}\), that propagates effortlessly through the lattice until it comes to a defect (a break in the lattice symmetry) that causes it to scatter.  In a good metal, \(\ell \gg \lambda_{\mathrm{F}}\), or equivalently \( (2\pi/\lambda_{\mathrm{F}})\ell \gg 1\).  Electrons propagate many wavelengths between scattering events.  Moreover, it also follows (given how many valence electrons come from each atom in the lattice) that \(\ell \gg a\), where \(a\) is the lattice constant, the atomic-scale distance between adjacent atoms.
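To put rough numbers on this, here is a minimal sketch of that free-electron bookkeeping, assuming standard textbook values for copper (the carrier density and room-temperature resistivity below are my illustrative inputs, not numbers from any particular measurement):

```python
# A minimal free-electron (Drude/Sommerfeld) estimate of the mean free path
# and of k_F * l for a good metal, using textbook copper parameters.
from scipy.constants import hbar, m_e, e, pi

n = 8.5e28           # conduction electron density of Cu, m^-3 (textbook value)
rho = 1.7e-8         # room-temperature resistivity of Cu, ohm*m (textbook value)
sigma = 1.0 / rho    # conductivity, S/m

k_F = (3 * pi**2 * n) ** (1.0 / 3.0)   # Fermi wavevector
v_F = hbar * k_F / m_e                 # Fermi velocity
tau = m_e * sigma / (n * e**2)         # Drude scattering time
ell = v_F * tau                        # mean free path
lambda_F = 2 * pi / k_F                # Fermi wavelength

print(f"l ~ {ell*1e9:.0f} nm, lambda_F ~ {lambda_F*1e9:.2f} nm, k_F*l ~ {k_F*ell:.0f}")
```

With those inputs you get a mean free path of a few tens of nanometers, a Fermi wavelength of about half a nanometer, and \(k_{\mathrm{F}}\ell\) in the hundreds - comfortably in the good-metal regime.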

Another property of a conventional metal:  At low temperatures, the temperature-dependent part of the resistivity is dominated by electron-electron scattering, which in turn is limited by the number of empty electronic states that are accessible (i.e., not already filled and thus forbidden as final states by the Pauli principle).  The number of thermally excited electrons (which, in a conventional metal called a Fermi liquid, act roughly like ordinary electrons, with charge \(-e\) and spin 1/2) is proportional to \(T\), and the number of empty states available at low energies as "targets" for scattering is also proportional to \(T\), leading to a temperature-varying contribution to the resistivity proportional to \(T^{2}\).
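In symbols, the back-of-the-envelope version of that phase-space argument (just the scaling, with all numerical factors dropped) is

\[ \frac{1}{\tau_{\mathrm{ee}}} \sim \frac{E_{\mathrm{F}}}{\hbar}\left(\frac{k_{\mathrm{B}}T}{E_{\mathrm{F}}}\right)^{2}, \qquad \rho(T) \approx \rho_{0} + A T^{2}, \]

where one factor of \(k_{\mathrm{B}}T/E_{\mathrm{F}}\) counts the thermally excited electrons, the second counts the empty final states available to them, and \(\rho_{0}\) is the residual resistivity from static disorder.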

A bad metal is one in which some or all of these assumptions fail, empirically.  That is, a bad metal has gapless excitations, but if you analyzed its electrical properties and tried to model them conventionally, you might find that the \(\ell\) you infer from the data is small compared to a lattice spacing.  This is called violating the Ioffe-Mott-Regel limit, and it can happen in metals like rutile VO\(_{2}\) or La\(_{2-x}\)Sr\(_{x}\)CuO\(_{4}\) at high temperatures.
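To see what "violating" that limit looks like, here is the same free-electron bookkeeping as in the sketch above, but with bad-metal-ish inputs that I've made up purely for illustration (a resistivity of order one m\(\Omega\)-cm and roughly one carrier per unit cell):

```python
# Same Drude/Sommerfeld bookkeeping, but with illustrative bad-metal numbers
# (these inputs are assumptions for the sake of the example, not data).
from scipy.constants import hbar, e, pi

n = 1.0e28            # carrier density, m^-3 (assumed)
rho = 1.0e-5          # resistivity of 1 mOhm*cm, in ohm*m (assumed)
a = 0.4e-9            # lattice constant, m (assumed)

k_F = (3 * pi**2 * n) ** (1.0 / 3.0)
ell = hbar * k_F * (1.0 / rho) / (n * e**2)   # l = v_F * tau; the electron mass cancels

print(f"inferred l ~ {ell*1e9:.2f} nm vs. lattice constant a = {a*1e9:.1f} nm")
print(f"k_F * l ~ {k_F*ell:.1f}")
```

The inferred \(\ell\) comes out shorter than the lattice constant and \(k_{\mathrm{F}}\ell\) is of order one, which is the signal that the quasiclassical picture of electrons propagating freely between well-separated scattering events has stopped making sense.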

A strange metal is a more specific term.  In a variety of systems, instead of having the resistivity scale like \(T^{2}\) at low temperatures, the resistivity scales like \(T\).  This happens in the copper oxide superconductors near optimal doping.  This happens in the related ruthenium oxides.  This happens in some heavy fermion metals right in the "quantum critical" regime.  This happens in some of the iron pnictide superconductors.  In some of these materials, when a technique like photoemission is applied, instead of finding ordinary electron-like quasiparticles, one detects a big, smeared-out "incoherent" signal.  The idea is that in these systems there are no well-defined (in the sense of long-lived) electron-like quasiparticles, and these systems are not Fermi liquids.

There are many open questions remaining - what is the best way to think about such systems?  If an electron is injected from a boring metal into one of these, does it "fractionalize", in the sense of producing a huge number of complicated many-body excitations of the strange metal?  Are all strange metals the same deep down?  Can one really connect these systems with quantum gravity?  Fun stuff.

Saturday, September 06, 2014

What is the Casimir effect?

This is another in an occasional series of posts where I try to explain some physical phenomena and concepts in a comparatively accessible way.  I'm going to try hard to lean toward a lay audience here, with the very real possibility that this will fail.

You may have heard of the Casimir effect, or the Casimir force - it's usually presented in language that refers to "quantum fluctuations of the electromagnetic field", and phrases like "zero point energy" waft around.  The traditional idea is that two electrically neutral, perfectly conducting plates, parallel to each other, will experience an attractive force per unit area given by \( \hbar c \pi^{2}/(240 a^{4})\), where \(a \) is the distance between the plates.  For realistic conductors (and even dielectrics) it is possible to derive analogous expressions.  For a recent, serious scientific review, see here (though I think it's behind a paywall).
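Just to get a feel for the magnitude, here is a quick numerical evaluation of that ideal-plate expression (perfect conductors at zero temperature - a back-of-the-envelope check, not a calculation for real materials):

```python
# Attractive Casimir pressure between ideal, parallel conducting plates:
# P = hbar * c * pi^2 / (240 * a^4), evaluated at a few separations.
from scipy.constants import hbar, c, pi

def casimir_pressure(a):
    """Force per unit area (Pa) between ideal plates separated by a (m)."""
    return hbar * c * pi**2 / (240 * a**4)

for a in (1e-6, 100e-9, 10e-9):
    print(f"a = {a*1e9:6.0f} nm  ->  P ~ {casimir_pressure(a):.3g} Pa")
```

At a one-micron separation this works out to about a millipascal; by 10 nm it is already comparable to atmospheric pressure, which gives you some idea of why these forces matter for very small, closely spaced structures.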

To get some sense of where these forces come from, we need to think about van der Waals forces.  It turns out that there is an attractive force between neutral atoms - say helium atoms, for simplicity.  We are taught to think about the electrons in helium as "looking" like puffy, spherical clouds - that's one way to visualize the electrons' quantum wave function, related to the probability of finding an electron in a given spot if you looked by some experimental means.  If you imagine using some scattering experiment to "take a snapshot" of the helium atom, you'd find the two electrons located at particular spots, probably away from the nucleus.  In that sense, the helium atom would have an "instantaneous electric dipole moment".  To use an analogy with magnetic dipoles, imagine that there are little bar magnets pointing from the nucleus to each electron.  The influence (the electric field in the real atom; the magnetic field in the bar magnet analogy) of those dipoles drops off with distance like \(1/r^{3}\).  Now, if there were a second atom nearby, its electrons would experience the fields from the first atom.  This would tend to influence its own dipole (in the magnet analogy, instead of the bar magnets pointing on average in all directions, they would tend to align with the field from the first atom, rather like how a compass needle is influenced by a nearby bar magnet).  The result would be an attraction, with an interaction energy that falls off like \(1/r^{6}\).
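Schematically, keeping only the scaling and dropping all numerical factors and the proper quantum-mechanical averaging: the field of the first atom's instantaneous dipole \(p_{1}\) polarizes the second atom (polarizability \(\alpha\)), and the interaction of that induced dipole with the field that produced it gives the attraction,

\[ E_{1} \sim \frac{p_{1}}{4\pi\epsilon_{0}r^{3}}, \qquad p_{2} \sim \alpha E_{1}, \qquad U \sim -p_{2}E_{1} \sim -\frac{\alpha\,\langle p_{1}^{2}\rangle}{(4\pi\epsilon_{0})^{2}\,r^{6}}. \]

Even though the first atom's dipole averages to zero, \(\langle p_{1}^{2}\rangle\) does not, so the attraction survives the averaging.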

In this description, we ignored that it takes time for the fields from the first atom to propagate to the second atom.  This is called retardation, and it's one key difference between the van der Waals interaction (when retardation is basically assumed to be unimportant) and so-called Casimir-Polder forces.   

Now we can ask, what about having more than two atoms?  What happens to the forces then?  Is it enough just to think of them as a bunch of pairs and add up the contributions?  The short answer is, no, you can't just think about pair-wise interactions (interference effects and retardation make it necessary to treat extended objects carefully).

What about exotic quantum vacuum fluctuations, you might ask?  Well, in some sense you can think of those fluctuations, and interactions with them, as helping to set up the randomly flipping dipole orientations in the first place, though that's not necessary.  It has been shown that you can do full, relativistic, retarded calculations of these fluctuating-dipole effects and reproduce the Casimir results (and with greater generality) without saying much of anything about zero-point stuff.  That is why, while it is fun to speculate about zero-point energy and so forth (see here for an entertaining and informative article - again, sorry about the paywall), there really doesn't seem to be any way to get net energy "out of the vacuum".

Thursday, August 28, 2014

Two cool papers on the arxiv

The beginning of the semester is a crazy time, so blogging is a little light right now.  Still, here are a couple of recent papers from the arxiv that struck my fancy.

arxiv:1408.4831 - "Self-replicating cracks:  A collaborative fracture mode in thin films," by Marthelot et al.
This is very cool classical physics.  In thin, brittle films moderately adhering to a substrate, there can be a competition between the stresses involved with crack propagation and the stresses involved with delamination of the film.  The result can be very pretty pattern formation and impressively rich behavior.  A side note:  All cracks are really nanoscale phenomena - the actual breaking of bonds at the tip of the propagating crack is firmly in the nano regime.

arxiv:1408.6496 - "Non-equilibrium probing of two-level charge fluctuators using the step response of a single electron transistor," by Pourkabirian et al.
I've written previously (wow, I've been blogging a while) about "two-level systems", the local dynamic degrees of freedom that are ubiquitous in disordered materials.  These little fluctuators have a statistically broad distribution of level asymmetries and tunneling times.  As a result, when perturbed, the ensemble of TLSs does not respond with a simple exponential decay (as would a system with a single characteristic time scale); instead, it produces a decaying response that is logarithmic in time.  For my PhD I studied such (agonizingly) slow relaxations in the dielectric response and acoustic response of glasses (like SiO\(_{2}\)) at cryogenic temperatures.  Here, the authors use the incredible charge sensitivity of a single-electron transistor (SET) to look at the relaxation of the local charge environment near such disordered dielectrics.  The TLSs often have electric dipole moments, so their relaxation changes the local electrostatic potential near the SET.  Guess what:  logarithmic relaxations.  Cute, and it brings back memories of loooooong experiments from grad school.
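If you want to see how a broad distribution of time scales turns into a log-in-time decay, here is a toy numerical illustration (the parameters are invented for the purpose and have nothing to do with the actual paper):

```python
# Toy model: many independent fluctuators, each relaxing exponentially, with
# relaxation times spread log-uniformly over several decades (an assumption
# made for illustration).  The summed response decays logarithmically in time.
import numpy as np

rng = np.random.default_rng(0)
taus = 10 ** rng.uniform(-6, 2, size=200_000)   # relaxation times from 1 us to 100 s

for t in (1e-4, 1e-3, 1e-2, 1e-1, 1e0):
    remaining = np.mean(np.exp(-t / taus))       # average over the ensemble
    print(f"t = {t:7.0e} s   remaining response = {remaining:.3f}")
```

The remaining response drops by roughly the same amount for each decade of elapsed time, which is exactly what a logarithmic decay looks like on a semilog plot.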

Wednesday, August 20, 2014

Science and engineering research infrastructure - quo vadis?

I've returned from the NSF's workshop regarding the successor program to the NNIN.  While there, I learned a few interesting things, and I want to point out a serious issue facing science and engineering education and research (at least in the US).
  • The NNIN has been (since 2010) essentially level-funded at $16M/yr for the whole program, and there are no indications that this will change in the foreseeable future.  (Inflation erodes the value of that sum over time as well.)  The NNIN serves approximately 6000 users per year (with turnover of about 2200 users/yr).  For perspective, a truly cutting-edge transmission electron microscope, one instrument, costs about $8M.  The idea that the NNIN program can directly create bleeding-edge shared research hardware across the nation is misguided.
  • For comparison, the US DOE has five nano centers.  The typical budget for each one is about $20M/yr.  Each nano center can handle around 450 users/yr.  Note that these nano centers are very different things from NNIN sites - they do not charge user fees, and they are co-located with some truly unique characterization facilities (synchrotrons, neutron sources).  Still, the DOE is spending roughly seventeen times as much per user per year in its program as the NSF spends through the NNIN.
  • Even the DOE, with their much larger investment, doesn't really know how to handle "recapitalization".  That is, there was money available to buy cutting edge tools to set up their centers initially, but there is no clear, sustainable financial path to be able to replace aging instrumentation.  This is exactly the same problem faced by essentially every research university in the US.  Welcome to the party.  
  • Along those lines:  As far as I can tell (and please correct me if I'm wrong about this!), every US federal granting program intended to have a component associated with increasing shared research infrastructure at universities (this includes the NSF MRI program, MRSEC, STC, ERC, and CCI; DOE instrumentation grants and DOE centers like the EFRCs; and DOD equipment programs like DURIPs) is either level-funded or facing declining funding levels.  Programs like these often favor acquisition of new, unusual tools over standard "bread-and-butter" instruments as well.  Universities are going to have to rely increasingly on internal investment to acquire/replace instrumentation.  Given that there is already considerable resentment/concern about perceived stratification of research universities into "haves" and "have-nots", it's hard to see how this is going to get much better any time soon.
  • To potential donors who are really interested in the problem of graduate (and advanced undergrad) hands-on science and engineering education:  PLEASE consider this situation.  A consortium of donors who raised, say, $300M in an endowment could support the equivalent of the NNIN on the investment returns for decades to come.  This could have an impact on thousands of students/postdocs per year, for years at a time.  The idea that this is something of a return to the medieval system of rich patrons supporting the sciences is distressing.  However, given the constraints of government finances and the enormous sums of money out there in the hands of some brilliant, tech-savvy people who appreciate the importance of an educated workforce, I hope someone will take this possibility seriously.  To put this in further perspective:  I heard on the radio yesterday that the college athletics complex being built at Texas A&M University costs $400M.  Think about that.  A university athletic booster organization was able to raise that kind of money for something that narrowly focused (sorry, Aggies, but you know it's true).

Sunday, August 17, 2014

Distinguishable from magic?

Arthur C. Clarke's most famous epigram is that "Any sufficiently advanced technology is indistinguishable from magic."  A question that I've heard debated in recent years is, have we gone far enough down that road that it's adversely affecting the science and engineering education pipeline?  There was a time when young people interested in technology could rip things apart and actually get a moderately good sense of how those gadgets worked.  This learning-through-disassembly approach is still encouraged, but the scope is much more limited. 

For example, when I was a kid (back in the dim mists of time known as the 1970s and early 80s), I ripped apart transistor radios and at least one old, busted TV.  Inside the radios, I saw how the AM tuner worked by sliding a metal contact along a wire solenoid - I learned later that this was tuning an inductor-capacitor resonator, and that the then-mysterious diodes in there (the only parts on the circuit board with some kind of polarity stamped on them, aside from the electrolytic capacitors on the power supply side) were somehow important for getting the signal out.  Inside the TV, I saw that there was a whopping big transformer, some electromagnets, and that the screen was actually the front face of a big (13 inch diagonal!) vacuum tube.  My dad explained to me that the electromagnets helped raster an electron beam back and forth in there, which smacked into phosphors on the inside of the screen.  Putting a big permanent magnet up against the front of the screen distorted the picture and warped the colors in a cool way that depended strongly on the distance between the magnet and the screen, and on the magnet's orientation, thanks to the magnet screwing with the electron beam's trajectory.

Now, a kid opening up an iPod or a little portable radio will find undocumented integrated circuits that do the digital tuning.  Flat-screen LCD TVs are also much more black-box-like (though the light source is obvious), again containing lots of integrated circuits.  Touch screens, the accelerometers that determine which way to orient the image on a cell phone's screen, the chip that actually takes the pictures in a cell phone camera - all of these seem almost magical, and they are either packaged monolithically (and inscrutably), or all the really cool bits are too small to see without a high-power optical microscope.  Even automobiles are harder to figure out, with lots of sensors, solid-state electronics, and an architecture that often actively hampers investigation.

I fully realize that I'm verging on sounding like a grumpy old man with an onion on his belt (non-US readers: see transcript here).  Still, the fact that an understanding of everyday technology is becoming increasingly inaccessible, disconnected from common sense and daily experience, does seem like a cause for concern.  Chemistry sets, electronics kits, Arduinos and Raspberry Pis - these are all ways to fight this trend, and their use should be encouraged!

Tuesday, August 12, 2014

Some quick cool science links

Here are a few neat things that have cropped up recently:
  • The New Horizons spacecraft is finally getting close enough to be able to image Pluto and Charon orbiting about their common (approximate, because of the other moons) center of mass.
  • The Moore Foundation announced the awardees in the materials synthesis component of their big program titled Emergent Phenomena in Quantum Systems.  Congratulations all around.  
  • Here's a shock:  congressmen in the pockets of the United Launch Alliance don't like SpaceX.
  • Cute toy.
  • The Fields Medal finally goes to a woman, Maryam Mirzakhani.  Also among this year's medalists: Manjul Bhargava, who gave the single clearest math talk I've ever seen, using only a blank transparency and a felt-tip pen.

Saturday, August 09, 2014

Nanotubes by design

There is a paper in this week's issue of Nature (with an accompanying news commentary by my colleague Jim Tour) in which the authors appear to have solved a major challenge of more than two decades' standing: growing single-walled carbon nanotubes of one specific type.  For a general audience:  You can imagine rolling up a single graphene sheet and joining the edges to make a cylinder.  There are many different ways to do this.  The issue is that different ways of rolling up the sheet lead to different electronic properties, and the energetic differences between the different tube types are very small.  When people have tried to grow nanotubes, by any number of methods, they tend to end up with a bunch of tube types of similar diameters, rather than just the one they want.
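For the curious, the standard way of labeling "how the sheet gets rolled up" is by a pair of integers \((n, m)\), and in the usual zone-folding picture a tube is (nominally) metallic when \(n - m\) is a multiple of 3 and semiconducting otherwise.  Here is a tiny sketch of that bookkeeping - the rule and the graphene lattice constant are textbook values, and the particular \((n, m)\) examples are just ones I picked, not necessarily those in the paper:

```python
# Standard (n, m) chirality bookkeeping for single-walled carbon nanotubes.
import math

A_GRAPHENE = 0.246  # graphene lattice constant, nm (textbook value)

def diameter_nm(n, m):
    """Tube diameter in nm for chiral indices (n, m)."""
    return A_GRAPHENE * math.sqrt(n**2 + n*m + m**2) / math.pi

def is_metallic(n, m):
    """Zone-folding rule: (n - m) divisible by 3 means (nominally) metallic."""
    return (n - m) % 3 == 0

for n, m in [(6, 6), (7, 5), (9, 1), (10, 0)]:
    kind = "metallic" if is_metallic(n, m) else "semiconducting"
    print(f"({n},{m}): d ~ {diameter_nm(n, m):.2f} nm, {kind}")
```

Notice how similar the diameters of these different tube types are; that is a big part of why growing or sorting just one type is so difficult.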

The authors of this new paper have taken an approach that has great visual appeal.  They have used synthetic chemistry to make a planar hydrocarbon molecule that looks like they've taken the geodesic hemisphere end-cap of their desired tube and cut it to lay it flat - like making a funky projection to create a flat map of a globe.  When placed on a catalytically active Pt surface at elevated temperatures, this molecular seed can fold up into an endcap and start growing as a nanotube.  The authors show Raman spectroscopic evidence that they only produce the desired tube type (in this case, a metallic nanotube).  The picture is nice, and the authors imply that they could do this for other desired tube types.  It's not clear whether this is scalable for large volumes, but it's certainly encouraging.

This is very cute.  People in the nanotube game have been trying to do selective synthesis for twenty years.  Earlier this summer, a competing group showed progress in this direction using nanoparticle seeds, an approach pursued by many over the years with limited success.  It will be fun to see where this goes.  This is a good example of how long it can take to solve some materials problems.