Monday, June 18, 2018

Scientific American - what the heck is this?

Today, Scientific American ran this on their blogs page.  This article calls to mind weird mysticism stuff like crystal energy, homeopathy, and tree waves (a reference that attendees of mid-1990s APS meetings might get), and would not be out of place in Omni Magazine in about 1979.

I’ve written before about SciAm and their blogs.  My offer still stands, if they ever want a condensed matter/nano blog that I promise won’t verge into hype or pseudoscience.

Saturday, June 16, 2018

Water at the nanoscale

One reason the nanoscale is home to some interesting physics and chemistry is that the nanometer is a typical scale for molecules.   When the size of your system becomes comparable to the molecular scale, you can reasonably expect something to happen, in the sense that it should no longer be possible to ignore the fact that your system is actually built out of molecules.

Consider water as an example.  Water molecules have a finite size (on the order of 0.2 nm between the hydrogens), a definite bent shape, and a bit of an electric dipole moment (the oxygen has a slight excess of electron density and the hydrogens have a slight deficit).  In the liquid state, the water molecules are basically jostling around and have a typical intermolecular distance comparable to the size of the molecule.  If you confine water down to a nanoscale volume, you know at some point the finite size and interactions (steric and otherwise) between the water molecules have to matter.  For example, squeeze water down to a few molecular layers between solid boundaries, and it starts to act more like an elastic solid than a viscous fluid.  

Another consequence of this confinement in water can be seen in measurements of its dielectric properties - how charge inside rearranges itself in response to an external electric field.  In bulk liquid water, there are two components to the dielectric response.  The electronic clouds in the individual molecules can polarize a bit, and the molecules themselves (with their electric dipole moments) can reorient.  This latter contribution ends up being very important for dc electric fields, and as a result the dc relative dielectric permittivity of water, \(\kappa\), is about 80 (compared with 1 for the vacuum, and around 3.9 for SiO2).   At the nanoscale, however, the motion of the water molecules should be hindered, especially near a surface.  That should depress \(\kappa\) for nanoconfined water.

In a preprint on the arxiv this week, that is exactly what is found.  Using a clever design, water is confined in nanoscale channels defined by a graphite floor, hexagonal boron nitride (hBN) walls, and a hBN roof.  A conductive atomic force microscope tip is used as a top electrode, the graphite is used as a bottom electrode, and the investigators are able to see results consistent with \(\kappa\) falling to roughly 2.1 for layers about 0.6-0.9 nm thick adjacent to the channel floor and ceiling.  The result is neat, and it should provide a very interesting test case for attempts to model these confinement effects computationally.
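A crude way to see how interfacial layers with suppressed response drag down the overall permittivity is to treat the film as three dielectric slabs in series.  This is just a capacitors-in-series sketch, not a real model; the interfacial thickness and \(\kappa\) values are loosely borrowed from the numbers above:

```python
# Three-layer estimate of the effective dielectric constant of a nanoconfined
# water film: two interfacial layers (kappa_i ~ 2.1, ~0.75 nm each, loosely
# following the preprint's numbers) in series with bulk-like water
# (kappa_b = 80) in between.  Capacitances in series add inversely.

def effective_kappa(t_nm, d_nm=0.75, kappa_i=2.1, kappa_b=80.0):
    """Effective relative permittivity of a film of total thickness t_nm."""
    if t_nm <= 2 * d_nm:               # film is entirely "interfacial"
        return kappa_i
    core = t_nm - 2 * d_nm             # bulk-like middle region
    return t_nm / (2 * d_nm / kappa_i + core / kappa_b)

for t in (1.5, 3, 10, 100):
    print(f"t = {t:6.1f} nm  ->  kappa_eff ~ {effective_kappa(t):5.1f}")
```

Notice that the low-\(\kappa\) interfacial layers dominate even for films tens of nanometers thick - a series combination is throttled by its smallest capacitor.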

Friday, June 08, 2018

What are steric interactions?

When I first started reading chemistry papers, one piece of jargon jumped out at me:  "steric hindrance", which is an abstruse way of saying that you can't force pieces of molecules (atoms or groups of atoms) to pass through each other.  In physics jargon, they have a "hard core repulsion".  If you want to describe the potential energy of two atoms as you try to squeeze one into the volume of the other, you get a term that blows up very rapidly, like \(1/r^{12}\), where \(r\) is the distance between the nuclei.  Basically, you can do pretty well treating atoms like impenetrable spheres with diameters given by their outer electronic orbitals.  Indeed, Robert Hooke went so far as to infer, from the existence of faceted crystals, that matter is built from effectively impenetrable little spherical atoms.
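Here's the standard Lennard-Jones cartoon of this behavior, with the \(1/r^{12}\) hard-core term plus a weaker \(1/r^{6}\) attraction.  This is a sketch in reduced units, not a fit to any particular atom:

```python
# Lennard-Jones pair potential: a 1/r^12 "steric" repulsion plus a 1/r^6
# attraction.  Note how fast the energy blows up once r drops below the
# atomic diameter sigma.
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """LJ potential in units of the well depth epsilon and size sigma."""
    s6 = (sigma / r) ** 6
    return 4 * epsilon * (s6 * s6 - s6)

for r in (0.9, 0.95, 1.0, 2 ** (1 / 6), 1.5):
    print(f"r = {r:5.3f} sigma  ->  U = {lennard_jones(r):8.2f} epsilon")
```

The minimum sits at \(r = 2^{1/6}\sigma\) with depth \(-\epsilon\); squeezing in just 10% closer than \(\sigma\) already costs several \(\epsilon\) - that's the "hindrance".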

It's a common thing in popular treatments of physics to point out that atoms are "mostly empty space".  With hydrogen, for example, if you said that the proton was the size of a pea, then the 1s orbital (describing the spatial probability distribution for finding the point-like electron) would be around 250 m in radius.  So, if atoms are such big, puffy objects, then why can't two atoms overlap in real space?  It's not just the electrostatic repulsion, since each atom is overall neutral.
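The scaling behind the pea analogy is a one-liner to check (the pea radius below is my assumption, obviously):

```python
# Scaling check on the "pea-sized proton" picture: blow the proton up to
# pea size and see how large the 1s orbital becomes at the same scale.
a0 = 5.29e-11      # Bohr radius, m
r_p = 8.4e-16      # proton charge radius, m
r_pea = 4e-3       # radius of a pea, m (an assumption)

scale = r_pea / r_p
print(f"1s orbital radius at pea scale: ~{a0 * scale:.0f} m")   # roughly 250 m
```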

The answer is (once again) the Pauli exclusion principle (PEP) and the fact that electrons obey Fermi statistics.  Sometimes the PEP is stated in a mathematically formal way that can obscure its profound consequences.  For our purposes, the bottom line is:  It is apparently a fundamental property of the universe that you can't stick two identical fermions (including having the same spin) in the same quantum state.  At the risk of getting technical, this can mean a particular atomic orbital, or more generally it can be argued to mean the same little "cell" of volume \(h^{3}\) in r-p phase space.  It just can't happen.

If you try to force it, what happens instead?  In practice, to get two carbon atoms, say, to overlap in real space, you would have to make the electrons in one of the atoms leave their ordinary orbitals and make transitions to states with higher kinetic energies.  That energy has to come from somewhere - you have to do work and supply that energy to squeeze two atoms into the volume of one.  Books have been written about this.

Leaving aside for a moment the question of why rigid solids are rigid, it's pretty neat to realize that the physics principle that keeps you from falling through your chair or the floor is really the same principle that holds up white dwarf stars.

Thursday, May 31, 2018

Coming attractions and short items

Here are a few items of interest. 

I am planning to write a couple of posts about why solids are rigid, and in the course of thinking about this, I made a couple of discoveries:

  • When you google "why are solids rigid?", you find a large number of websites that all have exactly the same wording:  "Solids are rigid because the intermolecular forces of attraction that are present in solids are very strong. The constituent particles of solids cannot move from their positions they can only vibrate from their mean positions."  Note that this is (1) not correct, and (2) also not much of an answer.  It seems that the wording is popular because it's an answer that has appeared on the IIT entrance examinations in India.
  • I came across an absolutely wonderful paper by Victor Weisskopf, "Of Atoms, Mountains, and Stars:  A Study in Qualitative Physics", Science 187, 605-612 (1975).  Here is the only link I could find that might be reachable without a subscription.  It is a great example of "thinking like a physicist", showing how far one can get by starting from simple ideas and using order-of-magnitude estimates.  This seems like something that should be required reading of most undergrad physics majors, and more besides.
In politics-of-science news:

  • There is an amendment pending in the US Congress on the big annual defense bill that has the potential to penalize US researchers who have received any (presently not well-defined) resources from Chinese talent recruitment efforts.  (Russia, Iran, and North Korea are also mentioned, but they're irrelevant here, since they are not running such programs.)  The amendment would allow the DOD to deny these folks research funding.  The idea seems to be that such people are perceived by some as a risk in terms of taking DOD-relevant knowledge and giving China an economic or strategic benefit.  Many major US research universities have been encouraging closer ties with China and Chinese universities in the last 15 years.  Makes you wonder how many people would be affected.
  • The present US administration, according to AP, is apparently about to put in place (June 11?) new limitations on Chinese graduate student visas, for those working in STEM (and especially in fields mentioned explicitly in the Chinese government's big economic plan).   It would make relevant student visas one year in duration.  Given that the current visa renewal process can already barely keep up with the demand, it seems like this could become an enormous headache.  I could go on at length about why I think this is a bad idea.  Given that it's just AP that is reporting this so far, perhaps it won't happen or will be more narrowly construed.  We'll see.

Tuesday, May 29, 2018

What is tunneling?


I first learned about quantum tunneling from science fiction, specifically a short story by Larry Niven.  The idea is often tossed out there as one of those "quantum is weird and almost magical!" concepts.  It is surely far from our daily experience.

Imagine a car of mass \(m\) rolling along a road toward a small hill.  Let’s make the car and the road ideal – we’re not going to worry about friction or drag from the air or anything like that.   You know from everyday experience that the car will roll up the hill and slow down.  This ideal car’s total energy is conserved, and it has (conventionally) two pieces, the kinetic energy \(p^2/2m\) (where \(p\) is the momentum; here I’m leaving out the rotational contribution of the tires), and the gravitational potential energy, \(mgz\), where \(g\) is the gravitational acceleration and \(z\) is the height of the center of mass above some reference level.  As the car goes up, so does its potential energy, meaning its kinetic energy has to fall.  When the kinetic energy hits zero, the car stops momentarily before starting to roll backward down the hill.  The spot where the car stops is called a classical turning point.  Without some additional contribution to the energy, you won’t ever find the car on the other side of that hill, because the region beyond the turning point is “classically forbidden”.  We’d either have to sacrifice conservation of energy, or the car would have to have negative kinetic energy to exist in the forbidden region.  Since the kinetic piece is proportional to \(p^2\), to have negative kinetic energy would require \(p\) to be imaginary (!).

However, we know that the car is really a quantum object, built out of a huge number (more than \(10^{27}\)) of other quantum objects.  The spatial locations of quantum objects can be described with “wavefunctions”, and you need to know a couple of things about these to get a feel for tunneling.  For the ideal case of a free particle with a definite momentum, the wavefunction really looks like a wave with a wavelength \(h/p\), where \(h\) is Planck’s constant.  Because a wave extends throughout all space, the probability of finding the ideal free particle anywhere is equal, in agreement with the oft-quoted uncertainty principle. 

Here’s the essential piece of physics:  In a classically forbidden region, the wavefunction decays exponentially with distance (mathematically equivalent to the wave having an imaginary wavelength), but it can’t change abruptly.  That means that if you solve the problem of a quantum particle incident on a finite (in energy and spatial size) barrier from one side, there is always some probability that the particle will be found on the far side of the classically forbidden region.  

This means that it’s technically possible for the car to “tunnel” through the hillside and end up on the downslope.  I would not recommend this as a transportation strategy, though, because that’s incredibly unlikely.  The more massive the particle, and the more forbidden the region (that is, the more negative the classical kinetic energy of the particle would have to be in the barrier), the faster the exponential decay of the probability of getting through.  For a 1000 kg car trying to tunnel through a 10 cm high speed bump 1 m long, the probability is around \(\exp(-2.7 \times 10^{37})\).  That kind of number is why quantum tunneling is not an obvious part of your daily existence.  For something much less massive, like an electron, the tunneling probability from, say, a metal tip to a metal surface decays by around a factor of \(e^2\) for every 0.1 nm of additional tip-surface separation.  It’s that exponential sensitivity to geometry that makes scanning tunneling microscopy possible.
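Both of those numbers fall out of a crude square-barrier WKB estimate, \(T \sim \exp(-2\kappa L)\) with \(\kappa = \sqrt{2m(V-E)}/\hbar\).  The 5 eV electron barrier below is an assumed, typical metal work function, and the speed bump is treated as a square barrier:

```python
import math

hbar = 1.055e-34          # J s

def wkb_exponent(m, V_minus_E, L):
    """Crude WKB suppression exponent 2*kappa*L for a square barrier."""
    kappa = math.sqrt(2 * m * V_minus_E) / hbar
    return 2 * kappa * L

# 1000 kg car vs. a 10 cm high "speed bump", idealized as a 1 m square barrier
exp_car = wkb_exponent(m=1000, V_minus_E=1000 * 9.81 * 0.10, L=1.0)
print(f"car: probability ~ exp(-{exp_car:.1e})")          # exponent ~ 2.7e37

# electron vs. a ~5 eV work-function barrier: decay per 0.1 nm of extra gap
exp_el = wkb_exponent(m=9.11e-31, V_minus_E=5 * 1.602e-19, L=0.1e-9)
print(f"electron: factor of exp(-{exp_el:.1f}) per 0.1 nm")  # ~ e^-2.3
```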

However, quantum tunneling is very much a part of your life.  Protons can tunnel through the repulsion of their positive charges to bind to each other – that’s what powers the sun.  Electrons routinely tunnel in zillions of chemical reactions going on in your body right now, as well as in the photosynthesis process that drives most plant life. 

On a more technological note, tunneling is a key ingredient in the physics of flash memory.  Flash is based on field-effect transistors, and as I described the other day, transistors are switched on or off depending on the voltage applied to a gate electrode.  Flash storage uses transistors with a “floating gate”, a conductive island surrounded by insulating material, some kind of glassy oxide.  Charge can be parked on that gate or removed from it, and depending on the amount of charge there, the underlying transistor channel is either conductive or not.   How does charge get on or off the island?  By a flavor of tunneling called field emission.  The insulator around the floating gate functions as a potential energy barrier for electrons.  If a big electric field is applied via some other electrodes, the barrier’s shape is distorted, allowing electrons to tunnel through it efficiently.  This is a tricky aspect of flash design.  The barrier has to be high/thick enough that charge stuck on the floating gate can stay there a very long time - you wouldn’t want the bits in your SSD or your flash drive losing their status on the timescale of months, right? - but ideally tunable enough that the data can be rewritten quickly, with low error rates, at low voltages.
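The "distorted barrier" picture can be made quantitative with the standard triangular-barrier (Fowler-Nordheim-type) WKB exponent, which shows how steeply the tunneling probability turns on with field.  The 3 eV barrier height below is just an assumed, oxide-ish number for illustration:

```python
import math

# WKB exponent for tunneling through a triangular barrier of height phi
# tilted by a field E: exponent = 4*sqrt(2m)*phi^(3/2) / (3*hbar*e*E).
hbar, m_e, e = 1.055e-34, 9.11e-31, 1.602e-19

def fn_exponent(phi_eV, E_field):
    """Fowler-Nordheim-type suppression exponent; phi in eV, E in V/m."""
    phi = phi_eV * e
    return 4 * math.sqrt(2 * m_e) * phi ** 1.5 / (3 * hbar * e * E_field)

for E in (1e8, 5e8, 1e9):   # applied fields in V/m
    print(f"E = {E:.0e} V/m  ->  T ~ exp(-{fn_exponent(3.0, E):.0f})")
```

A factor of ten in field takes the exponent from hundreds down to tens - which is exactly the sharp on/off behavior you want for writing bits quickly while retaining them for years.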

Monday, May 21, 2018

Physics around you: the field-effect transistor

While dark matter and quantum gravity routinely get enormous play in the media, you are surrounded every day by physics that enables near miraculous technology.  Paramount among these is the field-effect transistor (FET).   That wikipedia link is actually pretty good, btw.  While I've written before about specific issues regarding FETs (here, here, here), I hadn't said much about the general device.

The idea of the FET is to use a third electrode, a gate, to control the flow of current through a channel between two other electrodes, the source and drain.  The electric field from the gate controls the mobile charge in the channel - this is the field effect.   You can imagine doing this in vacuum, with a hot filament to be a source of electrons, a second electrode (at a positive voltage relative to the source) to collect the electrons, and an intervening grid as the gate.  Implementing this in the solid state was proposed more than once (Lilienfeld, Heil) before it was done successfully. 

Where is the physics?  There is a ton of physics involved in how these systems actually work.  For example, it's all well and good to talk about "free" electrons moving around in solids in analogy to electrons flying in space in a vacuum tube, but it's far from obvious that you should be able to do this.   Solids are built out of atoms and are inherently quantum mechanical, with particular allowed energies and electronic states picked out by quantum mechanics and symmetries.  The fact that allowed electronic states in periodic solids ("Bloch waves") resemble "free" electron states (plane waves, in the quantum context) is very deep and comes from the underlying symmetry of the material.  [Note that you can have transistors even when the charge carriers should be treated as hopping from site to site - that's how many organic FETs work.]  It's the Pauli principle that allows us to worry only about the highest energy electronic states and not have to worry about, e.g., the electrons deep down in the ion cores of the atoms in the material.  Still, you do have to make sure there aren't a bunch of electronic states at energies where you don't want them - these are the traps and surface states that made FETs hard to get working.  The combo of the Pauli principle and electrostatic screening is why we can largely ignore the electron-electron repulsion in the materials, but still use the gate electrode's electric field to affect the channel.  FETs have also been great tools for learning new physics, as in the quantum Hall effect.

What's the big deal?  When you have a switch that is either open or closed, it's easy to realize that you can do binary-based computing with a bunch of them.  The integrated manufacturing of the FET has changed the world.  It's one of the few examples of a truly disruptive technology in the last 100 years.  The device you're using to read this probably contains several billion (!) transistors, and they pretty much all work, for years at a time.  FETs are the underlying technology for both regular and flash memory.  FETs are what drive the pixels in the flat panel display you're viewing.  Truly, they are so ubiquitous that they've become invisible.

Wednesday, May 16, 2018

"Active learning" or "research-based teaching" in upper level courses

This past spring Carl Wieman came to Rice's Center for Teaching Excellence, to give us this talk, about improving science pedagogy.  (This video shows a very similar talk given at UC Riverside.) He is very passionate about this, and argues strongly that making teaching more of an active, inquiry-based or research-question-based experience is generally a big improvement over traditional lecture.  I've written previously that I think this is a complicated issue. 

Does anyone in my readership have experience applying this approach to upper-level courses?  For a specific question relevant to my own teaching, have any of you taught or taken a statistical physics course presented in this mode?  I gather that PHYS 403 at UBC and PHYS 170 at Stanford have been done this way.  I'd be interested in learning about how that was implemented and how it worked - please feel free to post in comments or email me.

(Now that the semester is over and some of my reviewing responsibilities are more under control, the frequency of posting should go back up.)

Wednesday, May 02, 2018

Short items

A couple of points of interest:
  • Bill Gates apparently turned down an offer from the Trump administration to be presidential science advisor.  It's unclear if this was a serious offer or an off-hand remark.   Either way it underscores what a trivialized and minimal role OSTP appears to be playing in the present administration.  It's a fact of modern existence that there are many intersections between public policy and the need for technical understanding of scientific issues (in the broad sense that includes engineering).   While an engaged and highly functional OSTP doesn't guarantee good policy (because science is only one of many factors that drive decision-making), the US is doing itself a disservice by running a skeleton crew in that office.  
  • Phil Anderson has posted a document (not a paper submitted for publication anywhere, but more of an essay) on the arxiv with the sombre title, "Four last conjectures".  These concern: (1) the true ground state of solids made of atoms that are hard-core bosons, suggesting that at sufficiently low temperatures one could have "non-classical rotational inertia" - not exactly a supersolid, but similar in spirit; (2) a discussion of a liquid phase of (magnetic) vortices in superconductors in the context of heat transport; (3) an exposition of his take on high temperature superconductivity (the "hidden Fermi liquid"), where one can have non-Fermi-liquid scattering rates for longitudinal resistivity, yet Fermi liquid-like scattering rates for scattering in the Hall effect; and (4) a speculation about an alternative explanation (that, in my view, seems ill-conceived) for the accelerating expansion of the universe.   The document is vintage Anderson, and there's a melancholy subtext given that he's 94 years old and is clearly conscious that he likely won't be with us much longer.
  • On a lighter note, a paper (link goes to publicly accessible version) came out a couple of weeks ago explaining how yarn works - that is, how the frictional interactions between a zillion constituent short fibers lead to thread acting like a mechanically robust object.  Here is a nice write-up.

Sunday, April 29, 2018

What is a quantum point contact? What is quantized conductance?

When we teach basic electrical phenomena to high school or college physics students, we usually talk about Ohm's Law, in the form \(V = I R\), where \(V\) is the voltage (how much effort it takes to push charge, in some sense), \(I\) is the current (the flow rate of the charge), and \(R\) is the resistance.  This simple linear relationship is a good first guess about how you might expect conduction to work.  Often we know the voltage and want to find the current, so we write \(I = V/R\), and the conductance is defined as \(G \equiv 1/R\), so \(I = G V\). 

In a liquid flow analogy, voltage is like the net pressure across some pipe, current is like the flow rate of liquid through the pipe, and the conductance characterizes how the pipe limits the flow of liquid.  For a given pressure difference between the ends of the pipe, there are two ways to lower the flow rate of the liquid:  make the pipe longer, and make the pipe narrower.  The same idea applies to electrical conductance of some given material - making the material longer or narrower lowers \(G\) (increases \(R\)).   

Does anything special happen when the conductance becomes small?  What does "small" mean here - small compared to what?  (Physicists love dimensionless ratios, where you compare some quantity of interest with some characteristic scale - see here and here.  I thought I'd written a long post about this before, but according to google I haven't; something to do in the future.)  It turns out that there is a combination of fundamental constants that has the same units as conductance:  \(e^2/h\), where \(e\) is the electronic charge and \(h\) is Planck's constant.  Interestingly, evaluating this numerically gives a characteristic conductance of about 1/(26 k\(\Omega\)).   The fact that \(h\) is in there tells you that this conductance scale is important if quantum effects are relevant to your system (not when you're in the classical limit of, say, a macroscopic, long spool of wire that happens to have \(R \sim 26~\mathrm{k}\Omega\)).   
Example of a quantum point contact, from here.

Conductance quantization can happen when you make the conductance approach this characteristic magnitude by having the conductor be very narrow, comparable to the spatial spread of the quantum mechanical electrons.  We know electrons are really quantum objects, described by wavefunctions, and those wavefunctions can have some characteristic spatial scale depending on the electronic energy and how tightly the electron is confined.  You can then think of the connection between the two conductors like a waveguide, so that only a handful of electronic "modes" or "channels" (compatible with the confinement of the electrons and what the wavefunctions are required to do) actually link the two conductors.  (See figure.) Each spatial electronic mode that connects between the two sides has a conductance of \(G_{0} \equiv 2e^{2}/h\), where the 2 comes from the two possible spin states of the electron.  
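For reference, plugging in the numbers is trivial but handy:

```python
# Numerical values of the characteristic conductance scales.
e = 1.602e-19        # electron charge, C
h = 6.626e-34        # Planck's constant, J s

G = e ** 2 / h       # "quantum" conductance scale
G0 = 2 * G           # one spin-degenerate channel
print(f"e^2/h   ~ 1/({1 / G / 1000:.1f} kOhm)")     # ~ 1/(25.8 kOhm)
print(f"2e^2/h  ~ 1/({1 / G0 / 1000:.1f} kOhm)")    # ~ 1/(12.9 kOhm)
```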

Conductance quantization in a 2d electron system, from here.
A junction like this in a semiconductor system is called a quantum point contact.  In semiconductor devices you can use gate electrodes to confine the electrons, and when the constriction reaches the appropriate spatial scale you can see steps in the conductance near integer multiples of \(G_{0}\), the conductance quantum.  A famous example of this is shown in the figure here.  

In metals, because the density of (mobile) electrons is very high, the effective wavelength of the electrons is much shorter, comparable to the size of an atom, a fraction of a nanometer.  This means that constrictions between pieces of metal have to reach the atomic scale to see anything like conductance quantization.  This is, indeed, observed.
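A quick free-electron estimate backs this up; copper's conduction-electron density is the only input, and the free-electron model itself is the approximation:

```python
import math

# Free-electron estimate of the Fermi wavelength in copper:
# k_F = (3*pi^2*n)^(1/3), lambda_F = 2*pi/k_F.
n_cu = 8.5e28                          # conduction-electron density of Cu, m^-3
k_F = (3 * math.pi ** 2 * n_cu) ** (1 / 3)
lam_F = 2 * math.pi / k_F
print(f"lambda_F ~ {lam_F * 1e9:.2f} nm")    # a fraction of a nanometer
```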

For a very readable review of all of this, see this Physics Today article by two of the experimental progenitors of this.  Quantized conductance shows up in other situations when only a countable number of electronic states are actually doing the job of carrying current (like along the edges of systems in the quantum Hall regime, or along the edges of 2d topological materials, or in carbon nanotubes).   

Note 1:  It's really the "confinement so that only a few allowed waves can pass" that gives the quantization here.  That means that other confined wave systems can show the analog of this quantization.  This is explained in the PT article above, and an example is conduction of heat due to phonons.

Note 2:  What about when \(G\) becomes comparable to \(G_{0}\) in a long, but quantum mechanically coherent system?  That's a story for another time, and gets into the whole scaling theory of localization.  

Wednesday, April 25, 2018

Postdoc opportunity

While I have already spammed a number of faculty colleagues about this, I wanted to point out a competitive, endowed postdoctoral opportunity at Rice, made possible through the Smalley-Curl Institute.  (I am interested in hiring a postdoc in general, but the endowed opportunity is a nice one to pursue as well.)

The endowed program is the J Evans Attwell Welch Postdoctoral Fellowship.  This is a competitive, two-year fellowship, which additionally includes travel funds and research supplies/minor equipment resources.  The deadline for applications is July 1, 2018, with an anticipated start date around September 2018.  

I'd be delighted to work with someone on an application for this, and I am looking for a great postdoc in any case.  The best applicant would be a strong student who is interested in working on (i) noise and transport measurements in spin-orbit systems including 2d TIs; (ii) nanoscale studies (incl noise and transport) of correlated materials and non-Fermi liquids; and/or (iii) combined electronic and optical studies down to the molecular scale via plasmonic structures.  If you're a student finishing up and are interested, please contact me, and if you're a faculty member working with possible candidates, please feel free to point out this opportunity.


Saturday, April 21, 2018

The Einstein-de Haas effect

Angular momentum in classical physics is a well-defined quantity tied to the motion of mass about some axis - its value (magnitude and direction) is tied to a particular choice of coordinates.  When we think about some extended object spinning around an axis with some angular velocity \(\mathbf{\omega}\), we can define the angular momentum associated with that rotation by \(\mathbf{I}\cdot \mathbf{\omega}\), where \(\mathbf{I}\) is the "inertia tensor" that keeps track of how mass is distributed in space around the axis.  In general, conservation of angular momentum in isolated systems is a consequence of the rotational symmetry of the laws of physics (Noether's theorem). 

The idea of quantum particles possessing some kind of intrinsic angular momentum is a pretty weird one, but it turns out to be necessary to understand a huge amount of physics.  That intrinsic angular momentum is called "spin", but it's *not* correct to think of it as resulting from the particle being an extended physical object actually spinning.  As I learned from reading The Story of Spin (cool book by Tomonaga, though I found it a bit impenetrable toward the end - more on that below), Kronig first suggested that electrons might have intrinsic angular momentum and used the intuitive idea of spinning to describe it; Pauli pushed back very hard on Kronig about the idea that there could be some physical rotational motion involved - the intrinsic angular momentum is some constant on the order of \(\hbar\).  If it were the usual mechanical motion, dimensionally this would have to go something like \(m r v\), where \(m\) is the mass, \(r\) is the size of the particle, and \(v\) is a speed; as \(r\) gets small, like even approaching a scale we know to be much larger than any intrinsic size of the electron, \(v\) would exceed \(c\), the speed of light.  Pauli pounded on Kronig hard enough that Kronig didn't publish his ideas, and two years later Goudsmit and Uhlenbeck established intrinsic angular momentum, calling it "spin".
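Pauli's dimensional objection is easy to reproduce numerically; the electron "radius" below is a deliberately generous assumption, much larger than any experimental bound on electron structure:

```python
# If the electron's ~hbar/2 of spin angular momentum came from literal
# rotation (L ~ m*r*v), the required speed wildly exceeds c even for a
# generous guess at an electron "size".
hbar = 1.055e-34   # J s
m_e = 9.11e-31     # kg
c = 3.0e8          # m/s
r = 1e-15          # hypothetical electron radius, m (an assumption)

v = (hbar / 2) / (m_e * r)
print(f"required v ~ {v:.1e} m/s, i.e. ~{v / c:.0f} times c")
```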

Because of its weird intrinsic nature, when we teach undergrads about spin, we often don't emphasize that it is just as much angular momentum as the classical mechanical kind.  If you somehow do something to a system containing a bunch of spins, that can have mechanical consequences.  I've written about one example before, a thought experiment described by Feynman and approximately implemented in micromechanical devices.  A related concept is the Einstein-de Haas effect, where flipping spins again exerts some kind of mechanical torque.  A new preprint on the arxiv shows a cool implementation of this, using ultrafast laser pulses to demagnetize a ferromagnetic material.  The sudden change of the spin angular momentum of the electrons results, through coupling to the atoms, in the launching of a mechanical shear wave as the angular momentum is dumped into the lattice.   The wave is then detected by time-resolved x-ray measurements.  Pretty cool!

(The part of Tomonaga's book that was hard for me to appreciate deals with the spin-statistics theorem, the quantum field theory statement that fermions have spins that are half-integer multiples of \(\hbar\) while bosons have spins that are integer multiples.  There is a claim that even Feynman could not come up with a good undergrad-level explanation of the argument.  Have any of my readers ever come across a clear, accessible hand-wave proof of the spin-statistics theorem?)

Tuesday, April 10, 2018

Chapman Lecture: Using Topology to Build a Better Qubit

Yesterday, we hosted Prof. Charlie Marcus of the Niels Bohr Institute and Microsoft for our annual Chapman Lecture on Nanotechnology.   He gave a very fun, engaging talk about the story of Majorana fermions as a possible platform for topological quantum computing. 

Charlie used quipu to introduce the idea of topology as a way to store information, and made a very nice heuristic argument about how topology encodes information in a global rather than a local sense.  That is, if you have a big, loose tangle of string on the ground, and you do local measurements of little bits of the string, you really can't tell whether it's actually tied in a knot (topologically nontrivial) or just lying in a heap.  This hints at the idea that local interactions (measurements, perturbations) can't necessarily disrupt the topological state of a quantum system.

The talk was given a bit of a historical narrative flow, pointing out that while there had been a lot of breathless prose written about the long search for Majoranas, etc., in fact the timeline was actually rather compressed.  In 2001, Alexei Kitaev proposed a possible way of creating effective Majorana fermions, particles that encode topological information, using semiconductor nanowires coupled to a (non-existent) p-wave superconductor.   In this scheme, Majorana quasiparticles localize at the ends of the wire.  You can get some feel for the concept by imagining string leading off from the ends of the wire, say downward through the substrate and off into space.  If you could sweep the Majoranas around each other somehow, the history of that wrapping would be encoded in the braiding of the strings, and even if the quasiparticles end up back where they started, there is a difference in the braiding depending on the history of the motion of the quasiparticles.   Theorists got very excited about the braiding concept and published lots of ideas, including how one might do quantum computing operations by this kind of braiding.

In 2010, other theorists pointed out that it should be possible to implement the Majoranas in much more accessible materials - InAs semiconductor nanowires and conventional s-wave superconductors, for example.  One experimental signature to look for would be a peak in the tunneling conductance of a normal metal/nanowire/superconductor device, right at zero voltage, that should turn on above a threshold magnetic field (in the plane of the wire).  That's really what jumpstarted the experimental action.  Fast forward a couple of years, and you have a paper that got a ton of attention, reporting the appearance of such a peak.  I pointed out at the time that the peak alone is not proof, but it's suggestive.  You have to be very careful, because other physics can mimic some aspects of the expected Majorana signature in the data.

A big advance was the recent success in growing epitaxial Al on the InAs wires.  Having atomically precise lattice registry between the semiconductor and the aluminum appears to improve the contacts significantly.   Note that this can be done in 2d as well, opening up the possibility of many investigations into proximity-induced superconductivity in gate-able semiconductor devices.  This has enabled some borrowing of techniques from other quantum computing approaches (transmons).

The main take-aways from the talk:

  • Experimental progress has actually been quite rapid, once a realistic material system was identified.
  • While many things point to these platforms as really having Majorana quasiparticles, the true unambiguous proof in the form of some demonstration of non-Abelian statistics hasn't happened yet.  Getting close.
  • Like many solid-state endeavors before, the true enabling advances here have come from high quality materials growth.
  • If this does work, scale-up may actually be do-able, since this does rely on planar semiconductor fabrication for the most part, and topological qubits may have a better ratio of physical qubits to logical qubits than other approaches.
  • Charlie Marcus remains an energetic, engaging speaker, something I first learned when I worked as the TA for the class he was teaching 24 years ago. 

Thursday, March 29, 2018

E-beam evaporators - recommendations?

Condensed matter experimentalists often need to prepare nanoscale thickness films of a variety of materials.  One approach is to use "physical vapor deposition" - in a good vacuum, a material of interest is heated to the point where it has some nonzero vapor pressure, and that vapor collides with a substrate of interest and sticks, building up the film.  One way to heat source material is with a high voltage electron beam, the kind of thing that used to be used at lower intensities to excite the phosphors on old-style cathode ray tube displays.  

My Edwards Auto306 4-pocket e-beam system is really starting to show its age.  It's been a great workhorse for quick things that don't require the cleanroom.  Does anyone out there have recommendations for a system (as inexpensive as possible of course) with similar capabilities, or a vendor you like for such things?  

Wednesday, March 28, 2018

Discussions of quantum mechanics

In a sure sign that I'm getting old, I find myself tempted to read some of the many articles, books, and discussions about interpretations of quantum mechanics that seem to be flaring up in number these days.  (Older physicists seem to return to this topic, I think because there tends to be a lingering feeling of dissatisfaction with just about every way of thinking about the issue.)

To be clear, the reason people refer to interpretations of quantum mechanics is that, in general, there is no disagreement about the results of well-defined calculations, and no observed disagreement between such calculations and experiments.   

There are deep ontological questions here about what physicists mean by something (say the wavefunction) being "real".  There are also fascinating history-of-science stories that capture the imagination, with characters like Einstein criticizing Bohr about whether God plays dice, Schroedinger and his cat, Wigner and his friend, Hugh Everett and his many worlds, etc.  Three of the central physics questions are:
  • Quantum systems can be in superpositions.  We don't see macroscopic quantum superpositions, even though "measuring" devices should also be described using quantum mechanics.  Is there some kind of physical process at work that collapses superpositions and is not described by the ordinary Schroedinger equation?   
  • What picks out the classical states that we see?  
  • Is the Born rule a consequence of some underlying principle, or is that just the way things are?
Unfortunately real-life is very busy right now, but I wanted to collect some recent links and some relevant papers in one place, if people are interested.

From Peter Woit's blog, I gleaned these links:
Going down the Google Scholar rabbit hole, I also found these:
  • This paper has a clean explication of the challenge in whether decoherence due to interactions with large numbers of degrees of freedom really solves the outstanding issues.
  • This is a great review by Zurek about decoherence.
  • This is a subsequent review looking at these issues.
  • And this is a review of "collapse theories", attempts to modify quantum mechanics beyond Schroedinger time evolution to kill superpositions.
No time to read all of these, unfortunately.

Wednesday, March 14, 2018

Stephen Hawking, science communicator

An enormous amount has already been written by both journalists and scientists (here too) on the passing of Stephen Hawking.  Clearly he was an incredibly influential physicist with powerful scientific ideas.  Perhaps more important in the broad scheme of things, he was a gifted communicator who spread a fascination with science to an enormous audience, through his books and through the careful, clever use of his celebrity (as here, here, here, and here).   

While his illness clearly cost him dearly in many ways, I don't think it's too speculative to argue that it was a contributor to his success as a popularizer of science.  Not only was he a clear, expository writer with a gift for conveying a sense of the beauty of some deep ideas, but he was in some ways a larger-than-life heroic character - struck down physically in the prime of life, but able to pursue exotic, foundational ideas through the sheer force of his intellect.   Despite taking on some almost mythic qualities in the eyes of the public, he also conveyed that science is a human endeavor, pursued by complicated, interesting people (willing to do things like place bets on science, or even reconsider their preconceived ideas).

Hawking showed that both science and scientists can be inspiring to a broad audience.  It is rare that top scientists are able to do that, through a combination of their skill as communicators and their personalities.  In physics, besides Hawking the ones that best spring to mind are Feynman (anyone who can win a Nobel and also have their anecdotes described as the Adventures of a Curious Character is worth reading!) and Einstein.   

Sometimes there's a bias that gifted science communicators who care about public outreach are self-aggrandizing publicity hounds and not necessarily serious intellects (not that the two have to be mutually exclusive).  The outpouring of public sympathy on the occasion of Hawking's passing shows how deep an impact he had on so many.  Informing and inspiring people is a great legacy, and hopefully more scientists will be successful on that path thanks to Hawking.   



Wednesday, March 07, 2018

APS March Meeting, day 3 and summary thoughts

Besides the graphene bilayer excitement, three other highlights from today:

David Cobden of the University of Washington gave a very nice talk about 2d topological insulator response of 1T'-WTe2.  Many of the main results are in this paper (arxiv link).    This system in the single-layer limit has very clear edge conduction while the bulk of the 2d layer is insulating, as determined by a variety of transport measurements.  There are also new eye-popping scanning microwave impedance microscopy results from Yongtao Cui's group at UC Riverside that show fascinating edge channels, indicating tears and cracks in the monolayer material that are otherwise hard to see. 

Steve Forrest of the University of Michigan gave a great presentation about "How Organic Light Emitting Diodes Revolutionized Displays (and maybe lighting)".  The first electroluminescent organic LED was reported about thirty years ago, and it had an external quantum efficiency of about 1%, limited by two main issues.  First, when an electron and a hole come together in the device, they only have a 1-in-4 chance of producing a singlet exciton, the kind that can readily decay radiatively.  Second, it isn't trivial to get light out of such a device because of total internal reflection.  By adding the right kind of strong spin-orbit-coupling molecule, it is possible to harvest those triplets radiatively as well and thus get nearly 100% internal quantum efficiency.  In real devices, there can be losses due to light trapped in waveguided modes, but you can create special substrates to couple that light into the far field.  Similarly, you can create modified substrates to avoid losses due to unintentional plasmon modes.  The net result is that you can have OLEDs with about 70% external quantum efficiencies.   OLED displays are a big deal - the global market was about $20B/yr in 2017, and they will likely displace LCD displays.  OLED-based lighting is also on the way.  It's an amazing technology, and the industrial scale-up is very impressive.
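A hedged back-of-the-envelope sketch of those efficiency numbers.  The refractive index and the classic ray-optics outcoupling estimate \(1/(2n^{2})\) are textbook values I'm assuming here, not figures quoted in the talk:

```python
# Rough model: external quantum efficiency (EQE) ~ (internal QE) x (outcoupling).
# All numbers below are textbook-style assumptions, not from Forrest's talk.
def eqe(singlet_fraction, radiative_yield, outcoupling):
    return singlet_fraction * radiative_yield * outcoupling

n = 1.7                            # typical organic-layer refractive index (assumption)
outcoupling_flat = 1 / (2 * n**2)  # ray-optics estimate for a flat device, ~0.17

# Fluorescent emitter: only the 1-in-4 singlets emit.
print(f"fluorescent EQE ~ {eqe(0.25, 1.0, outcoupling_flat):.2f}")
# Phosphorescent emitter (strong spin-orbit coupling): triplets harvested too.
print(f"phosphorescent EQE ~ {eqe(1.0, 1.0, outcoupling_flat):.2f}")
# Early real devices did worse (~1%) because of additional non-radiative losses;
# substrate engineering to beat the 1/(2n^2) outcoupling limit is how you get to ~70%.
```

This makes the logic of the talk concrete: triplet harvesting buys you a factor of four, and the rest of the road to 70% is all about getting photons out of the stack.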

Barry Stipe from Western Digital also gave a neat talk about the history and present state of the hard disk drive.  Despite the growth of flash memory, 90% of all storage in cloud data centers remains on magnetic hard disks, for capacity and cost.  The numbers are really remarkable.  If you scaled all the parts of a hard drive up by a factor of a million, the disk platter would be 95 km in diameter, a bit would be about the size of your finger, the read head would be flying above the surface at an altitude of 4 mm, and to achieve the same data rate the head would have to be moving at 0.1 c.  I hadn't realized that they now hermetically seal the drives and fill them with He gas.  The He is an excellent thermal conductor for cooling, and because its density is about 1/7 that of air, the Reynolds number is lower at a given speed, meaning less turbulence, meaning they can squeeze additional, thinner platters into the drive housing.  Again, an amazing amount of science and physics, plus incredible engineering.
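The scale-up arithmetic is easy to check.  The drive parameters below (95 mm platters, ~4 nm flying height, ~30 m/s head-media speed) are standard hard-drive figures I'm assuming, not numbers quoted from the talk:

```python
# Sanity-checking the "scale everything up by a million" picture.
scale = 1e6
platter_d = 95e-3    # m; 3.5" drives use 95 mm platters (assumption)
fly_height = 4e-9    # m; modern read-head flying height (assumption)
head_speed = 30.0    # m/s; typical head-media relative speed (assumption)
c = 3e8              # m/s, speed of light

print(f"scaled platter diameter: {platter_d * scale / 1e3:.0f} km")   # 95 km
print(f"scaled flying altitude:  {fly_height * scale * 1e3:.0f} mm")  # 4 mm
print(f"scaled head speed:       {head_speed * scale / c:.2f} c")     # 0.10 c
```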

Some final thoughts (as I can't stay for the rest of the meeting):

  • In the old days, some physicists seemed to generate an intellectual impression by cultivating resemblance to Einstein.  Now, some physicists try to generate an intellectual impression by cultivating resemblance to Paul McEuen.
  • After many years of trying, the APS WiFi finally works properly and well!  
  • This was the largest March Meeting ever (~ 12000 attendees).  This is a genuine problem, as the meeting is growing by several percent per year, and this isn't sustainable, especially in terms of finding convention centers and hotels that can host.  There are serious discussions about what to do about this in the long term - don't be surprised if a survey is sent to some part of the APS membership about this.

Superconductivity in graphene bilayers - why is this exciting and important

As I mentioned here, the big story of this year's March Meeting is the report, in back-to-back Nature papers this week (arxiv pdf links in this sentence), of both Mott insulating behavior and superconductivity in twisted graphene bilayers.  I will post more here later today after seeing the actual talk (see below for some updates), but for now, let me give the FAQ-style report.  Skip to the end for the two big questions:
Moire pattern from twisted bilayer
graphene, image from NIST.

  • What's the deal with graphene?  Graphene is the name for a single sheet of graphite - basically an atomically thin hexagonal chickenwire lattice of carbon atoms.  See here and here.  Graphene is the most popular example of an enormous class of 2d materials.  The 2010 Nobel Prize in physics was awarded for the work that really opened up that whole body of materials for study by the physics community.  Graphene has some special electronic properties:  It can easily support either electrons or holes (effectively positively charged absences of electrons) for conduction (unlike a semiconductor, it has no energy gap; it's a semimetal rather than a metal), and the relationship between kinetic energy and momentum of the charge carriers looks like what you see for massless relativistic things in free space (like light).
  • What is a bilayer?  Take two sheets of graphene and place one on top of the other.  Voila, you've made a bilayer.  The two layers talk to each other electronically.  In ordinary graphite, the layers are stacked in a certain way (Bernal stacking), and a Bernal bilayer acts like a semiconductor.  If you twist the two layers relative to each other, you end up with a Moire pattern (see image) so that along the plane, the electrons feel some sort of periodic potential.
  • What is gating?  It is possible to add or remove charge from the graphene layers by using an underlying or overlying electrode - this is the same mechanism behind the field effect transistors that underpin all of modern electronics.
  • What is actually being reported? If you have really clean graphene and twist the layers relative to each other just right ("magic angle"), the system becomes very insulating when you have just the right number of charge carriers in there.  If you add or remove charge away from that insulating regime, the system apparently becomes superconducting at a temperature below 1.7 K.
  • Why is the insulating behavior interesting?  It is believed that the insulating response in the special twisted case is because of electron-electron interactions - a Mott insulator.  Think about one of those toys with sliding tiles.  You can't park two tiles in the same location, so if there is no open location, the whole set of tiles locks in place.  Mott insulators usually involve atoms that contain d electrons, like NiO or the parent compounds of the high temperature copper oxide superconductors.  Mott response in an all carbon system would be realllllly interesting.  
  • Why is the superconductivity interesting?  Isn't 1.7 K too cold to be useful?  The idea of superconductivity-near-Mott has been widespread since the discovery of high-Tc in 1987.  If that's what's going on here, it means we have a new, highly tunable system to try to understand how this works.  High-Tc remains one of the great unsolved problems in (condensed matter) physics, and insights gained here have the potential to guide us toward greater understanding and maybe higher temperatures in those systems.  
  • Why is this important?  This is a new, tunable, controllable system to study physics that may be directly relevant to one of the great open problems in condensed matter physics.  This may be generalizable to the whole zoo of other 2d materials as well. 
  • Why should you care?  It has the potential to give us deep understanding of high temperature superconductivity.  That could be a big deal.  It's also just pretty neat.  Take a conductive sheet of graphene, and another conducting sheet of graphene, and if you stack them juuuuuust right, you get an insulator or a superconductor depending on how many charge carriers you stick in there.  Come on, that's just wild.
Update:  A few notes from seeing the actual talk.
  • Pablo painted a picture:  In the cuprates, the temperature (energy) scale is hundreds of Kelvin, and the size scale associated with the Mott insulating lattice is fractions of a nm (the spacing between Cu ions in the CuO2 planes).  In ultracold atom optical lattice attempts to look at Mott physics, the temperature scale is nK (and cooling is a real problem), while the spatial scale between sites is more like a micron.  In the twisted graphene bilayers, the temperature scale is a few K, and the spatial scale is about 13.4 nm (for the particular magic angle they use).
  • The way to think about what the twist does:  In real space, it creates a triangular lattice of roughly Bernal-stacked regions (the lighter parts of the Moire pattern above).  In reciprocal space, the Dirac cones at the K and K' points of the two lattices become separated by an amount given by \(k_{\theta} \approx K \theta\), where \(\theta\) is the twist angle, and we've used the small angle approximation.  When you do that and turn on interlayer coupling, you hybridize the bands from the upper and lower layers.  This splits off the parts of the bands that are close in energy to the Dirac point, and at the magic angles those bands can be very, very flat (bandwidths of ~10 meV, as opposed to the multiple eV of the full untwisted bands).  Flat bands = tendency to localize.   The Mott phase then happens if you park exactly one carrier (one hole, for the superconducting states in the paper) per Bernal-patch site.  
  • Most persuasive reasons they think it's really a Mott insulating state and not something else, besides the fact that it happens right at half-filling of the twist-created triangular lattice:  Changing the angle by a fraction of a degree gets rid of the insulating state, and applying a magnetic field (in plane or perpendicular) makes the system become metallic, which is the opposite of what tends to happen in other insulating situations.  (Generally magnetic fields tend to favor localization.)
  • They see spectroscopic evidence that the important number of effective carriers is determined not by the total density, but by how far away they gate the system from half-filling.
  • At the Mott/superconducting border, they see what looks like Josephson-junction response, as if the system breaks up into superconducting regions separated by weak links.  
  • The ratio of superconducting Tc to the Fermi temperature is about 0.5, which makes this about as strongly coupled (and therefore likely to be some weird unconventional superconductor) as you ever see.
  • Pablo makes the point that this could be very general - for any combo of van der Waals layered materials, there are likely to be magic angles.  Increasing the interlayer coupling increases the magic angle, and could then increase the transition temperature.
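For the numerically inclined, the moiré geometry above is easy to check.  Assuming the standard graphene lattice constant of 0.246 nm (a textbook value, not a number from the talk), the moiré period at the 1.05-degree magic angle comes out right at the quoted 13.4 nm:

```python
import math

a = 0.246                    # nm, graphene lattice constant (assumed textbook value)
theta = math.radians(1.05)   # the magic angle used in the papers

# Moire superlattice period for a small twist: lambda = a / (2 sin(theta/2))
lam = a / (2 * math.sin(theta / 2))
print(f"moire period ~ {lam:.1f} nm")        # ~13.4 nm, matching the talk

# Separation of the two layers' Dirac cones in reciprocal space: k_theta ~ K * theta
K = 4 * math.pi / (3 * a)    # magnitude of the K-point wavevector, nm^-1
k_theta = K * theta
print(f"k_theta ~ {k_theta:.3f} nm^-1")
```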
Comments by me:
  • This is very exciting, and has great potential.  Really nice work.
  • I wonder what would happen if they used graphite as a gate material rather than a metal layer, given what I wrote here.   It should knock the disorder effects down a lot, and given how flat the bands are, that could really improve things.
  • There are still plenty of unanswered questions.  Why does the superconducting state seem more robust on the hole side of charge neutrality as well as on the hole side of half-filling?  This system is effectively a triangular lattice - that's a very different beast than the square lattice of the cuprates or the pnictides.  That has to matter somehow.  Twisting other 2d materials (square lattice MXenes?) could be very interesting.
  • I predict there will be dozens of theory papers in the next two months trying to predict magic twist angles for a whole zoo of systems.

APS March Meeting 2018, day 2

Day 2 of the meeting was even more scattered than usual for me, because several of my students were giving talks, all in different sessions spread around.  That meant I didn't have a chance to stay too long on any one topic.   A few highlights:

Jeff Urban from LBL gave an interesting talk about different aspects of the connection between electronic transport and thermal transport.  The Wiedemann-Franz relationship is a remarkably general expression based on a simple idea - when charge carriers move, they transport some (thermal) energy as well as charge, so the thermal conductivity and the electrical conductivity should be proportional to each other.  There are a bunch of assumptions that go into the serious derivation, though, and you can imagine scenarios where you'd expect large deviations from W-F response, particularly if the scattering rates of the carriers have some complicated energy dependence.  Urban spoke about hybrid materials (e.g., mixtures of inorganic components and conducting polymers).  He then pointed out a paper I'd somehow missed last year about apparent W-F violation in the metallic state of vanadium dioxide.  VO2 is a "bad metal", with an anomalously low electrical conductivity.  Makes me wonder how W-F fares in other badly metallic systems.
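For concreteness, here is what the Wiedemann-Franz relationship predicts quantitatively: \(\kappa/\sigma = L_{0} T\), with the Sommerfeld value \(L_{0} = (\pi^{2}/3)(k_{B}/e)^{2}\).  The copper conductivity below is a textbook value I'm using purely for illustration:

```python
import math

# Sommerfeld value of the Lorenz number, L0 = (pi^2/3)(k_B/e)^2
k_B = 1.380649e-23    # J/K
e = 1.602176634e-19   # C
L0 = (math.pi**2 / 3) * (k_B / e)**2
print(f"L0 = {L0:.3e} W Ohm / K^2")   # ~2.44e-8

# Example: copper at room temperature (textbook sigma, an assumption here)
sigma = 5.96e7   # S/m
T = 300.0        # K
kappa = L0 * sigma * T
print(f"predicted kappa ~ {kappa:.0f} W/(m K)")  # ~437, close to the measured ~400
```

The interest in VO2 and other bad metals is precisely that this simple proportionality can fail badly when the quasiparticle picture behind it breaks down.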

Ali Hussain of the Abbamonte group at Illinois gave a nice talk about (charge) density fluctuations in the strange metal phase (and through the superconducting transition) of the copper oxide superconductor BSCCO.  The paper is here.  They use a particular technique (momentum-resolved electron energy loss spectroscopy) and find that it is peculiarly easy to create particle-hole excitations over a certain low energy range in the material, almost regardless of the momentum of those excitations.  There are also systematics in how this works as a function of doping (carrier concentration in the material), with optimally doped material having a particularly temperature-independent response. 

Albert Fert spoke about spin-Hall physics, and the conversion of spin currents into charge currents and vice versa.  One approach is the inverse Edelstein effect (IEE).  You have a stack of materials, with a ferromagnetic layer on top.  By driving the ferromagnetic layer into FMR, you can pump a spin current vertically downward (say) into the stack.  Then, because of Rashba spin-orbit coupling, that vertical spin current can drive a lateral charge current (leading to the buildup of a lateral voltage) in a two-dimensional electron gas living at an interface in the stack.  One can use the interface between Bi and Ag (see here).  One can get better results if there is an insulating spacer to keep free conduction electrons not at the interface from interfering, as in LAO/STO structures.  Neat stuff, and it helped clarify for me the differences between the inverse spin Hall effect (3d charge current from 3d spin current) and the IEE (2d charge current from 3d spin current). 

Alexander Govorov of Ohio also gave a nice presentation about the generation of "hot" electrons from the excitation of plasmons.  Non-thermally distributed electrons and holes can be extremely useful for a variety of processes (energy harvesting, photocatalysis, etc.).  At issue is what the electronic distribution really looks like.  Relevant papers are here and here.  There was a nice short talk similar in spirit by Yonatan Dubi earlier in the day.



Monday, March 05, 2018

APS March Meeting 2018, day 1

As I explained yesterday, my trip to the APS is even more scattered than in past years, but I'll try to give some key points.  Because of meetings and discussions with some collaborators and old friends, I didn't really sit down and watch entire sessions, but I definitely saw and heard some interesting things.

Markus Raschke of Colorado gave a nice talk about the kinds of ultrafast and nonlinear spectroscopy you can do if you use a very sharp gold tip as a plasmonic waveguide.  The tip has a grating milled onto it a few microns away from the sharp end, so that hitting the grating with a pulsed IR laser excites a propagating surface plasmon mode that is guided down to the really sharp point.  One way to think about this:  When you use the plasmon mode to confine light down to a length scale \(\ell\) comparable to the radius of curvature of the sharp tip, then you effectively probe a wavevector \(k_{\mathrm{eff}} \sim 2\pi/\ell\).  If \(\ell\) is a couple of nm, then you're dealing with \(k_{\mathrm{eff}}\) values associated in free space with x-rays (!).  This lets you do some pretty wild optical spectroscopies.  Because the waveguiding is actually effective over a pretty broad frequency range, that means that you can get very short pulses down there, and the intense electric field can lead to electron emission, generating the shortest electron pulses in the world.  
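To put numbers on the \(k_{\mathrm{eff}}\) argument (the 2 nm confinement scale below is my assumed example, not a number from the talk): a free-space photon with that wavevector would have wavelength \(\ell\) and energy \(hc/\ell\), which lands squarely in the soft x-ray range.

```python
import math

# Confining light to a length scale l probes wavevector k_eff ~ 2*pi/l.
l = 2e-9   # m, assumed confinement scale (~tip radius of curvature)
k_eff = 2 * math.pi / l
print(f"k_eff ~ {k_eff:.2e} m^-1")

# A free-space photon with this wavevector has wavelength l and energy hc/l.
hc = 1239.84e-9   # eV*m (hc in convenient units)
E = hc / l
print(f"equivalent free-space photon energy ~ {E:.0f} eV")  # ~620 eV: soft x-ray
```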

Andrea Young of UCSB gave a very pretty talk about looking at even-denominator fractional quantum Hall physics in extremely high quality bilayer graphene.  Using ordinary metal electrodes apparently limits how nice the effects can be in the bilayer, because the metal is polycrystalline and that disorder in local work function can actually matter.   By using graphite as both the bottom gate and the top gate (that is, a vertical stack of graphite/boron nitride/bilayer graphene/boron nitride/graphite), it is possible to tune both the filling fraction (ratio of carrier density to magnetic field) in the bilayer and the vertical electric field across the bilayer (which can polarize the states to sit more in one layer or the other).  Capacitance measurements (e.g., between the top gate and the bottom gate, or between either gate and the bilayer) can show extremely clean quantum hall data.

Sankar Das Sarma of Maryland spoke about the current status of trying to use Majorana fermions in semiconductor wire/superconductor electrode structures for topological quantum computing.  For a review of the topic overall, see here.   This is the approach to quantum computing that Microsoft is backing.  The talk was vintage Das Sarma, which is to say, full of amusing quotes, like "Physicists' record at predicting technological breakthroughs is dismal!" and "Just because something is obvious doesn't mean that you should not take it seriously."  The short version:  There has been great progress in the last 8 years, from the initial report of possible signatures of effective Majorana fermions in individual InSb nanowires contacted by NbTiN superconductors, to very clean looking data involving InAs nanowires with single-crystal, epitaxial Al contacts.  However, it remains very challenging to prove definitively that one has Majoranas rather than nearly-look-alike Andreev bound states.

In case you are interested in advanced (beyond-first-year) undergraduate labs and how to do them well, you should check out the University of Minnesota's site, as well as the ALPhA group from the AAPT.   There is also an analogous group working on projects to integrate computation into the undergraduate physics curriculum.

One potentially very big physics news story that I heard about during the day, but won't be here to see the relevant talk: [Update:  Hat tip to a colleague who pointed out that there is a talk tomorrow morning that will cover this!]  There are back-to-back brand new papers in Nature today by Yuan Cao et al. from the Jarillo-Herrero group at MIT.  (The URLs don't work yet for the articles, but I'll paste in what Nature has anyway.)  The first paper apparently shows that when you take two graphene layers and rotationally offset them from graphite-like stacking by 1.05 degrees (!), the resulting bilayer is alleged to be a Mott insulator.  The idea appears to be that the lateral Moire superlattice that results from the rotational offset gives you very flat minibands, so that electron-electron interactions are enough to lock the carriers into place when the number density of carriers is tuned correctly.  The second paper apparently (since I can't read it yet) shows that as the carrier density is tuned away from the Mott insulator filling, the system becomes a superconductor (!!), with a critical temperature of 1.7 K.  This isn't particularly high, but the idea of tuning carrier density away from a Mott state and getting superconductivity is basically the heart of our (incomplete) understanding of the copper oxide high temperature superconductors.  This is very exciting, as summarized in this News and Views commentary and this news report.  

Sunday, March 04, 2018

APS March Meeting 2018

It's that time of year again:  the running of the physicists - the annual APS March Meeting, a gathering of thousands of (mostly condensed matter) physicists.  These are (sarcasm mode on) famously rowdy conferences (/sarcasm).  This year the meeting is in Los Angeles.  I came to the 1998 March Meeting in LA, having just accepted a fall '98 postdoctoral position at Bell Labs, and shortly after the LA convention center had been renovated.   At the time, the area around the convention center was really a bit of a pit - very few restaurants, few nearby hotels, and quite a bit of vacant and/or low-end commercial property.  Fast forward 20 years, and now the area around the meeting looks a lot more like a sanitized Times Square, with big video advertisements and tons of high-end flashy stores.

Anyway, I will try again to write up some of what I see until I have to leave on Thursday morning, though this year between DCMP business, department chair constraints, and other deadlines, I might be more concise or abbreviated.  (As I wrote last year, if you're at the meeting and you don't already have a copy, now is the perfect time to swing by the Cambridge University Press exhibit at the trade show and pick up my book :-) ).

Thursday, February 22, 2018

Vibranium and its properties

Fictional materials can be a fun starting point for thinking about and maybe teaching about material properties.  Back in 2015 I touched on this here, when I mentioned a few of my favorite science fictional materials (more here, here, and here).  

With the release of Black Panther (BP), we now have much more information about the apparent properties of vibranium in the Marvel Cinematic Universe.   

Vibranium is pretty amazing stuff - like many fictional materials, it sometimes seems to have whatever properties are necessary to the story.  As a physicist I'm not qualified to talk about its putative medicinal properties mentioned in BP, but its physical properties are fun to consider.  Vibranium appears to be a strong, light, silvery metal (see here), and it also has some remarkable abilities in terms of taking macroscopic kinetic energy (e.g., of a projectile) and either dissipating it (look at the spent bullets in the previously linked video) or, according to BP, storing that energy for later release.  At the same time, Captain America's vibranium shield is able to bounce around with incredibly little dissipation of energy, prompting the Spider-Man quote at right.

In the spirit of handwaving physics, I think I've got this figured out.  

In all solids, there is some coupling between the deformation of the atomic lattice and the electronic states of the material (here is a nice set of slides about this).  When we talk about lattice vibrations, this is the electron-phonon coupling, and it is responsible for the transfer of energy from the electrons to the lattice (that is, this is why the actual lattice of atoms in a wire gets warm when you drive electrical current through the material).  The e-ph coupling is also responsible for the interaction that pairs up electrons in conventional superconductors.  If the electron-phonon coupling is really strong, the deformation of the lattice can basically trap the electron - this is polaron physics.  In some insulating materials, where charge is distributed asymmetrically within the unit cell of the crystal, deformation of the material can lead to big displacements of charge, with a corresponding buildup of a voltage across the system - this is piezoelectricity.  

The ability of vibranium to absorb kinetic energy, store it, and then later discharge it with a flash suggests to me that lattice deformation ends up pumping energy into the electrons somehow.  Moreover, that electronically excited state must somehow be metastable for tens of seconds.  Ordinary electronic excitations in metals are very short-lived (e.g., tens of femtoseconds for individual excited quasiparticles to lose their energy to other electrons).  Gapped-off collective electronic states (like the superconducting condensate) can last very long times.  We have no evidence that vibranium is superconducting (though there are some interesting maglev trains in Wakanda).  That makes me think that what's really going on involves some topologically protected electronic states.  Clearly we need to run experiments (such as scanning SQUID, scanning NV center, or microwave impedance microscopy) to search for the presence of edge currents in percussively excited vibranium films to test this idea.


Thursday, February 15, 2018

Physics in the kitchen: Jamming

Last weekend while making dinner, I came across a great example of emergent physics.  What you see here are a few hundred grams of vacuum-packed arborio rice:
The rice consists of a few thousand oblong grains whose only important interaction here is a mutual "hard core" repulsion.  A chemist would say they are "sterically hindered".  An average person would say that the grains can't overlap.  The vacuum packing means that the whole ensemble of grains is being squeezed by the pressure of the surrounding air, about 101,000 N/m² (14.7 pounds per square inch).  The result is readily seen in the right-hand image:  the ensemble of rice forms a mechanically rigid rectangular block.  Take my word for it, it was hard as a rock. 
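That squeezing force is surprisingly large. Here is a quick back-of-the-envelope check (the 10 cm × 15 cm package face is my guess, not a measurement from the post):

```python
# Atmospheric pressure squeezing the vacuum-packed rice, in the two
# unit systems quoted in the post, plus the force on one package face.

P_ATM_PA = 101_325.0      # standard atmosphere, N/m^2 (Pa)
PA_PER_PSI = 6_894.757    # 1 pound per square inch, in Pa

p_psi = P_ATM_PA / PA_PER_PSI
print(f"{P_ATM_PA:.0f} N/m^2 = {p_psi:.1f} psi")   # ~14.7 psi

# Force on a hypothetical 10 cm x 15 cm face of the package:
area_m2 = 0.10 * 0.15
force_N = P_ATM_PA * area_m2
print(f"force on that face: {force_N:.0f} N")      # ~1520 N
```

About 1500 N on one face, i.e., the weight of a ~150 kg mass pressing on the grains, which is why the block feels rock hard.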

However, as soon as I cut a little hole in the plastic packaging and thus removed the external pressure on the rice, the ensemble of rice grains lost all of its rigidity and integrity, and was as soft and deformable as a beanbag, as shown here. 

So, what is going on here?  How come this collection of little hard objects acts as a single mechanically integral block when squeezed under pressure?  How much pressure does it take to get this kind of emergent rigidity?  Does that pressure depend on the size and shape of the grains, and whether they are deformable? 

This onset of collective resistance to deformation is called jamming.  This situation is entirely classical, and yet the physics is very rich.  This problem is clearly one of classical statistical physics, since it is only well defined in the aggregate and quantum mechanics is unimportant.  At the same time, it's very challenging, because systems like this are inherently not in thermal equilibrium.  When jammed, the particles are mechanically hindered and therefore can't explore lots of possible configurations.   It is possible to map out a kind of phase diagram of how rigid or jammed a system is, as a function of free volume, mechanical load from the outside, and temperature (or average kinetic energy of the particles).   For good discussions of this, try here (pdf), or more technically here and here.   Control over jamming can be very useful, as in this kind of gripping manipulator (see here for video).  
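One quantitative handle on jamming is contact counting: frictionless spheres become mechanically rigid when the average number of contacts per particle reaches the isostatic value \(z = 2d\) (4 in two dimensions, 6 in three). A minimal sketch of that bookkeeping, with a hypothetical hand-placed configuration rather than a real simulation:

```python
import numpy as np

def mean_contact_number(positions, radii, tol=1e-6):
    """Average contacts per particle.  A contact means the center-to-center
    distance is within tol of the sum of the two radii (touching or
    slightly overlapping)."""
    n = len(positions)
    contacts = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            if d <= radii[i] + radii[j] + tol:
                contacts += 1
    return 2 * contacts / n   # each contact is shared by two particles

# Four unit disks on a square lattice of side 2: each touches two
# neighbors, below the 2D isostatic value z = 4, so not yet jammed.
pos = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
print(mean_contact_number(pos, np.ones(4)))   # 2.0
```

Squeezing such a packing (as the atmosphere does to the rice) pushes the contact number up toward and past the isostatic threshold, which is where the collective rigidity appears.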



Tuesday, February 13, 2018

Rice Cleanroom position

In case someone out there is interested, Rice is hiring a cleanroom research scientist.  The official job listing is here.  To be clear:  This is not a soft money position.

The Cleanroom Facility at Rice University is a shared equipment facility for enabling micro- and nanofabrication research in the Houston metropolitan area. Current equipment includes deposition, lithography, etching and a number of characterization tools. This facility attracts users from the George R. Brown School of Engineering and the Wiess School of Natural Sciences, as well as regional universities and corporations whose research programs require advanced fabrication and patterning at the micro- and nanoscale. A new state-of-the-art facility is currently being constructed and is expected to be in operation in summer 2018. Additionally, with new initiatives in Molecular Nanotechnology, the Rice University cleanroom is poised to see significant growth in the next 5-10 years. This job announcement seeks a motivated individual who can lead, manage, teach and grow this advanced facility.

The job responsibilities of a Cleanroom Research Scientist include conducting periodic and scheduled maintenance and safety checks of equipment and running qualification and calibration recipes. The incumbent will be expected to maintain the highest safety standards, author and update standard operating procedures (SOPs), and maintain and calibrate processes for all equipment. The Cleanroom Research Scientist will help facilitate new equipment installation, contact vendors and manufacturers, and work in tandem with them to resolve equipment issues in a timely and safe manner. Efficient inventory management of parts, chemicals and supplies will be required. The Cleanroom Research Scientist will also oversee personal one-on-one training of users. Additionally, the incumbent will help develop cleanroom laboratory short courses that provide lectures to small groups of students. The incumbent will also coordinate with technical staff members in the Rice SEA (Shared Equipment Authority).



Saturday, February 10, 2018

This week in the arxiv

Back when my blogging was young, I had a semi-regular posting of papers that caught my eye that week on the condensed matter part of the arxiv.  As I got busy doing many things, I'd let that fall by the wayside, but I'm going to try to restart it at some rate.  I generally haven't had the time to read these in any detail, and my comments should not be taken too seriously, but these jumped out at me.

arxiv:1802.01045 - Sangwan and Hersam; Electronic transport in two-dimensional materials
If you've been paying any attention to condensed matter and materials physics in the last 14 years, you've noticed a huge amount of work on genuinely two-dimensional materials, often exfoliated from the bulk as in the Scotch tape method, or grown by chemical vapor deposition.  This looks like a nice review of many of the relevant issues, and contains lots of references for interested students to chase if they want to learn more.

arxiv:1802.01385 - Fröhlich; Chiral Anomaly, Topological Field Theory, and Novel States of Matter
While quite mathematical (relativistic field theory always has a certain intimidating quality, at least to me), this also looks like a reasonably pedagogical introduction of topological aspects of condensed matter.  This is not for the general reader, but I'm hopeful that if I put in the time and read it carefully, I will gain a better understanding of some of the topological discussions I hear these days about things like axion insulators and chiral anomalies.

arXiv:1802.01339 - Ugeda et al.; Observation of Topologically Protected States at Crystalline Phase Boundaries in Single-layer WSe2
arXiv:1802.02999 - Huang et al.; Emergence of Topologically Protected Helical States in Minimally Twisted Bilayer Graphene
arXiv:1802.02585 - Schindler et al.; Higher-Order Topology in Bismuth
Remember back when people didn't think about topology in the band structure of materials?  Seems like a million years ago, now that a whole lot of systems (often 2d materials or interfaces between materials) seem to show evidence of topologically special edge states.   These are three examples from just this week of new measurements (all using scanning tunneling microscopy as part of the toolset, to image edge states directly) reporting previously unobserved topological states at edges or surface features.