## Saturday, August 18, 2018

### Phonons and negative mass

There has been quite a bit of media attention given to this paper, which looks at whether sound waves involve the transport of mass (and therefore whether they should interact with gravitational fields and produce gravitational fields of their own).

The authors conclude that, under certain circumstances, sound wavepackets (phonons, in the limit where we really think about quantized excitations) rise in a downward-directed gravitational field.  Considered as a distinct object, such a wavepacket carries some amount of "invariant mass" as it propagates along, and that amount turns out to be negative.

Now, most people familiar with the physics of conventional sound would say, hang on, how do sound waves in some medium transport any mass at all?  That is, we think of ordinary sound in a gas like air as pressure waves, with compressions and rarefactions, regions of alternating enhanced and decreased density (and pressure).  In the limit of small amplitudes (the "linear regime"), we can consider the density variations in the wave to be mathematically small, meaning that we can use the parameter $\delta \rho/\rho_{0}$ as a small perturbation, where $\rho_{0}$ is the average density and $\delta \rho$ is the change.  Linear regime sound usually doesn't transport mass.  The same is true for sound in the linear regime in a conventional liquid or a solid.

In the paper, the authors do an analysis where they find that the mass transported by sound is proportional, with a negative sign, to $dc_{\mathrm{s}}/dP$, the rate at which the speed of sound $c_{\mathrm{s}}$ changes with pressure for that medium.  (Note that for an ideal gas, $c_{\mathrm{s}} = \sqrt{\gamma k_{\mathrm{B}}T/m}$, where $\gamma$ is the ratio of heat capacities at constant pressure and volume, $m$ is the mass of a gas molecule, and $T$ is the temperature.  There is no explicit pressure dependence, and sound is "massless" in that case.)
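As a quick numeric check of that ideal gas formula (my own sketch, not from the paper; the room-temperature nitrogen numbers are just illustrative):

```python
import math

def sound_speed_ideal_gas(T, m, gamma=7/5):
    """c_s = sqrt(gamma * k_B * T / m) for an ideal gas: note that the
    pressure P never appears, so dc_s/dP = 0 at fixed temperature."""
    k_B = 1.380649e-23   # Boltzmann constant, J/K
    return math.sqrt(gamma * k_B * T / m)

# Diatomic nitrogen at room temperature (gamma = 7/5 for a diatomic gas)
m_N2 = 28 * 1.66053906660e-27   # molecular mass, kg
c = sound_speed_ideal_gas(300.0, m_N2)
print(f"c_s ≈ {c:.0f} m/s")     # ≈ 353 m/s, close to air's ~347 m/s at 300 K
```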

I admit that I don't follow all the details, but it seems to me that the authors have found that for a nonlinear medium such that $dc_{\mathrm{s}}/dP > 0$, sound wavepackets effectively carry a bit less mass than an equivalent volume of the surrounding medium.  That means that they experience buoyancy (they "fall up" in a downward-directed gravitational field) and exert an effectively negative gravitational potential relative to the background medium.  It's a neat result, and I can see circumstances where it might be important (e.g. sound waves in neutron stars, where the density is very high and you could imagine astrophysical consequences).  That being said, perhaps someone in the comments can explain why this is being portrayed as so surprising - I may be missing something.

## Tuesday, August 14, 2018

### APS March Meeting 2019 - DCMP invited symposia, DMP focused topics

A reminder to my condensed matter colleagues who go to the APS March Meeting:  We know the quality of the meeting depends strongly on getting good invited talks, the 30+6 minute talks that either come grouped together (an "invited session" or "invited symposium") or are sprinkled individually among the contributed sessions.

Now is the time to put together nominations for these things.  The more high quality nominations, the better the content of the meeting.

The APS Division of Condensed Matter Physics is seeking nominations for invited symposia.  See here for the details.  The online submission deadline is August 24th!

Similarly, the APS Division of Materials Physics is seeking nominations for invited talks as part of their Focus Topic sessions.  The list of Focus Topics is here.  The online submission deadline for these is August 29th.

## Sunday, August 12, 2018

### What is (dielectric) polarization?

This post is an indirect follow-on from here, and was spawned by a request that I discuss the "modern theory of polarization".  I have to say, this has been very educational for me.   Before I try to give a very simple explanation of the issues, those interested in some more technical meat should look here, or here, or here, or at this nice blog post.

Colloquially, an electric dipole is an overall neutral object with some separation between its positive and negative charge.  A great example is a water molecule, which has a little bit of excess negative charge on the oxygen atom, and a little deficit of electrons on the hydrogen atoms.

Once we pick an origin for our coordinate system, we can define the electric dipole moment of some charge distribution as $\mathbf{p} \equiv \int \mathbf{r}\rho(\mathbf{r}) d^{3}\mathbf{r}$, where $\rho$ is the local charge density.  Often we care about the induced dipole, the dipole moment that is produced when some object like a molecule has its charges rearrange due to an applied electric field.  In that case, $\mathbf{p}_{\mathrm{ind}} = \alpha \cdot \mathbf{E}$, where $\alpha$ is the polarizability.  (In general $\alpha$ is a tensor, because $\mathbf{p}$ and $\mathbf{E}$ don't have to point in the same direction.)
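For point charges, that integral becomes a sum, $\mathbf{p} = \sum_i q_i \mathbf{r}_i$.  Here's a toy sketch for a water-like molecule (my own example: the geometry is the experimental one, but the $+0.33e$ partial charge on each hydrogen is an assumed illustrative value, since real water's charge distribution is continuous):

```python
import math

# Discrete version of p = ∫ r rho(r) d^3r for point partial charges.
# Toy model of water: O at the origin, H's at the experimental geometry;
# the +0.33e partial charge on each H is an assumed illustrative value.
e = 1.602176634e-19                     # elementary charge, C
d = 0.9584e-10                          # O-H bond length, m
half = math.radians(104.45 / 2)         # half the H-O-H bond angle

q_H = 0.33 * e
q_O = -2 * q_H                          # overall neutral molecule

charges = [
    (q_O, (0.0, 0.0, 0.0)),
    (q_H, ( d * math.sin(half), 0.0, d * math.cos(half))),
    (q_H, (-d * math.sin(half), 0.0, d * math.cos(half))),
]
p = [sum(q * r[i] for q, r in charges) for i in range(3)]
p_debye = math.sqrt(sum(c * c for c in p)) / 3.33564e-30  # C·m -> debye
print(f"|p| ≈ {p_debye:.2f} D")   # close to water's measured ~1.85 D
```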

If we stick a slab of some insulator between metal plates and apply a voltage across the plates to generate an electric field, we learn in first-year undergrad physics that the charges inside the insulator slightly redistribute themselves - the material polarizes.  If we imagine dividing the material into little chunks, we can define the polarization $\mathbf{P}$ as the electric dipole moment per unit volume.  For a solid, we can pick some volume and define $\mathbf{P} = \mathbf{p}/V$, where $V$ is the volume over which the integral is done for calculating $\mathbf{p}$.

We can go farther than that.  If we say that the insulator is built up out of a bunch of little polarizable objects each with polarizability $\alpha$, then we can do a self-consistent calculation, where we let each polarizable object see both the externally applied electric field and the electric field from its neighboring dipoles.  Then we can solve for $\mathbf{P}$ and therefore the relative dielectric constant in terms of $\alpha$.  The result is called the Clausius-Mossotti relation.
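To make that concrete, here's a sketch (my own illustrative numbers, not any specific material; the polarizability is the textbook $4\pi \epsilon_{0} a^{3}$ of a conducting sphere of radius $a$):

```python
import math

def clausius_mossotti(n, alpha):
    """Relative dielectric constant from number density n (m^-3) and
    polarizability alpha (SI, C·m^2/V):
    (eps_r - 1)/(eps_r + 2) = n*alpha/(3*eps0), solved for eps_r."""
    eps0 = 8.8541878128e-12
    x = n * alpha / (3 * eps0)
    return (1 + 2 * x) / (1 - x)

# Illustrative numbers only: alpha of a 1 Angstrom conducting sphere,
# packed at a molecular-solid-like density n = 3e28 per m^3.
eps0 = 8.8541878128e-12
alpha = 4 * math.pi * eps0 * (1e-10) ** 3
eps_r = clausius_mossotti(3e28, alpha)
print(f"eps_r ≈ {eps_r:.3f}")   # ≈ 1.431
```

In the dilute limit this reduces to the naive $\epsilon_{r} \approx 1 + n\alpha/\epsilon_{0}$; the self-consistency matters as the density grows.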

In crystalline solids, however, it turns out that there is a serious problem!  As explained clearly here, because the charge in a crystal is distributed periodically in space, the definition of $\mathbf{P}$ given above is ambiguous because there are many ways to define the "unit cell" over which the integral is performed.  This is a big deal.

The "modern theory of polarization" resolves this problem, and actually involves the electronic Berry phase.  First, it's important to remember that polarization is really defined experimentally by how much charge flows when that capacitor described above has the voltage applied across it.  So, the problem we're really trying to solve is, find the integrated current that flows when an electric field is ramped up to some value across a periodic solid.  We can find that by adding up all the contributions of the different electronic states that are labeled by wavevectors $\mathbf{k}$.  For each $\mathbf{k}$ in a given band, there is a contribution that has to do with how the energy varies with $\mathbf{k}$ (that's the part that looks roughly like a classical velocity), and there's a second piece that has to do with how the actual electronic wavefunctions vary with $\mathbf{k}$, which is proportional to the Berry curvature.   If you add up all the $\mathbf{k}$ contributions over the filled electronic states in the insulator, the first terms all cancel out, but the second terms don't, and actually give you a well-defined amount of charge.

Bottom line:  In an insulating crystal, the actual polarization that shows up in an applied electric field comes from how the electronic states vary with $\mathbf{k}$ within the filled bands.  This is a really surprising and deep result, and it was only realized in the 1990s.  It's pretty neat that even "simple" things like crystalline insulators can still contain surprises (in this case, one that foreshadowed the whole topological insulator boom).
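A minimal numeric illustration of that bottom line (my own sketch using the textbook two-site SSH chain, not the calculation from the linked references): the Zak phase, the Berry phase of a filled 1D band, is 0 or $\pi$ depending on how the Bloch eigenvector winds with $k$, and it is exactly the polarization-like quantity.

```python
import cmath, math

def zak_phase(v, w, N=400):
    """Zak (Berry) phase of the lower band of the two-site SSH chain with
    intra/inter-cell hoppings v, w, from a discrete Wilson loop of
    wavefunction overlaps around the Brillouin zone."""
    def u(k):  # lower-band Bloch eigenvector (periodic in k)
        h = v + w * cmath.exp(1j * k)
        return (1 / math.sqrt(2), -h / abs(h) / math.sqrt(2))
    prod = 1.0 + 0.0j
    for j in range(N):
        a = u(2 * math.pi * j / N)
        b = u(2 * math.pi * (j + 1) / N)
        prod *= a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    return abs(cmath.log(prod / abs(prod)).imag)

print(zak_phase(1.5, 0.5))   # ≈ 0:  one dimerization pattern
print(zak_phase(0.5, 1.5))   # ≈ pi: the other dimerization pattern
```

Same band energies in both cases; only how the states vary with $k$ differs, and that is what the phase detects.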

## Thursday, August 09, 2018

### Hydraulic jump: New insights into a very old phenomenon

Ever since I learned about them, I thought that hydraulic jumps were cool.  As I wrote here, a hydraulic jump is an analog of a standing shockwave.  The key dimensionless parameter in a shockwave in a gas is the Mach number, the ratio between the fluid speed $v$ and the local speed of sound, $c_{\mathrm{s}}$.   The gas goes from supersonic ($\mathrm{Ma} > 1$) on one side of the shock to subsonic ($\mathrm{Ma} < 1$) on the other side.

For a looong time, the standard analysis of hydraulic jumps assumed that the relevant dimensionless number here was the Froude number, the ratio of fluid speed to the speed of (gravitationally driven) shallow water waves, $\sqrt{g h}$, where $g$ is the gravitational acceleration and $h$ is the thickness of the liquid (say on the thin side of the jump).  That's basically correct for macroscopic jumps that you might see in a canal or in my previous example.

However, a group from Cambridge University has shown that this is not the right way to think about the kind of hydraulic jump you see in your sink when the stream of water from the faucet hits the basin.  (Sorry that I can't find a non-pay link to the paper.)  They show this conclusively by the very simple, direct method of producing hydraulic jumps by shooting water streams horizontally onto a wall, and vertically onto a "ceiling".  The fact that hydraulic jumps look the same in all these cases clearly shows that gravity can't be playing the dominant role here.  Instead, the correct analysis has to account for not just gravity but also surface tension.  They do a general treatment (which is quite elegant and understandable to fluid mechanics-literate undergrads) and find that the condition for a hydraulic jump to form is now $\mathrm{We}^{-1} + \mathrm{Fr}^{-2} = 1$, where $\mathrm{Fr} \sim v/\sqrt{g h}$ as usual, and the Weber number $\mathrm{We} \sim \rho v^{2} h/\gamma$, where $\rho$ is the fluid density and $\gamma$ is the surface tension.   The authors do a convincing analysis of experimental data with this model, and it works well.  I think it's very cool that we can still get new insights into phenomena, and this is an example understandable at the undergrad level where some textbook treatments will literally have to be rewritten.
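To put numbers on the criterion, here's a quick sketch (my own illustrative water-sheet numbers, not data from the paper):

```python
import math

def jump_criterion(v, h, rho=998.0, gamma=0.072, g=9.81):
    """We^-1 + Fr^-2 for a thin sheet of water of speed v (m/s) and
    thickness h (m); the jump forms where this quantity reaches 1.
    rho, gamma are for water near room temperature."""
    Fr2 = v**2 / (g * h)            # Froude number squared
    We = rho * v**2 * h / gamma     # Weber number
    return 1.0 / We + 1.0 / Fr2

# Fast, thin sheet near where the stream lands: criterion well below 1
print(jump_criterion(v=1.0, h=0.5e-3))    # ≈ 0.15, no jump yet
# Slower, thicker flow farther out: criterion reaches 1, jump forms here
print(jump_criterion(v=0.25, h=1.5e-3))   # ≈ 1.0
```

Note that for the thin, fast sheet the surface-tension ($\mathrm{We}^{-1}$) term dominates the gravity ($\mathrm{Fr}^{-2}$) term, which is the paper's point.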

## Tuesday, August 07, 2018

### Faculty position at Rice - experimental atomic/molecular/optical

Faculty Position in Experimental Atomic/Molecular/Optical Physics at Rice University

The Department of Physics and Astronomy at Rice University in Houston, TX (http://physics.rice.edu/) invites applications for a tenure-track faculty position in experimental atomic, molecular, and optical physics.  The Department expects to make an appointment at the assistant professor level. Applicants should have an outstanding research record and recognizable potential for excellence in teaching and mentoring at the undergraduate and graduate levels. The successful candidate is expected to establish a distinguished, externally funded research program and support the educational and service missions of the Department and University.

Applicants must have a PhD in physics or related field, and they should submit the following: (1) cover letter; (2) curriculum vitae; (3) research statement; (4) three publications; (5) teaching statement; and (6) the names, professional affiliations, and email addresses of three references. For full details and to apply, please visit: http://jobs.rice.edu/postings/16140. The review of applications will begin November 1, 2018, but all those received by December 1, 2018 will be assured full consideration. The appointment is expected to start in July 2019.  Further inquiries should be directed to the chair of the search committee, Prof. Thomas C. Killian (killian@rice.edu).

Rice University is an Equal Opportunity Employer with commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability or protected veteran status.

## Tuesday, July 31, 2018

### What is Berry phase?

On the road to discussing the Modern Theory of Polarization (e.g., pdf), it's necessary to talk about Berry phase - here, unlike many uses of the word on this blog, "phase" actually refers to a phase angle, as in a complex number $e^{i\phi}$.   The Berry phase, named for Michael Berry, is a so-called geometric phase, in that the value of the phase depends on the "space" itself and the trajectory the system takes.  (For reference, the original paper is here (pdf), a nice talk about this is here, and reviews on how this shows up in electronic properties are here and here.)

A similar-in-spirit angle shows up in the problem of "parallel transport" (jargony wiki) along curved surfaces.  Imagine taking a walk while holding an arrow, initially pointed east, say.  You walk, always keeping the arrow pointed in the local direction of east, in the closed path shown at right.  On a flat surface, when you get back to your starting point, the arrow is pointing in the same direction it did initially.   On a curved (say spherical) surface, though, something different has happened.  As shown, when you get back to your starting point, the arrow has rotated from its initial position, despite the fact that you always kept it pointed in the local east direction.  The angle of rotation is a geometric phase analogous to Berry phase.  The issue is that the local definition of "east" varies over the surface of the sphere.   In more mathematical language, the basis vectors (that point in the local cardinal directions) vary in space.  If you want to keep track of how the arrow vector changes along the path, you have to account for both the changing of the numerical components of the vector along each basis direction, and the change in the basis vectors themselves.  This kind of thing crops up in general relativity, where it is calculated using Christoffel symbols.

So what about the actual Berry phase?  To deal with this with a minimum of math, it's best to use some of the language that Feynman employed in his popular book QED.   The actual math is laid out here.  In Feynman's language, we can picture the quantum mechanical phase associated with some quantum state as the hand of a stopwatch, winding around.  For a state $| \psi\rangle$ (an energy eigenstate, one of the "energy levels" of our system) with energy $E$, we learn in quantum mechanics that the phase accumulates at a rate of $E/\hbar$, so that the phase angle after some time $t$ is given by $\Delta \phi = Et/\hbar$.   Now suppose we were able to mess about with our system, so that energy levels varied as a function of some tuning parameter $\lambda$.  For example, maybe we can dial around an externally applied electric field by applying a voltage to some capacitor plates.  If we do this slowly (adiabatically), then the system always stays in its instantaneous version of that state with instantaneous energy $E(\lambda)$.  So, in the Feynman watch picture, sometimes the stopwatch is winding fast, sometimes it's winding slow, depending on the instantaneous value of $E(\lambda)$.  You might think that the phase that would be racked up would just be found by adding up the little contributions, $\Delta \phi = \int (E(\lambda(t))/\hbar) dt$.

However, this misses something!  In the parallel transport problem above, to get the right total answer about how the vector rotates globally we have to keep track of how the basis vectors vary along the path.  Here, it turns out that we have to keep track of how the state itself, $| \psi \rangle$, varies locally with $\lambda$.  To stretch the stopwatch analogy, imagine that the hand of the stopwatch can also gain or lose time along the way because the positioning of the numbers on the watch face (determined by $| \psi \rangle$ ) is actually also varying along the path.

[Mathematically, that second contribution to the phase adds up to be $\int \langle \psi(\lambda)| \partial_{\lambda}| \psi(\lambda) \rangle d \lambda$.  Generally $\lambda$ could be a vectorial thing with multiple components, so that $\partial_{\lambda}$ would be a gradient operator with respect to $\lambda$, and the integral would be a line integral along some trajectory of $\lambda$.  It turns out that if you want to, you can define the integrand to be an effective vector potential called the Berry connection.  The curl of that vector potential is some effective magnetic field, called the Berry curvature.  Then the line integral above, if it's around some closed path in $\lambda$, is equal to the flux of that effective magnetic field through the closed path, and the accumulated Berry phase around that closed path is then analogous to the Aharonov-Bohm phase.]
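To make the bracketed math concrete, here's a minimal numeric sketch (my own toy example, the textbook spin-1/2 whose field direction is slowly swept around a cone, not anything from the linked papers): the Berry phase around the closed loop comes out to minus half the solid angle the field direction traces out.

```python
import cmath, math

def berry_phase(theta, N=2000):
    """Berry phase for a spin-1/2 state aligned with a field whose
    direction sweeps a full cone of polar angle theta, via the
    gauge-invariant discrete form gamma = -Im ln prod_k <psi_k|psi_{k+1}>."""
    def state(phi):  # "up along n" spinor for field direction (theta, phi)
        return (math.cos(theta / 2),
                cmath.exp(1j * phi) * math.sin(theta / 2))
    prod = 1.0 + 0.0j
    for k in range(N):
        a = state(2 * math.pi * k / N)
        b = state(2 * math.pi * (k + 1) / N)
        prod *= a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    return -cmath.log(prod / abs(prod)).imag

theta = math.radians(60)
gamma = berry_phase(theta)
print(gamma, -math.pi * (1 - math.cos(theta)))  # both ≈ -(solid angle)/2
```

Note that the dynamical ($Et/\hbar$) piece never appears: the discrete product of overlaps isolates exactly the geometric part.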

Why is any of this of interest in condensed matter?

Well, one approach to worrying about the electronic properties of conducting (crystalline) materials is to think about starting off some electronic wavepacket, initially centered around some particular Bloch state at an initial (crystal) momentum $\mathbf{p} = \hbar \mathbf{k}$.  Then we let that wavepacket propagate around, following the rules of "semiclassical dynamics" - the idea that there is some Fermi velocity $\partial E(\mathbf{k})/\partial \mathbf{k}$ (related to how the wavepacket racks up phase as it propagates in space), and we basically write down $\mathbf{F} = m\mathbf{a}$ using electric and magnetic fields.  Here, there is the usual phase that adds up from the wavepacket propagating in space (the Fermi velocity piece), but there can be an additional Berry phase which here comes from how the Bloch states actually vary throughout $\mathbf{k}$-space.  That can be written in terms of an "anomalous velocity" (anomalous because it's not from the usual Fermi velocity picture), and can lead to things like the anomalous Hall effect and a bunch of other measurable consequences, including topological fun.

## Monday, July 23, 2018

### Math, beauty, and condensed matter physics

There is a lot of discussion these days about the beauty of mathematics in physics, and whether some ideas about mathematical elegance have led the high energy theory community down the wrong path.  And yet, despite that, high energy theory still seems like a very popular professed interest of graduating physics majors.  This has led me to identify what I think is another sociological challenge to be overcome by condensed matter in the broader consciousness.

Physics is all about using mathematics to model the world around us, and experiments are one way we find or constrain the mathematical rules that govern the universe and everything in it.  When we are taught math in school, we end up being strongly biased by the methods we learn, so that we are trained to like exact analytical solutions and feel uncomfortable with approximations.  You remember back when you took algebra, and you had to solve quadratic equations?  We were taught how to factor polynomials as one way of finding the solution, and somehow if the solution didn’t work out to x being an integer, something felt wrong – the problems we’d been solving up to that point had integer solutions, and it was tempting to label problems that didn’t fit that mold as not really nicely solvable.  Then you were taught the quadratic formula, with its square root, and you eventually came to peace with the idea of irrational numbers, and eventually imaginary numbers.  In more advanced high school algebra courses, students run across so-called transcendental equations, like $(x-3)e^{x} + 3 = 0$.  There is no clean, algorithmic way to get an exact analytic solution to this.  Instead it has to be solved numerically, using some computational approach that can give you an approximate answer, good to as many digits as you care to grind through on your computer.
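Here's a sketch of what "solved numerically" means for that equation (my own example, using the simplest possible method):

```python
import math

def f(x):
    return (x - 3) * math.exp(x) + 3

# x = 0 solves this exactly, but the other root is irrational and has no
# closed-form expression; bisection grinds it out digit by digit.
lo, hi = 2.0, 3.0          # f(2) < 0 and f(3) > 0, so a root is bracketed
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2
print(f"root ≈ {root:.6f}")   # ≈ 2.821439
```

Sixty halvings of the bracket pin the root down to machine precision, which is the sense in which the answer is "good to as many digits as you care to grind through."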

The same sort of thing happens again when we learn calculus.  When we are taught how to differentiate and integrate, we are taught the definitions of those operations (roughly speaking, slope of a function and area under a function, respectively) and algorithmic rules to apply to comparatively simple functions.  There are tricks, variable changes, and substitutions, but in the end, we are first taught how to solve problems “in closed form” (with solutions comprising functions that are common enough to have defined names, like $\sin$ and $\cos$ on the simple end, and more specialized examples like error functions and gamma functions on the more exotic side).  However, it turns out that there are many, many integrals that don’t have closed form solutions, and instead can only be solved approximately, through numerical methods.  The exact same situation arises in solving differential equations.  Legendre, Laguerre, and Hermite polynomials, Bessel and Hankel functions, and my all-time favorite, the confluent hypergeometric function, can crop up, but generically, if you want to solve a complicated boundary value problem, you probably need numerical methods rather than analytic solutions.  It can take years for people to become comfortable with the idea that numerical solutions have the same legitimacy as analytical solutions.
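The same goes for integrals.  For instance (my own example), $\int_{0}^{\pi} \sin(x)/x \, dx$ has no elementary antiderivative - its value is, by definition, the special function $\mathrm{Si}(\pi)$ - but a few lines of numerics pin it down to many digits:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# sin(x)/x has no elementary antiderivative; its integral *defines* a
# named special function, Si(x).  Numerically it's as easy as any integrand.
sinc = lambda x: math.sin(x) / x if x != 0 else 1.0
print(simpson(sinc, 0.0, math.pi))   # ≈ 1.851937, i.e. Si(pi)
```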

I think condensed matter suffers from a similar culturally acquired bias.  Somehow there is a subtextual impression that high energy is clean and neat, with inherent mathematical elegance, thanks in part to (1) great marketing by high energy theorists, and (2) the fact that it deals with things that seem like they should be simple - fundamental particles and the vacuum.  At the same time, even high school chemistry students pick up pretty quickly that we actually can't solve many-electron quantum mechanics problems without a lot of approximations.  Condensed matter seems like it must be messy.  Our training, with its emphasis on exact analytic results, doesn't lay the groundwork for people to be receptive to condensed matter, even when it contains a lot of mathematical elegance and sometimes emergent exactitude.

## Wednesday, July 18, 2018

### Items of interest

While trying to write a few things (some for the blog, some not), I wanted to pass along some links of interest:

• APS March Meeting interested parties:  The time to submit nominations for invited sessions for the Division of Condensed Matter Physics is now (deadline of August 24).  See here.  As a member-at-large for DCMP, I've been involved in the process now for a couple of years, and lots of high quality nominations are the best way to get a really good meeting.  Please take the time to nominate!
• Similarly, now is the time to nominate people for DCMP offices (deadline of Sept. 1).
• There is a new tool available called Scimeter that is a rather interesting add-on to the arxiv.  It has done some textual analysis of all the preprints on the arxiv, so you can construct a word cloud for an author (see at right for mine, which is surprisingly dominated by "field effect transistor" - I guess I use that phrase too often) or group of authors; or you can search for similar authors based on that same word cloud analysis.  Additionally, the tool uses that analysis to compare breadth of research topics spanned by an author's papers.  Apparently I am 0.3 standard deviations more broad than the mean broadness, whatever that means.
• Thanks to a colleague, I stumbled on Fermat's Library, a great site that stockpiles some truly interesting and foundational papers across many disciplines and allows shared commenting in the margins (hence the Fermat reference).

## Sunday, July 08, 2018

### Physics in the kitchen: Frying tofu

I was going to title this post "On the emergence of spatial and temporal coherence in frying tofu", or "Frying tofu:  Time crystal?", but decided that simplicity has virtues.

I was doing some cooking yesterday, and I was frying some firm tofu in a large, deep skillet in my kitchen.  I'd cut the stuff into roughly 2 cm by 2 cm by 1 cm blocks, separated by a few mm from each other but mostly covering the whole cooking surface, and was frying them in a little oil (enough to coat the bottom of the skillet) when I noticed something striking, thanks to the oil reflecting the overhead light.  The bubbles forming in the oil under/around the tofu were appearing and popping in what looked to my eye like very regular intervals, at around 5 Hz.  Moreover (and this was the striking bit), the bubbles across a large part of the whole skillet seemed to be reasonably well synchronized.  This went on long enough (a couple of minutes, until I needed to flip the food) that I really should have gone to grab my camera, but I missed my chance to immortalize this on youtube because (a) I was cooking, and (b) I was trying to figure out if this was some optical illusion.

From the physics perspective, here was a driven nonequilibrium system (heated from below by a gas flame and conduction through the pan) that spontaneously picked out a frequency for temporal oscillations, and apparently synchronized the phase across the pan well.  Clearly I should have filmed this and called it a classical time crystal.   Would've been a cheap and tasty paper.  (I kid, I kid.)

What I think happened is this.  The bubbles in this case were produced by the moisture inside the tofu boiling into steam (due to the local temperature and heat flux) and escaping from the bottom (hottest) surface of the tofu into the oil to make bubbles.  There has to be some rate of steam formation set by the latent heat of vaporization for water, the heat flux (and thus thermal conductivity of the pan, oil, and tofu), and the local temperature (again involving the thermal conductivity and specific heat of the tofu).  The surface tension of the oil, its density, and the steam pressure figure into the bubble growth and how big the bubbles get before they pop.  I'm sure someone far more obsessive than I am could do serious dimensional analysis about this.  The bubbles then couple to each other via the surrounding fluid, and synched up because of that coupling (maybe like this example with flames).   This kind of self-organization happens all the time - here is a nice talk about this stuff.  This kind of synchronization is an example of universal, emergent physics.
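For what it's worth, the simplest toy description of this kind of phase synchronization is the Kuramoto model.  Here's a minimal sketch (illustrative parameters of my choosing, not a model of actual frying): oscillators with natural frequencies scattered around 5 Hz, each nudged toward the group's mean phase, lock together.

```python
import math, random

# Minimal Kuramoto-model sketch: N oscillators ("bubbling sites") with
# natural frequencies scattered around 5 Hz, each pulled toward the mean
# phase of the group (a stand-in for coupling through the surrounding fluid).
random.seed(1)
N, K, dt = 20, 5.0, 0.001
omega = [2 * math.pi * (5.0 + random.uniform(-0.3, 0.3)) for _ in range(N)]
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def coherence(phases):
    """Kuramoto order parameter |<e^{i theta}>|: ~0 incoherent, ~1 in sync."""
    re = sum(math.cos(t) for t in phases) / len(phases)
    im = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(re, im)

for _ in range(20000):  # 20 s of simulated time, forward-Euler steps
    mean = math.atan2(sum(math.sin(t) for t in theta),
                      sum(math.cos(t) for t in theta))
    theta = [t + (w + K * math.sin(mean - t)) * dt
             for t, w in zip(theta, omega)]
print(f"coherence after 20 s: {coherence(theta):.2f}")  # close to 1: synched up
```

Below a critical coupling strength the phases stay incoherent; above it, they lock, which is the universality the talk linked above is about.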

## Tuesday, July 03, 2018

### A metal superconducting transistor (?!)

A paper was published yesterday in Nature Nanotechnology that is quite surprising, at least to me, and I thought I should point it out.

The authors make superconducting wires (e-beam evaporated Ti in the main text, Al in the supporting information) that appear to be reasonably "good metals" in the normal state.  [For the Ti case, for example, their electrical resistance is about 10 Ohms per square, very far from the "quantum of resistance" $h/2e^{2}\approx 12.9~\mathrm{k}\Omega$.  This suggests that the metal is electrically pretty homogeneous (as opposed to being a bunch of loosely connected grains).  Similarly, the inferred resistivity (around 30 $\mu\Omega$-cm) is comparable to expectations for bulk Ti (which is actually a bit surprising to me).]
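As a back-of-the-envelope check that those two numbers hang together (the ~30 nm film thickness below is my assumption for illustration, not a number from the paper):

```python
# Sanity check: resistivity = sheet resistance * film thickness.
# The ~30 nm thickness is an assumed illustrative value, not from the paper.
R_sheet = 10.0            # Ohms per square, quoted for the Ti wires
t = 30e-9                 # assumed film thickness, m
rho_uohm_cm = R_sheet * t * 1e8   # 1 Ohm·m = 1e8 micro-Ohm·cm
print(f"rho ≈ {rho_uohm_cm:.0f} micro-Ohm·cm")  # ~30, matching the quoted value
```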

The really surprising thing is that the application of a large voltage between a back-gate (the underlying Si wafer, separated from the wire by 300 nm of SiO2) and the wire can suppress the superconductivity, dialing the critical current all the way down to zero.  This effect happens symmetrically with either polarity of bias voltage.

This is potentially exciting because having some field-effect way to manipulate superconductivity could let you do very neat things with superconducting circuitry.

The reason this is startling is that ordinarily field-effect modulation of metals has almost no effect.  In a typical metal, a dc electric field only penetrates a fraction of an atomic diameter into the material - the gas of mobile electrons in the metal has such a high density that it can shift itself by a fraction of a nanometer and self-consistently screen out that electric field.

Here, the authors argue (in a model in the supplemental information that I need to read carefully) that the relevant physical scale for the gating of the superconductivity is, empirically, the London penetration depth, a much longer spatial scale (hundreds of nm in typical low temperature superconductors).    I need to think about whether this makes sense to me physically.
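To put scales on that contrast, here's a free-electron sketch (copper-like numbers of my choosing, not values from the paper; real superconductors tend to have even longer $\lambda_{L}$, since the relevant density is the superfluid density, not the full carrier density):

```python
import math

# Rough comparison of the two screening scales: Thomas-Fermi screening of
# a dc electric field vs the London penetration depth for magnetic fields.
eps0, mu0 = 8.8541878128e-12, 4e-7 * math.pi
e, m_e = 1.602176634e-19, 9.1093837015e-31

n = 8.5e28            # assumed carrier density, m^-3 (copper-like)
E_F = 7.0 * e         # assumed Fermi energy, ~7 eV

g_EF = 3 * n / (2 * E_F)                  # free-electron DOS at E_F
lam_TF = math.sqrt(eps0 / (e**2 * g_EF))  # Thomas-Fermi screening length
lam_L = math.sqrt(m_e / (mu0 * n * e**2)) # London penetration depth

print(f"lambda_TF ~ {lam_TF*1e9:.3f} nm")  # a fraction of an atomic diameter
print(f"lambda_L  ~ {lam_L*1e9:.0f} nm")   # hundreds of times longer
```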

## Sunday, July 01, 2018

### Book review: The Secret Life of Science

I recently received a copy of The Secret Life of Science:  How It Really Works and Why It Matters, by Jeremy Baumberg of Cambridge University.  The book is meant to provide a look at the "science ecosystem", and it seems to be unique, at least in my experience.  From the perspective of a practitioner but with a wider eye, Prof. Baumberg tries to explain much of the modern scientific enterprise - what is modern science (with an emphasis on "simplifiers" [often reductionists] vs. "constructors" [closer to engineers, building new syntheses] - this is rather similar to Narayanamurti's take described here), who are the different stakeholders, publication as currency, scientific conferences, science publicizing and reporting, how funding decisions happen, career paths and competition, etc.

I haven't seen anyone else try to spell out, for a non-scientist audience, how the scientific enterprise fits together from its many parts, and that alone makes this book important - it would be great if someone could get some policy-makers to read it.  I agree with many of the book's main observations:

• The actual scientific enterprise is complicated (as pointed out repeatedly with one particular busy figure that recurs throughout the text), with a bunch of stakeholders, some cooperating, some competing, and we've arrived at the present situation through a complex, emergent history of market forces, not some global optimization of how best to allocate resources or how to choose topics.
• Scientific publishing is pretty bizarre, functioning to disseminate knowledge as well as a way of keeping score; peer review is annoying in many ways but serves a valuable purpose; for-profit publications can distort people's behaviors because of the prestige associated with some.
• Conferences are also pretty weird, serving purposes (networking, researcher presentation training) that are not really what used to be the point (putting out and debating new results).
• Science journalism is difficult, with far more science than can be covered, squeezed resources for real journalism, incentives for PR that can oversimplify or amp up claims and controversy, etc.
The book ends with some observations and suggestions from the author's perspective on changes that might improve the system, with a realist recognition that big changes will be hard.

It would be very interesting to get the perspective of someone in a very different scientific field (e.g., biochemistry) for their take on Prof. Baumberg's presentation.  My own research interests align closely with his, so it's hard for me to judge whether his point of view on some matters matches up well with other fields.  (I do wonder about some of the numbers that appear.  Has the number of scientists in France really grown by a factor of three since 1980?  And by a factor of five in Spain over that time?)

If you know someone who is interested in a solid take on the state of play in (largely academic) science in the West today, this is a very good place to start.

## Monday, June 25, 2018

### Don't mince words, John Horgan. What do you really think?

In his review of Sabine Hossenfelder's new book for Scientific American, John Horgan begins by saying:
Does anyone who follows physics doubt it is in trouble? When I say physics, I don’t mean applied physics, material science or what Murray-Gell-Mann called “squalid-state physics.” I mean physics at its grandest, the effort to figure out reality. Where did the universe come from? What is it made of? What laws govern its behavior? And how probable is the universe? Are we here through sheer luck, or was our existence somehow inevitable?
Wow.  Way to back-handedly imply that condensed matter physics is not grand or truly important.  The frustrating thing is that Horgan knows perfectly well that condensed matter physics has been the root of multiple profound ideas (Higgs mechanism, anyone?), as well as shaping basically all of the technology he used to write that review.  He goes out of his way here to make clear that he doesn't think any of that is really interesting.  Why do that as a rhetorical device?

## Sunday, June 24, 2018

### There is no such thing as a rigid solid.

How's that for a provocative, click-bait headline?

More than any other branch of physics, condensed matter physics highlights universality, the idea that some properties crop up repeatedly, in many physical systems, independent of and despite the differences in the microscopic building blocks of the system.  One example that affects you pretty much all the time is the emergence of rigid solids from the microscopic building blocks that are atoms and molecules.  You may never think about it consciously, but mechanically rigid solids make up much of our environment - our buildings, our furniture, our roads, even ourselves.

A quartz crystal is an example of a rigid solid. By solid, I mean that the material maintains its own shape without confining walls, and by rigid, I mean that it “resists deformation”. Deforming the crystal – stretching it, squeezing it, bending it – involves trying to move some piece of the crystal relative to some other piece of the crystal. If you try to do this, it might flex a little bit, but the crystal pushes back on you. The ratio between the pressure (say) that you apply and the percentage change in the crystal’s size is called an elastic modulus, and it’s a measure of rigidity. Diamond has a big elastic modulus, as does steel. Rubber has a comparatively small elastic modulus – it’s squishier. Rigidity implies solidity. If a hunk of material has rigidity, it can withstand forces acting on it, like gravity.  (Note that I'm already assuming that atoms can't pass through each other, which turns out to be a macroscopic manifestation of quantum mechanics, even though people rarely think of it that way.  I've discussed this recently here.)

Take away the walls of an aquarium, and the rectangular “block” of water in there can’t resist gravity and splooshes all over the table. In free fall as in the International Space Station, a blob of water will pull itself into a sphere, as it doesn’t have the rigidity to resist surface tension, the tendency of a material to minimize its surface area.

Rigidity is an emergent property. One silicon or oxygen atom isn’t rigid, but somehow, when you put enough of them together under the right conditions, you get a mechanically solid object. A glass, in contrast to a crystal, looks very different if you zoom in to the atomic scale. In the case of silicon dioxide, while the detailed bonding of each silicon to two oxygens looks similar to the case of quartz, there is no long-range pattern to how the atoms are arranged. Indeed, while it would be incredibly difficult to do experimentally, if you could take a snapshot of molten silica glass at the atomic scale, from the positions of the atoms alone, you wouldn’t be able to tell whether it was molten or solidified.   However, despite the structural similarities to a liquid, solid glass is mechanically rigid. In fact, some glasses are actually far more stiff than crystalline solids – metallic glasses are highly prized for exactly this property – despite having a microscopic structure that looks like a liquid.

Somehow, these two systems (quartz and silica glass), with very different detailed structures, have very similar mechanical properties on large scales. Maybe this example isn't too convincing. After all, the basic building blocks in both of those materials are really the same. However, mechanical rigidity shows up all the time in materials with comparatively high densities. Water ice is rigid. The bumper on your car is rigid. The interior of a hard-boiled egg is rigid. Concrete is rigid. A block of wood is rigid. A vacuum-packed bag of ground espresso-roasted coffee is rigid. Somehow, mechanical rigidity is a common collective fate of many-particle systems. So where does it originate? What conditions are necessary to have rigidity?

Interestingly, this question remains a subject of active research.  Despite my click-bait headline, it sure looks like there are materials that are mechanically rigid.  However, it can be shown mathematically (!) that "equilibrium states of matter that break spontaneously translational invariance...flow if even an infinitesimal stress is applied".  That is, take some crystal or glass, where the constituent particles are sitting in well-defined locations (thus "breaking translational invariance"), apply even a tiny bit of shear, and the material will flow.  The particles in the bulk of such a material can always rearrange by a tiny amount in a way that propagates out to displace the surface of the material, which is really what we mean by "flow".  How do we reconcile this statement with what we see every day - for example, that touching your kitchen table does not cause its surface to flow like a liquid?

Some of this is the kind of hair-splitting/no-true-Scotsman definitional stuff that shows up sometimes in theoretical physics.  A true equilibrium state would last forever.  To say that "equilibrium states of matter that break spontaneously translational invariance" are unstable under stress just means that the final, flowed rearrangement of atoms is energetically favored once stress is applied, but doesn't say anything about how long it takes the system to get there.

We see other examples of this kind of thing in condensed matter and statistical physics.  It is possible to superheat liquid water above its boiling point.  Under those conditions, the gas phase is thermodynamically favored, but to get from the homogeneous liquid to the gas requires creating a blob of gas, with an accompanying liquid/gas interface that is energetically expensive.  The result is an "activation barrier".
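You can estimate the size of that activation barrier with classical nucleation theory.  Here's a rough sketch, using textbook-ish values for water near boiling (both numbers are illustrative assumptions of mine, not from any particular reference):

```python
import math

# Classical nucleation theory: a gas bubble of radius r in superheated liquid
# costs surface energy 4*pi*r^2*gamma but gains volume energy (4/3)*pi*r^3*dP.
# The barrier peaks at the critical radius r* = 2*gamma/dP.

gamma = 0.057   # N/m, surface tension of water near 100 C (approximate)
dP = 4.2e4      # Pa, rough pressure imbalance driving bubble growth for
                # water superheated ~10 C above boiling (illustrative)

r_star = 2 * gamma / dP                           # critical bubble radius, m
barrier = (16 * math.pi / 3) * gamma**3 / dP**2   # barrier height, J

print(f"critical radius ~ {r_star*1e6:.1f} micron")
print(f"barrier ~ {barrier:.2e} J")
```

With these numbers the barrier comes out around $10^{-12}$ J, utterly dwarfing $k_{\mathrm{B}}T \sim 5 \times 10^{-21}$ J at these temperatures - which is why the metastable superheated liquid can hang around until a stray nucleation site helps it over the hump.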

It turns out that this appears to be the right way to think about solids.  Solids only appear rigid on any useful timescale because the timescale to create defects and reach the flowed state is very, very long.  A recent discussion of this is here, with some really good references, in a paper that appeared just this spring in the Proceedings of the National Academy of Sciences of the US.  An earlier work (a PRL) trying to quantify how this all works is here, if you're interested.

One could say that this is a bit silly - obviously we know empirically that there are rigid materials, and any analysis saying they don't exist has to be off the mark somehow.  However, in science, particularly physics, this kind of study, where observation and some fairly well-defined model seem to contradict each other, is precisely where we tend to gain a lot of insight.  (This is something we have to be better at explaining to non-scientists....)

## Monday, June 18, 2018

### Scientific American - what the heck is this?

Today, Scientific American ran this on their blogs page.  This article calls to mind weird mysticism stuff like crystal energy, homeopathy, and tree waves (a reference that attendees of mid-1990s APS meetings might get), and would not be out of place in Omni Magazine in about 1979.

I’ve written before about SciAm and their blogs.  My offer still stands, if they ever want a condensed matter/nano blog that I promise won’t verge into hype or pseudoscience.

## Saturday, June 16, 2018

### Water at the nanoscale

One reason the nanoscale is home to some interesting physics and chemistry is that the nanometer is a typical scale for molecules.   When the size of your system becomes comparable to the molecular scale, you can reasonably expect something to happen, in the sense that it should no longer be possible to ignore the fact that your system is actually built out of molecules.

Consider water as an example.  Water molecules have a finite size (on the order of 0.2 nm between the hydrogens), a definite bent shape, and a bit of an electric dipole moment (the oxygen has a slight excess of electron density and the hydrogens have a slight deficit).  In the liquid state, the water molecules are basically jostling around and have a typical intermolecular distance comparable to the size of the molecule.  If you confine water down to a nanoscale volume, you know at some point the finite size and interactions (steric and otherwise) between the water molecules have to matter.  For example, squeeze water down to a few molecular layers between solid boundaries, and it starts to act more like an elastic solid than a viscous fluid.

Another consequence of this confinement in water can be seen in measurements of its dielectric properties - how charge inside it rearranges in response to an external electric field.  In bulk liquid water, there are two components to the dielectric response.  The electronic clouds in the individual molecules can polarize a bit, and the molecules themselves (with their electric dipole moments) can reorient.  This latter contribution ends up being very important for dc electric fields, and as a result the dc relative dielectric permittivity of water, $\kappa$, is about 80 (compared with 1 for the vacuum, and around 3.9 for SiO$_{2}$).  At the nanoscale, however, the motion of the water molecules should be hindered, especially near a surface.  That should depress $\kappa$ for nanoconfined water.

In a preprint on the arxiv this week, that is exactly what is found.  Using a clever design, water is confined in nanoscale channels defined by a graphite floor, hexagonal boron nitride (hBN) walls, and a hBN roof.  A conductive atomic force microscope tip is used as a top electrode, the graphite is used as a bottom electrode, and the investigators are able to see results consistent with $\kappa$ falling to roughly 2.1 for layers about 0.6-0.9 nm thick adjacent to the channel floor and ceiling.  The result is neat, and it should provide a very interesting test case for attempts to model these confinement effects computationally.
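One way to see why those interfacial layers matter so much: dielectrically, the layers act like capacitors in series, so thin low-$\kappa$ regions can dominate a much thicker channel.  A minimal sketch (the layer thickness and $\kappa$ values are taken loosely from the numbers above; the 10 nm total channel height is my own illustrative choice):

```python
# Effective dielectric constant of stacked layers (capacitors in series):
# d_total / kappa_eff = sum(d_i / kappa_i)

layers = [
    (0.7e-9, 2.1),   # interfacial water near the floor (~0.7 nm, kappa ~ 2.1)
    (8.6e-9, 80.0),  # bulk-like water in the middle (illustrative thickness)
    (0.7e-9, 2.1),   # interfacial water near the ceiling
]

d_total = sum(d for d, _ in layers)
kappa_eff = d_total / sum(d / k for d, k in layers)
print(f"effective kappa of a {d_total*1e9:.0f} nm channel: {kappa_eff:.1f}")
```

With those numbers the effective $\kappa$ comes out near 13 - the two sub-nanometer "dead layers" drag the whole 10 nm channel far below the bulk value of 80.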

## Friday, June 08, 2018

### What are steric interactions?

When I first started reading chemistry papers, one piece of jargon jumped out at me:  "steric hindrance", which is an abstruse way of saying that you can't force pieces of molecules (atoms or groups of atoms) to pass through each other.  In physics jargon, they have a "hard core repulsion".  If you want to describe the potential energy of two atoms as you try to squeeze one into the volume of the other, you get a term that blows up very rapidly, like $1/r^{12}$, where $r$ is the distance between the nuclei.  Basically, you can do pretty well treating atoms like impenetrable spheres with diameters given by their outer electronic orbitals.  Indeed, Robert Hooke went so far as to infer, from the existence of faceted crystals, that matter is built from effectively impenetrable little spherical atoms.
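That $1/r^{12}$ term is the repulsive half of the familiar Lennard-Jones pair potential.  Here's a minimal sketch using commonly quoted argon parameters (my choice of example system, not anything from a specific paper):

```python
def lennard_jones(r, epsilon=1.65e-21, sigma=3.4e-10):
    """Lennard-Jones pair potential in joules; defaults are rough argon values.

    The 1/r^12 term is the steric hard-core repulsion; the 1/r^6 term is
    the attractive van der Waals tail.
    """
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 ** 2 - sr6)

sigma = 3.4e-10
# At r = sigma the potential crosses zero; squeeze the atoms just 10%
# closer and the repulsive wall rises steeply.
print(lennard_jones(sigma))         # zero by construction
print(lennard_jones(0.9 * sigma))   # large and positive: the "hard core"
```

The point of the sketch is the asymmetry: pulling the atoms apart costs at most $\epsilon$, while pushing them together by a modest fraction of $\sigma$ costs many times that.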

It's a common thing in popular treatments of physics to point out that atoms are "mostly empty space".  With hydrogen, for example, if you said that the proton was the size of a pea, then the 1s orbital (describing the spatial probability distribution for finding the point-like electron) would be around 250 m in radius.  So, if atoms are such big, puffy objects, then why can't two atoms overlap in real space?  It's not just the electrostatic repulsion, since each atom is overall neutral.

The answer is (once again) the Pauli exclusion principle (PEP) and the fact that electrons obey Fermi statistics.  Sometimes the PEP is stated in a mathematically formal way that can obscure its profound consequences.  For our purposes, the bottom line is:  It is apparently a fundamental property of the universe that you can't stick two identical fermions (including having the same spin) in the same quantum state.  At the risk of getting technical, this can mean a particular atomic orbital, or more generally it can be argued to mean the same little "cell" of volume $h^{3}$ in r-p phase space.  It just can't happen.

If you try to force it, what happens instead?  In practice, to get two carbon atoms, say, to overlap in real space, you would have to make the electrons in one of the atoms leave their ordinary orbitals and make transitions to states with higher kinetic energies.  That energy has to come from somewhere - you have to do work and supply that energy to squeeze two atoms into the volume of one.  Books have been written about this.

Leaving aside for a moment the question of why rigid solids are rigid, it's pretty neat to realize that the physics principle that keeps you from falling through your chair or the floor is really the same principle that holds up white dwarf stars.

## Thursday, May 31, 2018

### Coming attractions and short items

Here are a few items of interest.

I am planning to write a couple of posts about why solids are rigid, and in the course of thinking about this, I made a couple of discoveries:

• When you google "why are solids rigid?", you find a large number of websites that all have exactly the same wording:  "Solids are rigid because the intermolecular forces of attraction that are present in solids are very strong. The constituent particles of solids cannot move from their positions they can only vibrate from their mean positions."  Note that this is (1) not correct, and (2) also not much of an answer.  It seems that the wording is popular because it's an answer that has appeared on the IIT entrance examinations in India.
• I came across an absolutely wonderful paper by Victor Weisskopf, "Of Atoms, Mountains, and Stars:  A Study in Qualitative Physics", Science 187, 605-612 (1975).  Here is the only link I could find that might be reachable without a subscription.  It is a great example of "thinking like a physicist", showing how far one can get by starting from simple ideas and using order-of-magnitude estimates.  This seems like something that should be required reading of most undergrad physics majors, and more besides.
In politics-of-science news:

• There is an amendment pending in the US Congress on the big annual defense bill that has the potential to penalize US researchers who have received any (presently not well-defined) resources from Chinese talent recruitment efforts.  (Russia, Iran, and North Korea are also mentioned, but they're irrelevant here, since they are not running such programs.)  The amendment would allow the DOD to deny these folks research funding.  The idea seems to be that such people are perceived by some as a risk in terms of taking DOD-relevant knowledge and giving China an economic or strategic benefit.  Many major US research universities have been encouraging closer ties with China and Chinese universities in the last 15 years.  Makes you wonder how many people would be affected.
• The present US administration, according to AP, is apparently about to put in place (June 11?) new limitations on Chinese graduate student visas, for those working in STEM (and especially in fields mentioned explicitly in the Chinese government's big economic plan).   It would make relevant student visas one year in duration.  Given that the current visa renewal process can already barely keep up with the demand, it seems like this could become an enormous headache.  I could go on at length about why I think this is a bad idea.  Given that it's just AP that is reporting this so far, perhaps it won't happen or will be more narrowly construed.  We'll see.

## Tuesday, May 29, 2018

### What is tunneling?

I first learned about quantum tunneling from science fiction, specifically a short story by Larry Niven.  The idea is often tossed out there as one of those "quantum is weird and almost magical!" concepts.  It is surely far from our daily experience.

Imagine a car of mass $m$ rolling along a road toward a small hill.  Let’s make the car and the road ideal – we’re not going to worry about friction or drag from the air or anything like that.   You know from everyday experience that the car will roll up the hill and slow down.  This ideal car’s total energy is conserved, and it has (conventionally) two pieces, the kinetic energy $p^2/2m$ (where $p$ is the momentum; here I’m leaving out the rotational contribution of the tires), and the gravitational potential energy, $mgz$, where $g$ is the gravitational acceleration and $z$ is the height of the center of mass above some reference level.  As the car goes up, so does its potential energy, meaning its kinetic energy has to fall.  When the kinetic energy hits zero, the car stops momentarily before starting to roll backward down the hill.  The spot where the car stops is called a classical turning point.  Without some additional contribution to the energy, you won’t ever find the car on the other side of that hill, because the region beyond the turning point is “classically forbidden”.  We’d either have to sacrifice conservation of energy, or the car would have to have negative kinetic energy to exist in the forbidden region.  Since the kinetic piece is proportional to $p^2$, to have negative kinetic energy would require $p$ to be imaginary (!).
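Just to make the turning point concrete: set the kinetic energy to zero in the conservation equation, $\frac{1}{2}mv_{0}^{2} = mgz$, and solve for the height.  A quick sketch (the initial speed is my own illustrative number):

```python
g = 9.81     # m/s^2, gravitational acceleration
v0 = 10.0    # m/s, speed at the bottom of the hill (illustrative)

# (1/2) m v0^2 = m g z  =>  the mass cancels, leaving
z_turn = v0**2 / (2 * g)   # height of the classical turning point
print(f"turning point height: {z_turn:.2f} m")
```

Note that the mass drops out entirely - classically, a marble and a truck with the same initial speed stop at the same height.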

However, we know that the car is really a quantum object, built out of a huge number (more than $10^{27}$) of other quantum objects.  The spatial locations of quantum objects can be described with “wavefunctions”, and you need to know a couple of things about these to get a feel for tunneling.  For the ideal case of a free particle with a definite momentum, the wavefunction really looks like a wave with a wavelength $h/p$, where $h$ is Planck’s constant.  Because a wave extends throughout all space, the probability of finding the ideal free particle anywhere is equal, in agreement with the oft-quoted uncertainty principle.
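That $h/p$ wavelength is what makes the classical/quantum divide quantitative.  A short numerical comparison of the car and a not-too-fast electron (both speeds are illustrative choices of mine):

```python
h = 6.626e-34    # Planck's constant, J s
m_e = 9.109e-31  # electron mass, kg

# de Broglie wavelength: lambda = h / p = h / (m v)
lam_car = h / (1000.0 * 10.0)      # 1000 kg car at 10 m/s
lam_electron = h / (m_e * 1.0e6)   # electron at 10^6 m/s (a few eV)

print(f"car:      {lam_car:.1e} m")
print(f"electron: {lam_electron:.1e} m")
```

The car's wavelength comes out around $10^{-37}$ m, absurdly smaller than anything in its environment, while the electron's is a comfortable fraction of a nanometer - comparable to atomic spacings, which is why electrons in solids act so wavelike.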

Here’s the essential piece of physics:  In a classically forbidden region, the wavefunction decays exponentially with distance (mathematically equivalent to the wave having an imaginary wavelength), but it can’t change abruptly.  That means that if you solve the problem of a quantum particle incident on a finite (in energy and spatial size) barrier from one side, there is always some probability that the particle will be found on the far side of the classically forbidden region.

This means that it’s technically possible for the car to “tunnel” through the hillside and end up on the downslope.  I would not recommend this as a transportation strategy, though, because that’s incredibly unlikely.  The more massive the particle, and the more forbidden the region (that is, the more negative the classical kinetic energy of the particle would have to be in the barrier), the faster the exponential decay of the probability of getting through.  For a 1000 kg car trying to tunnel through a 10 cm high speed bump 1 m long, the probability is around $\exp(-2.7 \times 10^{20})$.  That kind of number is why quantum tunneling is not an obvious part of your daily existence.  For something much less massive, like an electron, the tunneling probability from, say, a metal tip to a metal surface decays by around a factor of $e^2$ for every 0.1 nm of additional tip-surface separation.  It’s that exponential sensitivity to geometry that makes scanning tunneling microscopy possible.
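If you want to check that factor-of-$e^{2}$ rule of thumb, the decay constant in the forbidden region is $\kappa = \sqrt{2m\phi}/\hbar$, with $\phi$ the barrier height.  Here's a sketch assuming a typical metal work function of around 4 eV (my assumption, not a number from any particular experiment):

```python
import math

hbar = 1.055e-34        # J s
m_e = 9.109e-31         # kg
phi = 4.0 * 1.602e-19   # barrier height ~ 4 eV (typical metal work function)

# In the forbidden region the wavefunction decays as exp(-kappa * x),
# so the tunneling *probability* decays as exp(-2 * kappa * x).
kappa = math.sqrt(2 * m_e * phi) / hbar

dx = 0.1e-9   # 0.1 nm of extra tip-surface separation
suppression = math.exp(2 * kappa * dx)
print(f"probability drops by a factor of ~{suppression:.1f} per 0.1 nm")
```

That comes out close to $e^{2} \approx 7.4$, matching the rule of thumb quoted above.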

However, quantum tunneling is very much a part of your life.  Protons can tunnel through the repulsion of their positive charges to bind to each other – that’s what powers the sun.  Electrons routinely tunnel in zillions of chemical reactions going on in your body right now, as well as in the photosynthesis process that drives most plant life.

On a more technological note, tunneling is a key ingredient in the physics of flash memory.  Flash is based on field-effect transistors, and as I described the other day, transistors are switched on or off depending on the voltage applied to a gate electrode.  Flash storage uses transistors with a “floating gate”, a conductive island surrounded by insulating material, some kind of glassy oxide.  Charge can be parked on that gate or removed from it, and depending on the amount of charge there, the underlying transistor channel is either conductive or not.   How does charge get on or off the island?  By a flavor of tunneling called field emission.  The insulator around the floating gate functions as a potential energy barrier for electrons.  If a big electric field is applied via some other electrodes, the barrier’s shape is distorted, allowing electrons to tunnel through it efficiently.  This is a tricky aspect of flash design.  The barrier has to be high/thick enough that charge stuck on the floating gate can stay there a very long time - you wouldn’t want the bits in your SSD or your flash drive losing their status on the timescale of months, right? - but ideally tunable enough that the data can be rewritten quickly, with low error rates, at low voltages.
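To get a feel for how the field reshapes the barrier: a large field $E$ tilts a flat barrier of height $\phi$ into a triangle, so the distance an electron actually has to tunnel shrinks to roughly $\phi/(eE)$.  A rough sketch (the 3.1 eV value is a commonly quoted Si/SiO2 barrier height; the fields are illustrative choices of mine):

```python
phi_eV = 3.1   # barrier height in eV (roughly the Si/SiO2 band offset)

# Under a field E, the triangular barrier has width phi/(e*E); since phi
# is expressed in eV, the width is simply phi_eV / E in meters.
for E in (1e8, 5e8, 1e9):   # V/m, illustrative write fields
    width_nm = phi_eV / E * 1e9
    print(f"E = {E:.0e} V/m  ->  tunneling distance ~ {width_nm:.1f} nm")
```

Given the exponential sensitivity of tunneling to barrier thickness, cranking the field up by a factor of ten shortens the barrier tenfold and boosts the tunneling rate enormously - that's the design knob that separates "retains data for years" from "rewrites in microseconds".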

## Monday, May 21, 2018

### Physics around you: the field-effect transistor

While dark matter and quantum gravity routinely get enormous play in the media, you are surrounded every day by physics that enables near miraculous technology.  Paramount among these is the field-effect transistor (FET).   That wikipedia link is actually pretty good, btw.  While I've written before about specific issues regarding FETs (here, here, here), I hadn't said much about the general device.

The idea of the FET is to use a third electrode, a gate, to control the flow of current through a channel between two other electrodes, the source and drain.  The electric field from the gate controls the mobile charge in the channel - this is the field effect.   You can imagine doing this in vacuum, with a hot filament to be a source of electrons, a second electrode (at a positive voltage relative to the source) to collect the electrons, and an intervening grid as the gate.  Implementing this in the solid state was proposed more than once (Lilienfeld, Heil) before it was done successfully.

Where is the physics?  There is a ton of physics involved in how these systems actually work.  For example, it's all well and good to talk about "free" electrons moving around in solids in analogy to electrons flying in space in a vacuum tube, but it's far from obvious that you should be able to do this.   Solids are built out of atoms and are inherently quantum mechanical, with particular allowed energies and electronic states picked out by quantum mechanics and symmetries.  The fact that allowed electronic states in periodic solids ("Bloch waves") resemble "free" electron states (plane waves, in the quantum context) is very deep and comes from the underlying symmetry of the material.  [Note that you can have transistors even when the charge carriers should be treated as hopping from site to site - that's how many organic FETs work.]  It's the Pauli principle that allows us to worry only about the highest energy electronic states and not have to worry about, e.g., the electrons deep down in the ion cores of the atoms in the material.  Still, you do have to make sure there aren't a bunch of electronic states at energies where you don't want them - these are the traps and surface states that made FETs hard to get working.  The combo of the Pauli principle and electrostatic screening is why we can largely ignore the electron-electron repulsion in the materials, but still use the gate electrode's electric field to affect the channel.  FETs have also been great tools for learning new physics, as in the quantum Hall effect.

What's the big deal?  When you have a switch that is either open or closed, it's easy to realize that you can do binary-based computing with a bunch of them.  The integrated manufacturing of the FET has changed the world.  It's one of the few examples of a truly disruptive technology in the last 100 years.  The device you're using to read this probably contains several billion (!) transistors, and they pretty much all work, for years at a time.  FETs are the underlying technology for both regular and flash memory.  FETs are what drive the pixels in the flat panel display you're viewing.  Truly, they are so ubiquitous that they've become invisible.

## Wednesday, May 16, 2018

### "Active learning" or "research-based teaching" in upper level courses

This past spring Carl Wieman came to Rice's Center for Teaching Excellence, to give us this talk, about improving science pedagogy.  (This video shows a very similar talk given at UC Riverside.) He is very passionate about this, and argues strongly that making teaching more of an active, inquiry-based or research-question-based experience is generally a big improvement over traditional lecture.  I've written previously that I think this is a complicated issue.

Does anyone in my readership have experience applying this approach to upper-level courses?  For a specific question relevant to my own teaching, have any of you taught or taken a statistical physics course presented in this mode?  I gather that PHYS 403 at UBC and PHYS 170 at Stanford have been done this way.  I'd be interested in learning about how that was implemented and how it worked - please feel free to post in comments or email me.

(Now that the semester is over and some of my reviewing responsibilities are more under control, the frequency of posting should go back up.)

## Wednesday, May 02, 2018

### Short items

A couple of points of interest:
• Bill Gates apparently turned down an offer from the Trump administration to be presidential science advisor.  It's unclear if this was a serious offer or an off-hand remark.   Either way it underscores what a trivialized and minimal role OSTP appears to be playing in the present administration.  It's a fact of modern existence that there are many intersections between public policy and the need for technical understanding of scientific issues (in the broad sense that includes engineering).   While an engaged and highly functional OSTP doesn't guarantee good policy (because science is only one of many factors that drive decision-making), the US is doing itself a disservice by running a skeleton crew in that office.
• Phil Anderson has posted a document (not a paper submitted for publication anywhere, but more of an essay) on the arxiv with the sombre title, "Four last conjectures".  These concern: (1) the true ground state of solids made of atoms that are hard-core bosons, suggesting that at sufficiently low temperatures one could have "non-classical rotational inertia" - not exactly a supersolid, but similar in spirit; (2) a discussion of a liquid phase of (magnetic) vortices in superconductors in the context of heat transport; (3) an exposition of his take on high temperature superconductivity (the "hidden Fermi liquid"), where one can have non-Fermi-liquid scattering rates for longitudinal resistivity, yet Fermi liquid-like scattering rates for scattering in the Hall effect; and (4) a speculation about an alternative explanation (that, in my view, seems ill-conceived) for the accelerating expansion of the universe.   The document is vintage Anderson, and there's a melancholy subtext given that he's 94 years old and is clearly conscious that he likely won't be with us much longer.
• On a lighter note, a paper (link goes to publicly accessible version) came out a couple of weeks ago explaining how yarn works - that is, how the frictional interactions between a zillion constituent short fibers lead to thread acting like a mechanically robust object.  Here is a nice write-up.

## Sunday, April 29, 2018

### What is a quantum point contact? What is quantized conductance?

When we teach basic electrical phenomena to high school or college physics students, we usually talk about Ohm's Law, in the form $V = I R$, where $V$ is the voltage (how much effort it takes to push charge, in some sense), $I$ is the current (the flow rate of the charge), and $R$ is the resistance.  This simple linear relationship is a good first guess about how you might expect conduction to work.  Often we know the voltage and want to find the current, so we write $I = V/R$, and the conductance is defined as $G \equiv 1/R$, so $I = G V$.

In a liquid flow analogy, voltage is like the net pressure across some pipe, current is like the flow rate of liquid through the pipe, and the conductance characterizes how the pipe limits the flow of liquid.  For a given pressure difference between the ends of the pipe, there are two ways to lower the flow rate of the liquid:  make the pipe longer, and make the pipe narrower.  The same idea applies to electrical conductance of some given material - making the material longer or narrower lowers $G$ (increases $R$).
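In symbols, $G = \sigma A/L$, where $\sigma$ is the conductivity of the material, $A$ the cross-sectional area, and $L$ the length.  A quick sketch for a copper wire (the dimensions are my own illustrative choices):

```python
sigma_cu = 5.96e7   # S/m, conductivity of copper at room temperature

def conductance(sigma, area, length):
    """G = sigma * A / L: longer or narrower means lower conductance."""
    return sigma * area / length

A = 1e-6   # m^2 (a 1 mm^2 cross-section)
L = 1.0    # m
G = conductance(sigma_cu, A, L)
print(f"G = {G:.1f} S, i.e. R = {1e3/G:.1f} milliohm")

# Doubling the length or halving the area each cut G in half, like the pipe.
print(conductance(sigma_cu, A, 2 * L) / G)   # 0.5
print(conductance(sigma_cu, A / 2, L) / G)   # 0.5
```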

Does anything special happen when the conductance becomes small?  What does "small" mean here - small compared to what?  (Physicists love dimensionless ratios, where you compare some quantity of interest with some characteristic scale - see here and here.  I thought I'd written a long post about this before, but according to google I haven't; something to do in the future.)  It turns out that there is a combination of fundamental constants that has the same units as conductance:  $e^2/h$, where $e$ is the electronic charge and $h$ is Planck's constant.  Interestingly, evaluating this numerically gives a characteristic conductance of about 1/(26 k$\Omega$).   The fact that $h$ is in there tells you that this conductance scale is important if quantum effects are relevant to your system (not when you're in the classical limit of, say, a macroscopic, long spool of wire that happens to have $R \sim 26~\mathrm{k}\Omega$).
 Example of a quantum point contact, from here.

Conductance quantization can happen when you make the conductance approach this characteristic magnitude by having the conductor be very narrow, comparable to the spatial spread of the quantum mechanical electrons.  We know electrons are really quantum objects, described by wavefunctions, and those wavefunctions can have some characteristic spatial scale depending on the electronic energy and how tightly the electron is confined.  You can then think of the connection between the two conductors like a waveguide, so that only a handful of electronic "modes" or "channels" (compatible with the confinement of the electrons and what the wavefunctions are required to do) actually link the two conductors.  (See figure.) Each spatial electronic mode that connects between the two sides has a conductance of $G_{0} \equiv 2e^{2}/h$, where the 2 comes from the two possible spin states of the electron.
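Plugging in numbers makes the scales concrete (a quick sanity check of the 26 k$\Omega$ figure and the per-channel conductance):

```python
e = 1.602e-19   # C, electron charge
h = 6.626e-34   # J s, Planck's constant

G_quantum = e**2 / h       # characteristic conductance scale e^2/h
R_quantum = 1 / G_quantum
G0 = 2 * G_quantum         # conductance per spin-degenerate channel, 2e^2/h

print(f"e^2/h       = {G_quantum:.3e} S  ->  {R_quantum/1e3:.1f} kOhm")
print(f"G0 = 2e^2/h = {G0:.3e} S  ->  {1/(G0*1e3):.1f} kOhm per channel")
```

So each fully open spin-degenerate channel contributes about 77.5 $\mu$S, i.e. a resistance near 12.9 k$\Omega$, and a point contact passing $N$ such channels has $G \approx N G_{0}$.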

Conductance quantization in a 2d electron system, from here.
A junction like this in a semiconductor system is called a quantum point contact.  In semiconductor devices you can use gate electrodes to confine the electrons, and when the conductance reaches the appropriate spatial scale you can see steps in the conductance near integer multiples of $G_{0}$, the conductance quantum.  A famous example of this is shown in the figure here.

In metals, because the density of (mobile) electrons is very high, the effective wavelength of the electrons is much shorter, comparable to the size of an atom, a fraction of a nanometer.  This means that constrictions between pieces of metal have to reach the atomic scale to see anything like conductance quantization.  This is, indeed, observed.

For a very readable review of all of this, see this Physics Today article by two of the experimental progenitors of this.  Quantized conductance shows up in other situations when only a countable number of electronic states are actually doing the job of carrying current (like along the edges of systems in the quantum Hall regime, or along the edges of 2d topological materials, or in carbon nanotubes).

Note 1:  It's really the "confinement so that only a few allowed waves can pass" that gives the quantization here.  That means that other confined wave systems can show the analog of this quantization.  This is explained in the PT article above, and an example is conduction of heat due to phonons.
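For a single ballistic channel the analogous quantum of thermal conductance is $g_{Q} = \pi^{2}k_{\mathrm{B}}^{2}T/3h$, independent of whether the carriers are phonons, electrons, or photons.  A quick evaluation (standard SI constants):

```python
# Quantum of thermal conductance per ballistic channel:
# g_Q = pi^2 * k_B^2 * T / (3 h), the same for any carrier statistics.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck's constant, J s

def g_Q(T):
    """Thermal conductance quantum at temperature T, in W/K per channel."""
    return math.pi**2 * k_B**2 * T / (3 * h)

print(f"g_Q(1 K) = {g_Q(1.0) * 1e12:.3f} pW/K")  # roughly 1 pW/K at 1 kelvin
```

Note that, unlike $G_{0}$, this quantum is temperature dependent.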

Note 2:  What about when $G$ becomes comparable to $G_{0}$ in a long, but quantum mechanically coherent system?  That's a story for another time, and gets into the whole scaling theory of localization.

## Wednesday, April 25, 2018

### Postdoc opportunity

While I have already spammed a number of faculty colleagues about this, I wanted to point out a competitive, endowed postdoctoral opportunity at Rice, made possible through the Smalley-Curl Institute.  (I am interested in hiring a postdoc in general, but the endowed opportunity is a nice one to pursue as well.)

The endowed program is the J Evans Attwell Welch Postdoctoral Fellowship.  This is a competitive, two-year fellowship that additionally includes travel funds and resources for research supplies/minor equipment.  The application deadline is this coming July 1, 2018, with an anticipated start date around September 2018.

I'd be delighted to work with someone on an application for this, and I am looking for a great postdoc in any case.  The best applicant would be a strong student who is interested in working on (i) noise and transport measurements in spin-orbit systems including 2d TIs; (ii) nanoscale studies (incl noise and transport) of correlated materials and non-Fermi liquids; and/or (iii) combined electronic and optical studies down to the molecular scale via plasmonic structures.  If you're a student finishing up and are interested, please contact me, and if you're a faculty member working with possible candidates, please feel free to point out this opportunity.

## Saturday, April 21, 2018

### The Einstein-de Haas effect

Angular momentum in classical physics is a well-defined quantity tied to the motion of mass about some axis - its value (magnitude and direction) is tied to a particular choice of coordinates.  When we think about some extended object spinning around an axis with some angular velocity $\mathbf{\omega}$, we can define the angular momentum associated with that rotation by $\mathbf{I}\cdot \mathbf{\omega}$, where $\mathbf{I}$ is the "inertia tensor" that keeps track of how mass is distributed in space around the axis.  In general, conservation of angular momentum in isolated systems is a consequence of the rotational symmetry of the laws of physics (Noether's theorem).
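As a small worked example of $\mathbf{L} = \mathbf{I}\cdot \mathbf{\omega}$ (the dumbbell geometry here is just an illustrative choice): building the inertia tensor for two point masses on a tilted rod shows that $\mathbf{L}$ need not even be parallel to $\mathbf{\omega}$.

```python
# L = I . omega for a rigid body of point masses.  A dumbbell tilted away
# from the rotation axis gives an L that is not parallel to omega.
import numpy as np

def inertia_tensor(masses, positions):
    """I_ij = sum_k m_k * (|r_k|^2 delta_ij - r_ki r_kj)."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, np.asarray(positions, dtype=float)):
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

# Two unit masses on a rod tilted in the x-z plane, spinning about z:
masses = [1.0, 1.0]
positions = [(1.0, 0.0, 1.0), (-1.0, 0.0, -1.0)]
omega = np.array([0.0, 0.0, 2.0])

L = inertia_tensor(masses, positions) @ omega
print(L)  # -> [-4.  0.  4.]: L picks up an x-component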

The idea of quantum particles possessing some kind of intrinsic angular momentum is a pretty weird one, but it turns out to be necessary to understand a huge amount of physics.  That intrinsic angular momentum is called "spin", but it's *not* correct to think of it as resulting from the particle being an extended physical object actually spinning.  As I learned from reading The Story of Spin (cool book by Tomonaga, though I found it a bit impenetrable toward the end - more on that below), Kronig first suggested that electrons might have intrinsic angular momentum and used the intuitive idea of spinning to describe it; Pauli pushed back very hard on Kronig about the idea that there could be some physical rotational motion involved - the intrinsic angular momentum is some constant on the order of $\hbar$.  If it were the usual mechanical motion, dimensionally this would have to go something like $m r v$, where $m$ is the mass, $r$ is the size of the particle, and $v$ is a speed; as $r$ gets small, even approaching a scale we know to be much larger than any intrinsic size of the electron, $v$ would exceed $c$, the speed of light.  Pauli pounded on Kronig hard enough that Kronig didn't publish his ideas, and later that same year Goudsmit and Uhlenbeck independently proposed intrinsic angular momentum, calling it "spin".
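Pauli's dimensional objection is easy to make quantitative.  Taking the classical electron radius as a (generous) size scale, the surface speed needed to give angular momentum $\sim \hbar$ works out to:

```python
# Pauli's objection, numerically: if spin ~ hbar came from a classically
# spinning ball of radius r, the required speed v ~ hbar / (m_e * r) exceeds
# c even for r as large as the classical electron radius (~2.8e-15 m).
hbar = 1.054571817e-34       # reduced Planck constant, J s
m_e = 9.1093837015e-31       # electron mass, kg
c = 2.99792458e8             # speed of light, m/s
r_classical = 2.8179403e-15  # classical electron radius, m

v = hbar / (m_e * r_classical)
print(f"v / c ~ {v / c:.0f}")  # ~137 times the speed of light
```

(The factor of $\sim$137 is no accident: $\hbar/(m_{\mathrm{e}} c r_{\mathrm{e}}) = 1/\alpha$, the inverse fine structure constant.)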

Because of its weird intrinsic nature, when we teach undergrads about spin, we often don't emphasize that it is just as much angular momentum as the classical mechanical kind.  If you somehow do something to a system containing a bunch of spins, that can have mechanical consequences.  I've written about one example before, a thought experiment described by Feynman and approximately implemented in micromechanical devices.  A related concept is the Einstein-de Haas effect, where flipping spins again exerts some kind of mechanical torque.  A new preprint on the arxiv shows a cool implementation of this, using ultrafast laser pulses to demagnetize a ferromagnetic material.  The sudden change of the spin angular momentum of the electrons results, through coupling to the atoms, in the launching of a mechanical shear wave as the angular momentum is dumped into the lattice.  The wave is then detected by time-resolved x-ray measurements.  Pretty cool!

(The part of Tomonaga's book that was hard for me to appreciate deals with the spin-statistics theorem, the quantum field theory statement that fermions have spins that are half-integer multiples of $\hbar$ while bosons have spins that are integer multiples.  There is a claim that even Feynman could not come up with a good undergrad-level explanation of the argument.  Have any of my readers ever come across a clear, accessible hand-wave proof of the spin-statistics theorem?)

## Tuesday, April 10, 2018

### Chapman Lecture: Using Topology to Build a Better Qubit

Yesterday, we hosted Prof. Charlie Marcus of the Niels Bohr Institute and Microsoft for our annual Chapman Lecture on Nanotechnology.   He gave a very fun, engaging talk about the story of Majorana fermions as a possible platform for topological quantum computing.

Charlie used quipu to introduce the idea of topology as a way to store information, and made a very nice heuristic argument about how topology encodes information in a global rather than a local sense.  That is, if you have a big, loose tangle of string on the ground, and you do local measurements of little bits of the string, you really can't tell whether it's actually tied in a knot (topologically nontrivial) or just lying in a heap.  This hints at the idea that local interactions (measurements, perturbations) can't necessarily disrupt the topological state of a quantum system.

The talk was given a bit of a historical narrative flow, pointing out that while there had been a lot of breathless prose written about the long search for Majoranas, etc., in fact the timeline was actually rather compressed.  In 2001, Alexei Kitaev proposed a possible way of creating effective Majorana fermions, quasiparticles that encode topological information, using a 1d wire coupled to a (non-existent) p-wave superconductor.  In this scheme, Majorana quasiparticles localize at the ends of the wire.  You can get some feel for the concept by imagining strings leading off from the ends of the wire, say downward through the substrate and off into space.  If you could sweep the Majoranas around each other somehow, the history of that wrapping would be encoded in the braiding of the strings, and even if the quasiparticles end up back where they started, there is a difference in the braiding depending on the history of the motion of the quasiparticles.  Theorists got very excited about the braiding concept and published lots of ideas, including how one might do quantum computing operations by this kind of braiding.

In 2010, other theorists pointed out that it should be possible to implement the Majoranas in much more accessible materials - InAs semiconductor nanowires and conventional s-wave superconductors, for example.  One experimental feature that could be sought would be a peak in the conductance of a superconductor/nanowire/superconductor device, right at zero voltage, that should turn on above a threshold magnetic field (in the plane of the wire).  That's really what jumpstarted the experimental action.  Fast forward a couple of years, and you have a paper that got a ton of attention, reporting the appearance of such a peak.  I pointed out at the time that that peak alone is not proof, but it's suggestive.  You have to be very careful, though, because other physics can mimic some aspects of the expected Majorana signature in the data.

A big advance was the recent success in growing epitaxial Al on the InAs wires.  Having atomically precise lattice registry between the semiconductor and the aluminum appears to improve the contacts significantly.   Note that this can be done in 2d as well, opening up the possibility of many investigations into proximity-induced superconductivity in gate-able semiconductor devices.  This has enabled some borrowing of techniques from other quantum computing approaches (transmons).

The main take-aways from the talk:

• Experimental progress has actually been quite rapid, once a realistic material system was identified.
• While many things point to these platforms as really having Majorana quasiparticles, the true unambiguous proof in the form of some demonstration of non-Abelian statistics hasn't happened yet.  Getting close.
• Like many solid-state endeavors before, the true enabling advances here have come from high quality materials growth.
• If this does work, scale-up may actually be do-able, since this does rely on planar semiconductor fabrication for the most part, and topological qubits may have a better ratio of physical qubits to logical qubits than other approaches.
• Charlie Marcus remains an energetic, engaging speaker, something I first learned when I worked as the TA for the class he was teaching 24 years ago.

## Thursday, March 29, 2018

### E-beam evaporators - recommendations?

Condensed matter experimentalists often need to prepare nanoscale thickness films of a variety of materials.  One approach is to use "physical vapor deposition" - in a good vacuum, a material of interest is heated to the point where it has some nonzero vapor pressure, and that vapor collides with a substrate of interest and sticks, building up the film.  One way to heat the source material is with a high-voltage electron beam, the kind of thing that used to be used at lower intensities to excite the phosphors in old-style cathode ray tube displays.

My Edwards Auto306 4-pocket e-beam system is really starting to show its age.  It's been a great workhorse for quick things that don't require the cleanroom.  Does anyone out there have recommendations for a system (as inexpensive as possible of course) with similar capabilities, or a vendor you like for such things?

## Wednesday, March 28, 2018

### Discussions of quantum mechanics

In a sure sign that I'm getting old, I find myself tempted to read some of the many articles, books, and discussions about interpretations of quantum mechanics that seem to be flaring up in number these days.  (Older physicists seem to return to this topic, I think because there tends to be a lingering feeling of dissatisfaction with just about every way of thinking about the issue.)

To be clear, the reason people refer to interpretations of quantum mechanics is that, in general, there is no disagreement about the results of well-defined calculations, and no observed disagreement between such calculations and experiments.

There are deep ontological questions here about what physicists mean by something (say the wavefunction) being "real".  There are also fascinating history-of-science stories that capture the imagination, with characters like Einstein criticizing Bohr about whether God plays dice, Schroedinger and his cat, Wigner and his friend, Hugh Everett and his many worlds, etc.  Three of the central physics questions are:
• Quantum systems can be in superpositions.  We don't see macroscopic quantum superpositions, even though "measuring" devices should also be described using quantum mechanics.  Is there some kind of physical process at work that collapses superpositions that is not described by the ordinary Schroedinger equation?
• What picks out the classical states that we see?
• Is the Born rule a consequence of some underlying principle, or is that just the way things are?
Unfortunately real-life is very busy right now, but I wanted to collect some recent links and some relevant papers in one place, if people are interested.

From Peter Woit's blog, I gleaned these links:
Going down the google scholar rabbit hole, I also found these:
• This paper has a clean explication of the question of whether decoherence due to interactions with large numbers of degrees of freedom really solves the outstanding issues.
• This is a great review by Zurek about decoherence.
• This is a subsequent review looking at these issues.
• And this is a review of "collapse theories", attempts to modify quantum mechanics beyond Schroedinger time evolution to kill superpositions.
No time to read all of these, unfortunately.