Saturday, August 18, 2018

Phonons and negative mass

There has been quite a bit of media attention given to this paper, which looks at whether sound waves involve the transport of mass (and therefore whether they should interact with gravitational fields and produce gravitational fields of their own). 

The authors conclude that, under certain circumstances, sound wavepackets (phonons, in the limit where we really think about quantized excitations) rise in a downward-directed gravitational field.  Considered as a distinct object, such a wavepacket carries a certain amount of "invariant mass" as it propagates along, and that amount turns out to be negative.

Now, most people familiar with the physics of conventional sound would say, hang on, how do sound waves in some medium transport any mass at all?  That is, we think of ordinary sound in a gas like air as pressure waves, with compressions and rarefactions, regions of alternating enhanced and decreased density (and pressure).  In the limit of small amplitudes (the "linear regime"), the density variations in the wave are mathematically small, so that we can treat \(\delta \rho/\rho_{0}\) as a small perturbation, where \(\rho_{0}\) is the average density and \(\delta \rho\) is the change.  Linear regime sound usually doesn't transport mass.  The same is true for sound in the linear regime in a conventional liquid or a solid. 
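(A quick back-of-the-envelope way to see this - my own sketch, not an argument from the paper: the instantaneous mass flux in the wave is \(\rho v\), where \(v\) is the local fluid velocity.  Averaged over a cycle, \(\langle \rho v \rangle = \rho_{0}\langle v \rangle + \langle \delta\rho \, v \rangle\); the first term vanishes because \(v\) is purely oscillatory, and the second term is a product of two first-order quantities, so any net mass transport can only appear at second order in the wave amplitude.  That's why you have to go beyond the strictly linear regime to get any effect at all.)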

In the paper, the authors do an analysis where they find that the mass transported by sound is proportional, with a negative sign, to \(dc_{\mathrm{s}}/dP\), how the speed of sound \(c_{\mathrm{s}}\) changes with the pressure of the medium.  (Note that for an ideal gas, \(c_{\mathrm{s}} = \sqrt{\gamma k_{\mathrm{B}}T/m}\), where \(\gamma\) is the ratio of heat capacities at constant pressure and volume, \(m\) is the mass of a gas molecule, and \(T\) is the temperature.  There is no explicit pressure dependence, and sound is "massless" in that case.)
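To spell out that last parenthetical: for an ideal gas, \(c_{\mathrm{s}} = \sqrt{\gamma P/\rho}\), and the ideal gas law \(P = \rho k_{\mathrm{B}}T/m\) means the explicit pressure and density dependence cancels, leaving \(c_{\mathrm{s}} = \sqrt{\gamma k_{\mathrm{B}}T/m}\).  Changing the pressure at fixed temperature doesn't change the sound speed at all, so in that sense \(dc_{\mathrm{s}}/dP = 0\).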

I admit that I don't follow all the details, but it seems to me that the authors have found that for a nonlinear medium such that \(dc_{\mathrm{s}}/dP > 0\), a sound wavepacket effectively carries a bit less mass than the equivalent volume of the undisturbed surrounding medium.  That means that such wavepackets experience buoyancy (they "fall up" in a downward-directed gravitational field), and produce an effectively negative gravitational potential compared to that of the background medium.  It's a neat result, and I can see where there could be circumstances where it might be important (e.g. sound waves in neutron stars, where the density is very high and you could imagine astrophysical consequences).  That being said, perhaps someone in the comments can explain why this is being portrayed as so surprising - I may be missing something.

Tuesday, August 14, 2018

APS March Meeting 2019 - DCMP invited symposia, DMP focused topics

A reminder to my condensed matter colleagues who go to the APS March Meeting:  We know the quality of the meeting depends strongly on getting good invited talks, the 30+6 minute talks that either come all in a group (an "invited session" or "invited symposium") or are sprinkled individually through the contributed sessions.

Now is the time to put together nominations for these things.  The more high quality nominations, the better the content of the meeting.

The APS Division of Condensed Matter Physics is seeking nominations for invited symposia.  See here for the details.  The online submission deadline is August 24th!

Similarly, the APS Division of Materials Physics is seeking nominations for invited talks as part of their Focus Topic sessions.  The list of Focus Topics is here.  The online submission deadline for these is August 29th. 


Sunday, August 12, 2018

What is (dielectric) polarization?

This post is an indirect follow-on from here, and was spawned by a request that I discuss the "modern theory of polarization".  I have to say, this has been very educational for me.   Before I try to give a very simple explanation of the issues, those interested in some more technical meat should look here, or here, or here, or at this nice blog post.  

Colloquially, an electric dipole is an overall neutral object with some separation between its positive and negative charge.  A great example is a water molecule, which has a little bit of excess negative charge on the oxygen atom, and a little deficit of electrons on the hydrogen atoms.  

Once we pick an origin for our coordinate system, we can define the electric dipole moment of some charge distribution as \(\mathbf{p} \equiv \int \mathbf{r}\rho(\mathbf{r}) d^{3}\mathbf{r}\), where \(\rho\) is the local charge density.  Often we care about the induced dipole, the dipole moment that is produced when some object like a molecule has its charges rearrange due to an applied electric field.  In that case, \(\mathbf{p}_{\mathrm{ind}} = \alpha \cdot \mathbf{E}\), where \(\alpha\) is the polarizability.  (In general \(\alpha\) is a tensor, because \(\mathbf{p}\) and \(\mathbf{E}\) don't have to point in the same direction.)
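If it helps to see the definition in action, here's a minimal numerical sketch (mine, with made-up point charges loosely standing in for a water-like molecule, not real water parameters) of computing \(\mathbf{p}\) for a discrete set of charges:

```python
import numpy as np

# Dipole moment of a discrete charge distribution: p = sum_i q_i r_i,
# the point-charge version of the integral definition above.
# The numbers below are illustrative stand-ins, not real water parameters.
charges = np.array([-2.0, +1.0, +1.0])        # units of e: one "O-like" and two "H-like" sites
positions = np.array([[0.00, 0.00, 0.0],      # "oxygen" at the origin
                      [0.76, 0.59, 0.0],      # "hydrogen" 1 (positions in angstroms)
                      [-0.76, 0.59, 0.0]])    # "hydrogen" 2

p = (charges[:, None] * positions).sum(axis=0)
print("dipole moment (e*angstrom):", p)
```

Because the total charge is zero, the answer doesn't depend on where you put the origin, which is why the dipole moment of a neutral molecule is a well-defined quantity.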

If we stick a slab of some insulator between metal plates and apply a voltage across the plates to generate an electric field, we learn in first-year undergrad physics that the charges inside the insulator slightly redistribute themselves - the material polarizes.  If we imagine dividing the material into little chunks, we can define the polarization \(\mathbf{P}\) as the electric dipole moment per unit volume.  For a solid, we can pick some volume and define \(\mathbf{P} = \mathbf{p}/V\), where \(V\) is the volume over which the integral is done for calculating \(\mathbf{p}\).

We can go farther than that.  If we say that the insulator is built up out of a bunch of little polarizable objects, each with polarizability \(\alpha\), then we can do a self-consistent calculation, where we let each polarizable object see both the externally applied electric field and the electric field from its neighboring dipoles.  Then we can solve for \(\mathbf{P}\) and therefore the relative dielectric constant in terms of \(\alpha\).  The result is called the Clausius-Mossotti relation.
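For reference, in SI units, with \(n\) polarizable units per unit volume and each obeying \(\mathbf{p} = \alpha \mathbf{E}_{\mathrm{local}}\), that relation reads \( (\epsilon_{\mathrm{r}} - 1)/(\epsilon_{\mathrm{r}} + 2) = n\alpha/(3\epsilon_{0}) \), so a measurement of the macroscopic dielectric constant \(\epsilon_{\mathrm{r}}\) tells you something about the microscopic polarizability.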

In crystalline solids, however, it turns out that there is a serious problem!  As explained clearly here, because the charge in a crystal is distributed periodically in space, the definition of \(\mathbf{P}\) given above is ambiguous because there are many ways to define the "unit cell" over which the integral is performed.  This is a big deal.  

The "modern theory of polarization" resolves this problem, and actually involves the electronic Berry Phase.  First, it's important to remember that polarization is really defined experimentally by how much charge flows when that capacitor described above has the voltage applied across it.  So, the problem we're really trying to solve is, find the integrated current that flows when an electric field is ramped up to some value across a periodic solid.  We can find that by adding up all the contributions of the different electronic states that are labeled by wavevectors \(\mathbf{k}\).  For each \(\mathbf{k}\) in a given band, there is a contribution that has to do with how the energy varies with \(\mathbf{k}\) (that's the part that looks roughly like a classical velocity), and there's a second piece that has to do with how the actual electronic wavefunctions vary with \(\mathbf{k}\), which is proportional to the Berry curvature.   If you add up all the \(\mathbf{k}\) contributions over the filled electronic states in the insulator, the first terms all cancel out, but the second terms don't, and actually give you a well-defined amount of charge.   

Bottom line:  In an insulating crystal, the actual polarization that shows up in an applied electric field comes from how the electronic states vary with \(\mathbf{k}\) within the filled bands.  This is a really surprising and deep result, and it was only realized in the 1990s.  It's pretty neat that even "simple" things like crystalline insulators can still contain surprises (in this case, one that foreshadowed the whole topological insulator boom). 

Thursday, August 09, 2018

Hydraulic jump: New insights into a very old phenomenon

Ever since I learned about them, I thought that hydraulic jumps were cool.  As I wrote here, a hydraulic jump is an analog of a standing shockwave.  The key dimensionless parameter for a shockwave in a gas is the Mach number, the ratio between the fluid speed \(v\) and the local speed of sound, \(c_{\mathrm{s}}\).   The gas goes from supersonic (\(\mathrm{Ma} > 1\)) on one side of the shock to subsonic (\(\mathrm{Ma} < 1\)) on the other side.

For a looong time, the standard analysis of hydraulic jumps assumed that the relevant dimensionless number here was the Froude number, the ratio of fluid speed to the speed of (gravitationally driven) shallow water waves, \(\sqrt{g h}\), where \(g\) is the gravitational acceleration and \(h\) is the thickness of the liquid (say on the thin side of the jump).  That's basically correct for macroscopic jumps that you might see in a canal or in my previous example.

However, a group from Cambridge University has shown that this is not the right way to think about the kind of hydraulic jump you see in your sink when the stream of water from the faucet hits the basin.  (Sorry that I can't find a non-pay link to the paper.)  They show this conclusively by the very simple, direct method of producing hydraulic jumps by shooting water streams horizontally onto a wall, and vertically onto a "ceiling".  The fact that hydraulic jumps look the same in all these cases clearly shows that gravity can't be playing the dominant role for these jumps.  Instead, the correct analysis has to worry about not just gravity but also surface tension.  They do a general treatment (which is quite elegant and understandable to fluid mechanics-literate undergrads) and find that the condition for a hydraulic jump to form is now \(\mathrm{We}^{-1} + \mathrm{Fr}^{-2} = 1\), where \(\mathrm{Fr} \sim v/\sqrt{g h}\) as usual, and the Weber number \(\mathrm{We} \sim \rho v^{2} h/\gamma\), where \(\rho\) is the fluid density and \(\gamma\) is the surface tension.   The authors do a convincing analysis of experimental data with this model, and it works well.  I think it's very cool that we can still get new insights into phenomena, and this is an example understandable at the undergrad level where some textbook treatments will literally have to be rewritten.
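Just to make the criterion concrete, here's a little numerical sketch (my own toy estimate - I've dropped the order-one prefactors in \(\mathrm{Fr}\) and \(\mathrm{We}\), so don't take the numbers too seriously) of evaluating \(\mathrm{We}^{-1} + \mathrm{Fr}^{-2}\) for a thin water film:

```python
g = 9.81         # m/s^2
rho = 1000.0     # kg/m^3, water
gamma = 0.072    # N/m, surface tension of water near room temperature

def jump_parameter(v, h):
    """Evaluate 1/We + 1/Fr^2 for a film of speed v (m/s) and thickness h (m).
    The criterion in the paper says the jump sits where this quantity reaches 1.
    Order-one prefactors are dropped here, so treat this as a scaling estimate only."""
    Fr_sq = v**2 / (g * h)            # Froude number squared
    We = rho * v**2 * h / gamma       # Weber number
    return 1.0 / We + 1.0 / Fr_sq

# A fast, thin kitchen-sink-like film: the surface-tension (1/We) term
# dominates the gravity (1/Fr^2) term by more than an order of magnitude.
print(jump_parameter(v=0.5, h=0.5e-3))
```

As the film spreads out, slows down, and thickens, this quantity grows toward 1, which is (roughly) where the jump shows up.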

Tuesday, August 07, 2018

Faculty position at Rice - experimental atomic/molecular/optical

Faculty Position in Experimental Atomic/Molecular/Optical Physics at Rice University

The Department of Physics and Astronomy at Rice University in Houston, TX (http://physics.rice.edu/) invites applications for a tenure-track faculty position in experimental atomic, molecular, and optical physics.  The Department expects to make an appointment at the assistant professor level. Applicants should have an outstanding research record and recognizable potential for excellence in teaching and mentoring at the undergraduate and graduate levels. The successful candidate is expected to establish a distinguished, externally funded research program and support the educational and service missions of the Department and University.

Applicants must have a PhD in physics or related field, and they should submit the following: (1) cover letter; (2) curriculum vitae; (3) research statement; (4) three publications; (5) teaching statement; and (6) the names, professional affiliations, and email addresses of three references. For full details and to apply, please visit: http://jobs.rice.edu/postings/16140. The review of applications will begin November 1, 2018, but all those received by December 1, 2018 will be assured full consideration. The appointment is expected to start in July 2019.  Further inquiries should be directed to the chair of the search committee, Prof. Thomas C. Killian (killian@rice.edu).

Rice University is an Equal Opportunity Employer with commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability or protected veteran status.

Tuesday, July 31, 2018

What is Berry phase?

On the road to discussing the Modern Theory of Polarization (e.g., pdf), it's necessary to talk about Berry phase - here, unlike many uses of the word on this blog, "phase" actually refers to a phase angle, as in a complex number \(e^{i\phi}\).   The Berry phase, named for Michael Berry, is a so-called geometric phase, in that the value of the phase depends on the "space" itself and the trajectory the system takes.  (For reference, the original paper is here (pdf), a nice talk about this is here, and reviews on how this shows up in electronic properties are here and here.)

A similar-in-spirit angle shows up in the problem of "parallel transport" (jargony wiki) along curved surfaces.  Imagine taking a walk while holding an arrow, initially pointed east, say.  You walk, always keeping the arrow pointed in the local direction of east, in the closed path shown at right.  On a flat surface, when you get back to your starting point, the arrow is pointing in the same direction it did initially.   On a curved (say spherical) surface, though, something different has happened.  As shown, when you get back to your starting point, the arrow has rotated from its initial position, despite the fact that you always kept it pointed in the local east direction.  The angle of rotation is a geometric phase analogous to Berry phase.  The issue is that the local definition of "east" varies over the surface of the sphere.   In more mathematical language, the basis vectors (that point in the local cardinal directions) vary in space.  If you want to keep track of how the arrow vector changes along the path, you have to account for both the changing of the numerical components of the vector along each basis direction, and the change in the basis vectors themselves.  This kind of thing crops up in general relativity, where it is calculated using Christoffel symbols.

So what about the actual Berry phase?  To deal with this with a minimum of math, it's best to use some of the language that Feynman employed in his popular book QED.   The actual math is laid out here.  In Feynman's language, we can picture the quantum mechanical phase associated with some quantum state as the hand of a stopwatch, winding around.  For a state \(| \psi\rangle \) (an energy eigenstate, one of the "energy levels" of our system) with energy \(E\), we learn in quantum mechanics that the phase accumulates at a rate of \(E/\hbar\), so that the phase angle after some time \(t\) is given by \(\Delta \phi = Et/\hbar\).   Now suppose we were able to mess about with our system, so that energy levels varied as a function of some tuning parameter \(\lambda\).  For example, maybe we can dial around an externally applied electric field by applying a voltage to some capacitor plates.  If we do this slowly (adiabatically), then the system always stays in its instantaneous version of that state with instantaneous energy \(E(\lambda)\).  So, in the Feynman watch picture, sometimes the stopwatch is winding fast, sometimes it's winding slow, depending on the instantaneous value of \(E(\lambda)\).  You might think that the phase that would be racked up would just be found by adding up the little contributions, \(\Delta \phi = \int (E(\lambda(t))/\hbar) dt\).

However, this misses something!  In the parallel transport problem above, to get the right total answer about how the vector rotates globally we have to keep track of how the basis vectors vary along the path.  Here, it turns out that we have to keep track of how the state itself, \(| \psi \rangle\), varies locally with \(\lambda\).  To stretch the stopwatch analogy, imagine that the hand of the stopwatch can also gain or lose time along the way because the positioning of the numbers on the watch face (determined by \(| \psi \rangle \) ) is actually also varying along the path.

[Mathematically, that second contribution to the phase adds up to be \( i\int \langle \psi(\lambda)| \partial_{\lambda}| \psi(\lambda) \rangle d \lambda\).  Generally \(\lambda\) could be a vectorial thing with multiple components, so that \(\partial_{\lambda}\) would be a gradient operator with respect to \(\lambda\), and the integral would be a line integral along some trajectory of \(\lambda\).  It turns out that if you want to, you can define the integrand to be an effective vector potential called the Berry connection.  The curl of that vector potential is some effective magnetic field, called the Berry curvature.  Then the line integral above, if it's around some closed path in \(\lambda\), is equal to the flux of that effective magnetic field through a surface bounded by the closed path, and the accumulated Berry phase around that closed path is then analogous to the Aharonov-Bohm phase.]
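If you like seeing numbers come out, here's a small computational sketch (mine, not from any of the linked references) of the classic textbook example: a spin-1/2 in a magnetic field whose direction is dragged slowly around a cone.  The ground state should pick up a Berry phase equal to minus one half of the solid angle swept out, and the discretized product-of-overlaps below is a standard way to extract a Berry phase numerically.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state(theta, phi):
    """Ground state of H = -(n . sigma) for a unit field direction n(theta, phi)."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    evals, evecs = np.linalg.eigh(-(n[0] * sx + n[1] * sy + n[2] * sz))
    return evecs[:, 0]                       # eigenvector of the lowest eigenvalue

theta = 0.7                                  # half-angle of the cone traced by the field
phis = np.linspace(0.0, 2.0 * np.pi, 400)    # discretized loop in the control parameter
states = [ground_state(theta, p) for p in phis]

# Gauge-invariant discretization of the Berry phase: multiply the overlaps
# between neighboring states around the closed loop and take minus the argument.
overlaps = [np.vdot(states[i], states[i + 1]) for i in range(len(states) - 1)]
overlaps.append(np.vdot(states[-1], states[0]))      # close the loop
berry_phase = -np.angle(np.prod(overlaps))

solid_angle = 2.0 * np.pi * (1.0 - np.cos(theta))
print("numerical Berry phase:", berry_phase)         # should be close to -solid_angle/2
print("minus half solid angle:", -solid_angle / 2.0)
```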

Why is any of this of interest in condensed matter?

Well, one approach to worrying about the electronic properties of conducting (crystalline) materials is to think about starting off some electronic wavepacket, initially centered around some particular Bloch state at an initial (crystal) momentum \(\mathbf{p} = \hbar \mathbf{k}\).  Then we let that wavepacket propagate around, following the rules of "semiclassical dynamics" - the idea that there is some Fermi velocity \((1/\hbar)\partial E(\mathbf{k})/\partial \mathbf{k}\) (related to how the wavepacket racks up phase as it propagates in space), and we basically write down \(\mathbf{F} = m\mathbf{a}\) using electric and magnetic fields.  Here, there is the usual phase that adds up from the wavepacket propagating in space (the Fermi velocity piece), but there can be an additional Berry phase which here comes from how the Bloch states actually vary throughout \(\mathbf{k}\)-space.  That can be written in terms of an "anomalous velocity" (anomalous because it's not from the usual Fermi velocity picture), and can lead to things like the anomalous Hall effect and a bunch of other measurable consequences, including topological fun.
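Written out (schematically - I'm suppressing band indices and not being careful about sign conventions), those semiclassical equations of motion for a wavepacket in a single band look like \( \dot{\mathbf{r}} = (1/\hbar)\,\partial E(\mathbf{k})/\partial \mathbf{k} \; - \; \dot{\mathbf{k}} \times \mathbf{\Omega}(\mathbf{k}) \) and \( \hbar \dot{\mathbf{k}} = -e\left(\mathbf{E} + \dot{\mathbf{r}} \times \mathbf{B}\right) \), where \(\mathbf{E}\) and \(\mathbf{B}\) are the applied fields and \(\mathbf{\Omega}(\mathbf{k})\) is the Berry curvature of the band.  The \(\dot{\mathbf{k}} \times \mathbf{\Omega}\) piece is the anomalous velocity, and it's what feeds the anomalous Hall effect mentioned above.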

Monday, July 23, 2018

Math, beauty, and condensed matter physics

There is a lot of discussion these days about the beauty of mathematics in physics, and whether some ideas about mathematical elegance have led the high energy theory community down the wrong path.  And yet, despite that, high energy theory still seems like a very popular professed interest of graduating physics majors.  This has led me to identify what I think is another sociological challenge to be overcome by condensed matter in the broader consciousness. 

Physics is all about using mathematics to model the world around us, and experiments are one way we find or constrain the mathematical rules that govern the universe and everything in it.  When we are taught math in school, we end up being strongly biased by the methods we learn, so that we are trained to like exact analytical solutions and feel uncomfortable with approximations.  You remember back when you took algebra, and you had to solve quadratic equations?  We were taught how to factor polynomials as one way of finding the solution, and somehow if the solution didn’t work out to x being an integer, something felt wrong – the problems we’d been solving up to that point had integer solutions, and it was tempting to label problems that didn’t fit that mold as not really nicely solvable.  Then you were taught the quadratic formula, with its square root, and you eventually made your peace with the idea of irrational numbers, and later imaginary numbers.  In more advanced high school algebra courses, students run across so-called transcendental equations, like \( (x-3)e^{x} + 3 = 0\).  There is no clean, algorithmic way to get an exact analytic solution to this.  Instead it has to be solved numerically, using some computational approach that can give you an approximate answer, good to as many digits as you care to grind through on your computer. 
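For what it's worth, here's what "solve it numerically" looks like in practice - a minimal sketch using scipy's standard bracketing root finder:

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    # The transcendental equation from the text: (x - 3) e^x + 3 = 0
    return (x - 3.0) * np.exp(x) + 3.0

# x = 0 happens to satisfy this one exactly, but the other root has no tidy
# closed form; bracket it and let the machine grind out the digits.
root = brentq(f, 1.0, 5.0)
print(root)   # roughly 2.82, to as many digits as you care about
```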

The same sort of thing happens again when we learn calculus.  When we are taught how to differentiate and integrate, we are taught the definitions of those operations (roughly speaking, slope of a function and area under a function, respectively) and algorithmic rules to apply to comparatively simple functions.  There are tricks, variable changes, and substitutions, but in the end, we are first taught how to solve problems “in closed form” (with solutions comprising functions that are common enough to have defined names, like \(\sin\) and \(\cos\) on the simple end, and more specialized examples like error functions and gamma functions on the more exotic side).  However, it turns out that there are many, many integrals that don’t have closed form solutions, and instead can only be solved approximately, through numerical methods.  The exact same situation arises in solving differential equations.  Legendre, Laguerre, and Hermite polynomials, Bessel and Hankel functions, and my all-time favorite, the confluent hypergeometric function, can crop up, but generically, if you want to solve a complicated boundary value problem, you probably need to use numerical methods rather than analytic solutions.  It can take years for people to become comfortable with the idea that numerical solutions have the same legitimacy as analytical solutions.
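Likewise for integrals: something like \(\int_{0}^{1} e^{-x^{2}} dx\) has no elementary antiderivative (its antiderivative is, up to a constant factor, the error function mentioned above), but a couple of lines of numerics hand you the value to machine precision.  A sketch:

```python
import numpy as np
from scipy.integrate import quad

value, abs_error_estimate = quad(lambda x: np.exp(-x**2), 0.0, 1.0)
print(value, abs_error_estimate)   # about 0.7468, with a tiny error estimate
```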

I think condensed matter suffers from a similar culturally acquired bias.  Somehow there is an unspoken impression that high energy is clean and neat, with inherent mathematical elegance, thanks in part to (1) great marketing by high energy theorists, and (2) the fact that it deals with things that seem like they should be simple - fundamental particles and the vacuum.  At the same time, even high school chemistry students pick up pretty quickly that we actually can't solve many-electron quantum mechanics problems without a lot of approximations.  Condensed matter seems like it must be messy.  Our training, with its emphasis on exact analytic results, doesn't lay the groundwork for people to be receptive to condensed matter, even when it contains a lot of mathematical elegance and sometimes emergent exactitude.