Wednesday, August 29, 2018

Unidentified superconducting objects, again.

I've had a number of people ask me why I haven't written anything about the recent news and resulting kerfuffle (here, here, and here for example) in the media regarding possible high temperature superconductivity in Au/Ag nanoparticles.  The fact is, I've written before about unidentified superconducting objects (also see here), and so I didn't have much to say.  I exchanged some email with the IISc PI back in late July, and his responses to my questions are in line with what others have said.  Extraordinary claims require extraordinary evidence.  The longer this goes on without independent confirmation, the more likely it is that this will fade away.

Various discussions I've had about this have, however, spurred me to try writing down my memories and lessons learned from the Schön scandal, before the inevitable passage of time wipes more of the details from my brain.  I'm a bit conflicted about this - it was 18 years ago, there's not much point in rehashing the past, and Eugenie Reich's book covered this very well.  At the same time, it's clear that many students today have never even heard of Schön, and I feel like I learned some valuable lessons from the whole situation.  It'll take some time to see if I am happy with how this turns out before I post some or all of it.  Update:  I've got a draft done, and it's too long for a blog post - around 9000 words.  I'll probably convert it to pdf when I'm happy with it and link to it somehow.

Friday, August 24, 2018

What is a Tomonaga-Luttinger Liquid?

I've written in the past (say here and here) about how we think about the electrons in a conventional metal as forming a Fermi liquid.  (If the electrons didn't interact at all, then colloquially we call the system a Fermi gas.  The word "liquid" is shorthand for saying that the interactions between the particles that make up the liquid are important.  You can picture a classical liquid as a bunch of molecules bopping around, experiencing some kind of short-ranged repulsion so that they can't overlap, but with some attraction that favors the molecules bumping up against each other - the typical interparticle separation is comparable to the particle size in that classical case.)  People like Lev Landau and others had the insight that essential features of the Fermi gas (the Pauli principle being hugely important, for example) tend to remain robust even if one thinks about "dialing up" interactions between the electrons.

A consequence of this is that in a typical metal, while the details may change, the lowest energy excitations of the Fermi liquid (the electronic quasiparticles) should be very much like the excitations of the Fermi gas - free electrons.  Fermi liquid quasiparticles each carry the electronic amount of charge, and they each carry "spin", angular momentum that, together with their charge, makes them act like tiny little magnets.  These quasiparticles move at a typical speed called the Fermi velocity.  This all works even though the like-charge electrons repel each other.
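To get a sense of scale for the Fermi velocity mentioned above, here is a minimal sketch using the free-electron model; the conduction-electron density of copper is an assumed textbook value, and the free-electron expressions are a rough approximation, not the full band-structure story.

```python
import math

# Free-electron estimate of the Fermi velocity.
# Assumed: conduction electron density of copper, n ~ 8.47e28 m^-3 (textbook value).
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837e-31     # electron mass, kg
n = 8.47e28             # conduction electron density, m^-3

k_F = (3 * math.pi**2 * n) ** (1 / 3)  # Fermi wavevector, m^-1
v_F = hbar * k_F / m_e                 # Fermi velocity, m/s

print(f"k_F = {k_F:.3e} m^-1, v_F = {v_F:.3e} m/s")  # v_F ~ 1.6e6 m/s
```

Even at room temperature, the quasiparticles near the Fermi surface move at a speed set by the Pauli principle and the electron density, on the order of a thousand kilometers per second.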

For electrons confined strictly in one dimension, though, the situation is different, and the interactions have a big effect on what takes place.  Tomonaga (who shared the Nobel Prize with Feynman and Schwinger for quantum electrodynamics, the quantum theory of how charges interact with the electromagnetic field) and later Luttinger worked out this case, now called a Tomonaga-Luttinger Liquid (TLL).  In one dimension, the electrons literally cannot get out of each other's way - the only kind of excitation you can have is analogous to a (longitudinal) sound wave, where there are regions of enhanced or decreased density of the electrons.  One surprising result from this is that charge in 1d propagates at one speed, tuned by the electron-electron interactions, while spin propagates at a different speed (close to the Fermi velocity).  This shows how interactions and restricted dimensionality can give collective properties that are surprising, seemingly separating the motion of spin and charge when the two are tied together for free electrons.

These unusual TLL properties show up when you have electrons confined to truly one dimension, as in some semiconductor nanowires and in single-walled carbon nanotubes.  Directly probing this physics is actually quite challenging.  It's tricky to look at charge and spin responses separately (though some experiments can do that, as here and here) and some signatures of TLL response can be subtle (e.g., power law responses in tunneling with voltage and temperature where the accessible experimentally reasonable ranges can be limited).   

The cold atom community can create cold atomic Fermi gases confined to one-dimensional potential channels.  In those systems the density of atoms plays the role of charge, some internal (hyperfine) state of the atoms plays the role of spin, and the experimentalists can tune the effective interactions.  This tunability plus the ability to image the atoms can enable very clean tests of the TLL predictions that aren't readily done with electrons.

So why care about TLLs?  They are an example of non-Fermi liquids, and there are other important systems in which interactions seem to lead to surprising, important changes in properties.  In the copper oxide high temperature superconductors, for example, the "normal" state out of which superconductivity emerges often seems to be a "strange metal", in which the Fermi Liquid description breaks down.  Studying the TLL case can give insights into these other important, outstanding problems.

Saturday, August 18, 2018

Phonons and negative mass

There has been quite a bit of media attention given to this paper, which looks at whether sound waves involve the transport of mass (and therefore whether they should interact with gravitational fields and produce gravitational fields of their own). 

The authors conclude that, under certain circumstances, sound wavepackets (phonons, in the limit where we really think about quantized excitations) rise in a downward-directed gravitational field.  Considered as a distinct object, such a wavepacket carries some amount of "invariant mass" as it propagates along, and that amount turns out to be negative.

Now, most people familiar with the physics of conventional sound would say, hang on, how do sound waves in some medium transport any mass at all?  That is, we think of ordinary sound in a gas like air as pressure waves, with compressions and rarefactions, regions of alternating enhanced and decreased density (and pressure).  In the limit of small amplitudes (the "linear regime"), we can consider the density variations in the wave to be mathematically small, meaning that we can use the parameter \(\delta \rho/\rho_{0}\) as a small perturbation, where \(\rho_{0}\) is the average density and \(\delta \rho\) is the change.  Linear regime sound usually doesn't transport mass.  The same is true for sound in the linear regime in a conventional liquid or a solid.

In the paper, the authors do an analysis where they find that the mass transported by sound is proportional with a negative sign to \(dc_{\mathrm{s}}/dP\), how the speed of sound \(c_{\mathrm{s}}\) changes with pressure for that medium.  (Note that for an ideal gas, \(c_{\mathrm{s}} = \sqrt{\gamma k_{\mathrm{B}}T/m}\), where \(\gamma\) is the ratio of heat capacities at constant pressure and volume, \(m\) is the mass of a gas molecule, and \(T\) is the temperature.  There is no explicit pressure dependence, and sound is "massless" in that case.)
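For concreteness, the ideal-gas expression above is easy to evaluate. A quick sketch, plugging in air-like numbers (the heat capacity ratio, mean molecular mass, and temperature below are assumed illustrative values):

```python
import math

# Sound speed in an ideal gas: c_s = sqrt(gamma * k_B * T / m).
# Assumed inputs: air-like values (gamma = 1.4, mean molecular mass 28.97 u, T = 293 K).
k_B = 1.380649e-23    # Boltzmann constant, J/K
amu = 1.66053907e-27  # atomic mass unit, kg

gamma = 1.4           # ratio of heat capacities for a diatomic gas
m = 28.97 * amu       # mean molecular mass of air, kg
T = 293.0             # temperature, K

c_s = math.sqrt(gamma * k_B * T / m)
print(f"c_s = {c_s:.1f} m/s")  # ~343 m/s
```

Note that the pressure \(P\) never appears in the expression, which is the point made above: at fixed temperature, \(dc_{\mathrm{s}}/dP = 0\) for an ideal gas, so its sound is "massless" in the paper's sense.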

I admit that I don't follow all the details, but it seems to me that the authors have found that for a nonlinear medium such that \(dc_{\mathrm{s}}/dP > 0\), sound wavepackets effectively carry a bit less mass than an equivalent volume of the surrounding medium.  That means that they experience buoyancy (they "fall up" in a downward-directed gravitational field) and produce an effectively negative gravitational potential compared to the background medium.  It's a neat result, and I can see where there could be circumstances where it might be important (e.g. sound waves in neutron stars, where the density is very high and you could imagine astrophysical consequences).  That being said, perhaps someone in the comments can explain why this is being portrayed as so surprising - I may be missing something.

Tuesday, August 14, 2018

APS March Meeting 2019 - DCMP invited symposia, DMP focused topics

A reminder to my condensed matter colleagues who go to the APS March Meeting:  We know the quality of the meeting depends strongly on getting good invited talks, the 30+6 minute talks that either come grouped together (an "invited session" or "invited symposium") or are sprinkled individually among the contributed sessions.

Now is the time to put together nominations for these.  The more high-quality nominations there are, the better the content of the meeting.

The APS Division of Condensed Matter Physics is seeking nominations for invited symposia.  See here for the details.  The online submission deadline is August 24th!

Similarly, the APS Division of Materials Physics is seeking nominations for invited talks as part of their Focus Topic sessions.  The list of Focus Topics is here.  The online submission deadline for these is August 29th. 


Sunday, August 12, 2018

What is (dielectric) polarization?

This post is an indirect follow-on from here, and was spawned by a request that I discuss the "modern theory of polarization".  I have to say, this has been very educational for me.   Before I try to give a very simple explanation of the issues, those interested in some more technical meat should look here, or here, or here, or at this nice blog post.  

Colloquially, an electric dipole is an overall neutral object with some separation between its positive and negative charge.  A great example is a water molecule, which has a little bit of excess negative charge on the oxygen atom, and a little deficit of electrons on the hydrogen atoms.  

Once we pick an origin for our coordinate system, we can define the electric dipole moment of some charge distribution as \(\mathbf{p} \equiv \int \mathbf{r}\rho(\mathbf{r}) d^{3}\mathbf{r}\), where \(\rho\) is the local charge density.  Often we care about the induced dipole, the dipole moment that is produced when some object like a molecule has its charges rearrange due to an applied electric field.  In that case, \(\mathbf{p}_{\mathrm{ind}} = \alpha \cdot \mathbf{E}\), where \(\alpha\) is the polarizability.  (In general \(\alpha\) is a tensor, because \(\mathbf{p}\) and \(\mathbf{E}\) don't have to point in the same direction.)
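For a set of point charges, the integral above reduces to a sum, \(\mathbf{p} = \sum_{i} q_{i}\mathbf{r}_{i}\). As a sketch, here is that sum for a TIP3P-style point-charge model of water; the geometry and partial charges are assumed model parameters for illustration, not values from this post:

```python
import math

# Electric dipole moment of a point-charge distribution: p = sum_i q_i * r_i.
# Assumed: TIP3P-like water geometry (O-H = 0.9572 angstrom, H-O-H = 104.52 deg)
# with partial charges q_O = -0.834 e and q_H = +0.417 e.
e_A_to_debye = 4.80320  # 1 e*angstrom expressed in debye

d_OH = 0.9572                    # O-H bond length, angstrom
half = math.radians(104.52 / 2)  # half the H-O-H bond angle

# Oxygen at the origin; hydrogens placed symmetrically about the x axis.
charges = [
    (-0.834, (0.0, 0.0)),                                       # oxygen
    (+0.417, (d_OH * math.cos(half),  d_OH * math.sin(half))),  # hydrogen 1
    (+0.417, (d_OH * math.cos(half), -d_OH * math.sin(half))),  # hydrogen 2
]

px = sum(q * x for q, (x, y) in charges)
py = sum(q * y for q, (x, y) in charges)
p_debye = math.hypot(px, py) * e_A_to_debye
print(f"|p| = {p_debye:.2f} D")  # ~2.35 D for this charge model
```

The total charge sums to zero, so the result is independent of the choice of origin, as it must be for a neutral object.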

If we stick a slab of some insulator between metal plates and apply a voltage across the plates to generate an electric field, we learn in first-year undergrad physics that the charges inside the insulator slightly redistribute themselves - the material polarizes.  If we imagine dividing the material into little chunks, we can define the polarization \(\mathbf{P}\) as the electric dipole moment per unit volume.  For a solid, we can pick some volume and define \(\mathbf{P} = \mathbf{p}/V\), where \(V\) is the volume over which the integral is done for calculating \(\mathbf{p}\).

We can go farther than that.  If we say that the insulator is built up out of a bunch of little polarizable objects each with polarizability \(\alpha\), then we can do a self-consistent calculation, where we let each polarizable object see both the externally applied electric field and the electric field from its neighboring dipoles.  Then we can solve for \(\mathbf{P}\) and therefore the relative dielectric constant in terms of \(\alpha\).  The result is called the Clausius-Mossotti relation.
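A numerical sketch of the Clausius-Mossotti relation, solved for the relative dielectric constant; the polarizability volume and number density below are rough literature-style values for argon gas at STP, assumed here purely for illustration:

```python
import math

# Clausius-Mossotti: (eps_r - 1)/(eps_r + 2) = (4*pi/3) * n * alpha_vol,
# where alpha_vol = alpha / (4*pi*eps_0) is the polarizability volume.
# Solving for eps_r gives eps_r = (1 + 2x)/(1 - x) with x the right-hand side.

def clausius_mossotti(n, alpha_vol):
    """Relative dielectric constant from number density n (m^-3)
    and polarizability volume alpha_vol (m^3)."""
    x = (4 * math.pi / 3) * n * alpha_vol
    return (1 + 2 * x) / (1 - x)

# Assumed: argon gas at STP, alpha_vol ~ 1.64e-30 m^3, n ~ 2.69e25 m^-3.
eps_r = clausius_mossotti(n=2.69e25, alpha_vol=1.64e-30)
print(f"eps_r = {eps_r:.6f}")  # just above 1, as expected for a dilute gas
```

For a dilute gas \(x \ll 1\) and \(\epsilon_{r} \approx 1 + 3x\); the self-consistent local-field correction only matters when the medium is dense enough that neighboring dipoles contribute significantly.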

In crystalline solids, however, it turns out that there is a serious problem!  As explained clearly here, because the charge in a crystal is distributed periodically in space, the definition of \(\mathbf{P}\) given above is ambiguous because there are many ways to define the "unit cell" over which the integral is performed.  This is a big deal.  

The "modern theory of polarization" resolves this problem, and actually involves the electronic Berry Phase.  First, it's important to remember that polarization is really defined experimentally by how much charge flows when that capacitor described above has the voltage applied across it.  So, the problem we're really trying to solve is, find the integrated current that flows when an electric field is ramped up to some value across a periodic solid.  We can find that by adding up all the contributions of the different electronic states that are labeled by wavevectors \(\mathbf{k}\).  For each \(\mathbf{k}\) in a given band, there is a contribution that has to do with how the energy varies with \(\mathbf{k}\) (that's the part that looks roughly like a classical velocity), and there's a second piece that has to do with how the actual electronic wavefunctions vary with \(\mathbf{k}\), which is proportional to the Berry curvature.   If you add up all the \(\mathbf{k}\) contributions over the filled electronic states in the insulator, the first terms all cancel out, but the second terms don't, and actually give you a well-defined amount of charge.   

Bottom line:  In an insulating crystal, the actual polarization that shows up in an applied electric field comes from how the electronic states vary with \(\mathbf{k}\) within the filled bands.  This is a really surprising and deep result, and it was only realized in the 1990s.  It's pretty neat that even "simple" things like crystalline insulators can still contain surprises (in this case, one that foreshadowed the whole topological insulator boom). 

Thursday, August 09, 2018

Hydraulic jump: New insights into a very old phenomenon

Ever since I learned about them, I thought that hydraulic jumps were cool.  As I wrote here, a hydraulic jump is an analog of a standing shockwave.  The key dimensionless parameter in a shockwave in a gas is the Mach number, the ratio between the fluid speed \(v\) and the local speed of sound, \(c_{\mathrm{s}}\).   The gas goes from supersonic (\(\mathrm{Ma} > 1\)) on one side of the shock to subsonic (\(\mathrm{Ma} < 1\)) on the other side.

For a looong time, the standard analysis of hydraulic jumps assumed that the relevant dimensionless number here was the Froude number, the ratio of fluid speed to the speed of (gravitationally driven) shallow water waves, \(\sqrt{g h}\), where \(g\) is the gravitational acceleration and \(h\) is the thickness of the liquid (say on the thin side of the jump).  That's basically correct for macroscopic jumps that you might see in a canal or in my previous example.

However, a group from Cambridge University has shown that this is not the right way to think about the kind of hydraulic jump you see in your sink when the stream of water from the faucet hits the basin.  (Sorry that I can't find a non-pay link to the paper.)  They show this conclusively by the very simple, direct method of shooting water streams horizontally onto a wall, and vertically onto a "ceiling".  The fact that hydraulic jumps look the same in all these cases clearly shows that gravity can't be playing the dominant role here.  Instead, the correct analysis is to worry about not just gravity but also surface tension.  They do a general treatment (which is quite elegant and understandable to fluid mechanics-literate undergrads) and find that the condition for a hydraulic jump to form is now \(\mathrm{We}^{-1} + \mathrm{Fr}^{-2} = 1\), where \(\mathrm{Fr} \sim v/\sqrt{g h}\) as usual, and the Weber number \(\mathrm{We} \sim \rho v^{2} h/\gamma\), where \(\rho\) is the fluid density and \(\gamma\) is the surface tension.   The authors do a convincing analysis of experimental data with this model, and it works well.  I think it's very cool that we can still get new insights into phenomena, and this is an example understandable at the undergrad level where some textbook treatments will literally have to be rewritten.
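The jump criterion is easy to evaluate for plausible numbers. A sketch, with kitchen-sink-scale parameters assumed for illustration (the specific speeds and thicknesses are my guesses, not values from the paper):

```python
import math

def jump_parameter(v, h, rho=1000.0, gamma=0.072, g=9.81):
    """Evaluate We^-1 + Fr^-2 for a thin liquid film.
    The criterion discussed above says a hydraulic jump forms where this reaches 1.
    v: film speed (m/s), h: film thickness (m),
    rho: density (kg/m^3), gamma: surface tension (N/m); defaults are water."""
    Fr = v / math.sqrt(g * h)    # Froude number
    We = rho * v**2 * h / gamma  # Weber number
    return 1 / We + 1 / Fr**2

# Fast, thin film just downstream of where the faucet stream hits the basin:
print(jump_parameter(v=0.5, h=0.5e-3))  # < 1: upstream of the jump
# Slower, thicker film farther out:
print(jump_parameter(v=0.05, h=5e-3))   # > 1: past the jump condition
```

For these kitchen-scale numbers the \(\mathrm{We}^{-1}\) (surface tension) term is comparable to or larger than the \(\mathrm{Fr}^{-2}\) (gravity) term, which is consistent with the claim that gravity isn't the dominant player in a sink.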

Tuesday, August 07, 2018

Faculty position at Rice - experimental atomic/molecular/optical

Faculty Position in Experimental Atomic/Molecular/Optical Physics at Rice University

The Department of Physics and Astronomy at Rice University in Houston, TX (http://physics.rice.edu/) invites applications for a tenure-track faculty position in experimental atomic, molecular, and optical physics.  The Department expects to make an appointment at the assistant professor level. Applicants should have an outstanding research record and recognizable potential for excellence in teaching and mentoring at the undergraduate and graduate levels. The successful candidate is expected to establish a distinguished, externally funded research program and support the educational and service missions of the Department and University.

Applicants must have a PhD in physics or related field, and they should submit the following: (1) cover letter; (2) curriculum vitae; (3) research statement; (4) three publications; (5) teaching statement; and (6) the names, professional affiliations, and email addresses of three references. For full details and to apply, please visit: http://jobs.rice.edu/postings/16140. The review of applications will begin November 1, 2018, but all those received by December 1, 2018 will be assured full consideration. The appointment is expected to start in July 2019.  Further inquiries should be directed to the chair of the search committee, Prof. Thomas C. Killian (killian@rice.edu).

Rice University is an Equal Opportunity Employer with commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability or protected veteran status.