Sunday, October 14, 2018

Faculty position at Rice - theoretical biological physics

Faculty position in Theoretical Biological Physics at Rice University

As part of the Vision for the Second Century (V2C2), which is focused on investments in research excellence, Rice University seeks faculty members, preferably at the assistant professor level, starting as early as July 1, 2019, in all areas of Theoretical Biological Physics. Successful candidates will lead dynamic, innovative, and independent research programs supported by external funding, and will excel in teaching at the graduate and undergraduate levels, while embracing Rice’s culture of excellence and diversity.  This search will consider applicants from all science and engineering disciplines. Ideal candidates will pursue research with strong intellectual overlap with physics, chemistry, biosciences, bioengineering, chemical and biomolecular engineering, or other related disciplines. Applicants pursuing all styles of theory and computation integrating the physical and life sciences are encouraged to apply.

For full details and to apply, please visit https://jobs.rice.edu/postings/17099.  Applicants should submit the following materials: (1) cover letter, (2) curriculum vitae, (3) research statement, (4) statement of teaching philosophy, and (5) the names and contact information for three references. Application review will commence no later than November 30, 2018 and continue until the positions are filled. Candidates must have a PhD or equivalent degree and outstanding potential in research and teaching. We particularly encourage applications from women and members of historically underrepresented groups who bring diverse cultural experiences and who are especially qualified to mentor and advise members of our diverse student population.

Rice University, located in Houston, Texas, is an Equal Opportunity Employer with commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability, or protected veteran status.

Friday, October 12, 2018

Short items

A few interesting things I've found this past week:

  • The connection between particle spin and quantum statistics (fermions = half-integer spin, bosons = integer spin) is subtle, as I've mentioned before.  This week I happened upon a neat set of slides (pdf) by Jonathan Bain on this topic.  He looks at how we should think about why a pretty restrictive result from non-interacting relativistic quantum field theories has such profound, general implications.  He has a book on this, too.  
  • There is a new book about the impact of condensed matter physics on the world and why it's the comparatively unsung branch of the discipline.   I have a copy on the way; once I read it I'll post a review.
  • It's also worth reading about why mathematics as a discipline is viewed the way it is culturally.
  • This is a really well-written article about turbulence, and why it's hard even though it's "just \(\mathbf{F} = m\mathbf{a}\)" for little blobs of fluid.
  • Humanoid robots are getting more and more impressive.  I would really like to know the power consumption of one of those, though, given that the old ones used to have either big external power cables or on-board diesel engines.  The robot apocalypse is less scary if they have to recharge every ten minutes of operating time.
  • I always wondered if fidget spinners were good for something.

Sunday, October 07, 2018

A modest proposal: Congressional Science and Technology Office, or equivalent

I was in a meeting at the beginning of the week where the topic of science and technology in policy-making came up.  One person in the meeting made an off-hand comment that one role for university practitioners could be to "educate policy-makers".  Another person in the meeting, with a lot of experience in public policy, pointed out that from the perspective of policy-makers, the previous statement often comes across as condescending and an immediate turn-off (regardless of whether policy-makers actually have expert knowledge relevant to their decisions).  

At the same time, with the seemingly ever-quickening pace of technological change, it sure seems like Congress lacks sources of information and resources for getting legislators (and perhaps more importantly their staffs) up to speed on scientific and technological issues.  These include issues of climate, election security, artificial intelligence, robots coming to take our jobs, etc.  The same could be said for the Judiciary, from the federal district level all the way up to the Supreme Court.   Wouldn't it be a good idea for at least the staffs of the federal judges to have some non-partisan way to get needed help in understanding, e.g., encryption?   The National Academies do outstanding work in their studies and reports, but I'm thinking of a non-partisan information-gathering and coaching office specifically to support Congress and perhaps the Judiciary.  The Congressional Budget Office serves a somewhat similar role in terms of supporting budgeting and appropriations.  The executive branch (nominally) has the Office of Science and Technology Policy.   I could be convinced that the Academies could launch something analogous, but it's not clear that this is a reasonable expectation.

Realistically, now is not the best time to bring this up in the US, given the level of political dysfunction and the looming financial challenges facing the government.   There used to be a congressional Office of Technology Assessment, but that was shut down ostensibly to save money in 1995.  Attempts to restart it, such as Bill Foster's this past spring, have failed.  Still, better to keep pushing for something to play this role, rather than simply being content with the status quo level of technical knowledge of Congress (and federal judges).  Complex scientific and technological issues are shaping the world around us, and I have to hope that decision-makers want to know more about these topics.

Sunday, September 30, 2018

Can you heat up your coffee by stirring?

A fun question asked by a student in my class:  To what extent do you heat up your coffee by stirring it? 

It was a huge conceptual advance when James Prescott Joule demonstrated that "heat", as inferred by the increase in the temperature of some system, is a form of energy.  In the 1840s, Joule set up an experiment described here, where a known mass falling a known distance turns a paddle-wheel within a volume of liquid in an insulated container.  The paddle-wheel stirs the liquid, and eventually the liquid's viscosity, the frictional transfer of momentum between adjacent layers of fluid moving at slightly different velocities, damps out the paddle-wheel's rotation and, if you wait long enough, the fluid's motion.  Joule found that this was accompanied by an increase in the fluid's temperature, an increase directly proportional to the distance fallen by the mass.  The viscosity is the means by which the energy of the organized motion of the swirling fluid is transferred to the kinetic energy of the disorganized motion of individual fluid molecules.
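
To get a feel for the scale of the effect, here's a minimal sketch of the energy bookkeeping in such an experiment (the mass, drop height, and amount of water are made-up but plausible values, not Joule's actual apparatus):

```python
# Back-of-the-envelope version of the experiment: the mechanical energy of
# a falling mass ends up as a temperature rise in the stirred water.
# All input numbers are illustrative assumptions.

g = 9.81           # gravitational acceleration, m/s^2
m_weight = 10.0    # falling mass, kg
drop = 2.0         # distance fallen, m
m_water = 1.0      # mass of stirred water, kg
c_water = 4184.0   # specific heat of water, J/(kg K)

work = m_weight * g * drop             # mechanical energy in, J
delta_T = work / (m_water * c_water)   # temperature rise, K
print(f"work in: {work:.0f} J -> temperature rise: {delta_T * 1e3:.0f} mK")
```

The answer comes out to a few hundredths of a degree, which gives you a sense of just how careful Joule's thermometry had to be.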

Suppose you stir your coffee at a roughly constant stirring speed.  This is adding at a steady rate to the (disorganized) energy content of the coffee.  If we are content with rough estimates, we can get a sense of the power you are dumping into the coffee by an approach close to dimensional analysis.

The way viscosity \(\mu\) is defined, the frictional shear force per unit area is given by the viscosity times the velocity gradient - that is, the frictional force per area in the \(x\)-direction at some piece of the \(x\)-\(y\) plane for fluid flowing in the \(x\)-direction is going to be given by \(\mu (\partial u/\partial z) \), where \(z\) is the normal direction and \(u\) is the \(x\)-component of the fluid velocity.

Very very roughly (because the actual fluid flow geometry and velocity field are messy and complicated), the power dumped in by stirring is going to be something like (volume of cup)*(viscosity)*(typical velocity gradient)^2.  A mug holds about 0.35L = 3.5e-4 m^3 of coffee.  The viscosity of coffee is going to be something like that of warm water.  Looking that up here, the viscosity is going to be something like 3.54e-4 Pa-s.  A really rough velocity gradient is something like the steady maximum stirring velocity (say 20 cm/s) divided by the radius of the mug (say 3 cm).  If you put all that together, you get that the effective input power to your coffee from stirring is at the level of a few microwatts.  Pretty meager, and unlikely to balance the rate at which energy leaves by thermal conduction through the mug walls and evaporation of the hottest water molecules.
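
Here's the same estimate as a few lines of Python, using the rough values quoted above, in case you want to play with the numbers:

```python
# Rough stirring power: (volume) * (viscosity) * (velocity gradient)^2,
# with the order-of-magnitude inputs from the text.

volume = 3.5e-4   # mug volume, m^3 (about 0.35 L)
mu = 3.54e-4      # viscosity of hot water, Pa s
v = 0.20          # typical stirring speed, m/s
r = 0.03          # mug radius, m

power = volume * mu * (v / r) ** 2
print(f"stirring power ~ {power * 1e6:.1f} microwatts")
```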

Still, when you stir your coffee, you are veeeerrry slightly heating it!  update:  As the comments point out, and as I tried to imply above, you are unlikely to produce a net increase in temperature through stirring.  When you stir you improve the heat transfer between the coffee and the mug walls (basically short-circuiting the convective processes that would tend to circulate the coffee around if you left the coffee alone). 

Friday, September 28, 2018

Annual Nobel speculation thread

As my friend DanM pointed out in the comments of a previous post, it's Nobel season again, next Tuesday for physics.  Dan puts forward his prediction of Pendry and Smith for metamaterials/negative index of refraction.  (You could throw in Yablonovitch for metamaterials.)  I will, once again, make my annual (almost certainly wrong) prediction of Aharonov and Berry for geometric phases.   Another possibility in this dawning age of quantum information is Aspect, Zeilinger, and Clauser for Bell's inequality tests.   Probably not an astrophysics one, since gravitational radiation was the winner last year.

Thursday, September 20, 2018

What’s in a name? CMP

At a recent DCMP meeting, my colleague Erica Carlson raised an important point:  Condensed matter physics as a discipline is almost certainly hurt, relative to other areas and in the eyes of the public, by having the least interesting, most obscure descriptive name.  Seemingly every other branch of physics has a name that either sounds cool, describes the discipline at a level immediately appreciated by the general public, or both.  Astrophysics is astro-physics, and just sounds badass.  Plasma physics is exciting because, come on, plasma.  Biophysics is clearly the physics relevant to biology.  High energy or particle physics are descriptive and have no shortage of public promotion.  Atomic physics has a certain retro-future vibe.

In contrast, condensed matter, while accurate, really does not conjure any imagery at all for the general public, or sound very interesting.  If the first thing you have to do after saying “condensed matter” is use two or three sentences to explain what that means, then the name has failed in one of its essential missions.

So, what would be better alternatives?  “Quantum matter” sounds cool, but doesn’t really explain much, and leaves out soft CM.  The physics of everything you can touch is interesting, but prosaic.  Suggestions in the comments, please!

Friday, September 14, 2018

Recently on the arxiv

While it's been a busy time, a couple of interesting papers caught my eye:

arXiv:1808.07865 - Yankowitz et al., Tuning superconductivity in twisted bilayer graphene
This lengthy paper, a collaboration between the groups of Andrea Young at UCSB and Cory Dean at Columbia, is (as far as I know) the first independent confirmation of the result from Pablo Jarillo-Herrero's group at MIT about superconductivity in twisted bilayer graphene.  The new paper also shows how tuning the interlayer coupling via in situ pressure (a capability of the Dean lab) affects the phase diagram.  Cool stuff.

arXiv:1809.04637 - Fatemi et al., Electrically Tunable Low Density Superconductivity in a Monolayer Topological Insulator
arXiv:1809.04691 - Sajadi et al., Gate-induced superconductivity in a monolayer topological insulator
While I haven't had a chance to read them in any depth, these two papers report superconductivity in gated monolayer WTe2, a remarkable material already shown to act as a 2D topological insulator (quantum spin Hall insulator). 

Seems like there is plenty of interesting physics that is going to keep turning up in these layered systems as material quality and device fabrication processes continue to improve.

Tuesday, September 04, 2018

Looking back at the Schön scandal

As I mentioned previously, I've realized in recent weeks that many current students out there may never have heard of Jan Hendrik Schön, and that seems wrong, a missed opportunity for a cautionary tale about responsible conduct of research.  It's also a story that gives a flavor of the time and touches on other issues still current today - faddishness and competitiveness in top-level science, the allure of glossy publications, etc.  It ended up being too long for a blog post, and it seemed inappropriate to drag out over many posts, so here is a link to a pdf.  Any errors are mine and are probably the result of middle-aged memory.  After all, this story did start twenty years ago.  I'm happy to make corrections if appropriate.  update 9/9/18 - corrected typos and added a couple of sentences to clarify things.

Wednesday, August 29, 2018

Unidentified superconducting objects, again.

I've had a number of people ask me why I haven't written anything about the recent news and resulting kerfuffle (here, here, and here for example) in the media regarding possible high temperature superconductivity in Au/Ag nanoparticles.   The fact is, I've written before about unidentified superconducting objects (also see here), and so I didn't have much to say.  I exchanged some email with the IISc PI back in late July with some questions, and his responses are in line with what others have said.   Extraordinary claims require extraordinary evidence.  The longer this goes on without independent confirmation, the more likely it is that this will fade away.

Various discussions I've had about this have, however, spurred me to try writing down my memories and lessons learned from the Schön scandal, before the inevitable passage of time wipes more of the details from my brain.  I'm a bit conflicted about this - it was 18 years ago, there's not much point in rehashing the past, and Eugenie Reich's book covered this very well.  At the same time, it's clear that many students today have never even heard of Schön, and I feel like I learned some valuable lessons from the whole situation.  It'll take some time to see if I am happy with how this turns out before I post some or all of it.  Update:  I've got a draft done, and it's too long for a blog post - around 9000 words.  I'll probably convert it to pdf when I'm happy with it and link to it somehow.

Friday, August 24, 2018

What is a Tomonaga-Luttinger Liquid?

I've written in the past (say here and here) about how we think about the electrons in a conventional metal as forming a Fermi Liquid.    (If the electrons didn't interact at all, then colloquially we call the system a Fermi gas.  The word "liquid" is shorthand for saying that the interactions between the particles that make up the liquid are important.  You can picture a classical liquid as a bunch of molecules bopping around, experiencing some kind of short-ranged repulsion so that they can't overlap, but with some attraction that favors the molecules bumping up against each other - the typical interparticle separation is comparable to the particle size in that classical case.)  People like Lev Landau and others had the insight that essential features of the Fermi gas (the Pauli principle being hugely important, for example) tend to remain robust even if one thinks about "dialing up" interactions between the electrons.  

A consequence of this is that in a typical metal, while the details may change, the lowest energy excitations of the Fermi liquid (the electronic quasiparticles) should be very much like the excitations of the Fermi gas - free electrons.  Fermi liquid quasiparticles each carry the electronic amount of charge, and they each carry "spin", angular momentum that, together with their charge, makes them act like tiny little magnets.  These quasiparticles move at a typical speed called the Fermi velocity.  This all works even though the like-charge electrons repel each other.

For electrons confined strictly in one dimension, though, the situation is different, and the interactions have a big effect on what takes place.  Tomonaga (shared the Nobel prize with Feynman and Schwinger for quantum electrodynamics, the quantum theory of how charges interact with the electromagnetic field) and later Luttinger worked out this case, now called a Tomonaga-Luttinger Liquid (TLL).  In one dimension, the electrons literally cannot get out of each other's way - the only kind of excitation you can have is analogous to a (longitudinal) sound wave, where there are regions of enhanced or decreased density of the electrons.  One surprising result from this is that charge in 1d propagates at one speed, tuned by the electron-electron interactions, while spin propagates at a different speed (close to the Fermi velocity).  This shows how interactions and restricted dimensionality can give collective properties that are surprising, seemingly separating the motion of spin and charge when the two are tied together for free electrons.

These unusual TLL properties show up when you have electrons confined to truly one dimension, as in some semiconductor nanowires and in single-walled carbon nanotubes.  Directly probing this physics is actually quite challenging.  It's tricky to look at charge and spin responses separately (though some experiments can do that, as here and here), and some signatures of TLL response can be subtle (e.g., power-law dependences of tunneling on voltage and temperature, where the experimentally accessible ranges can be limited).   

The cold atom community can create cold atomic Fermi gases confined to one-dimensional potential channels.  In those systems the density of atoms plays the role of charge, some internal (hyperfine) state of the atoms plays the role of spin, and the experimentalists can tune the effective interactions.  This tunability plus the ability to image the atoms can enable very clean tests of the TLL predictions that aren't readily done with electrons.

So why care about TLLs?  They are an example of non-Fermi liquids, and there are other important systems in which interactions seem to lead to surprising, important changes in properties.  In the copper oxide high temperature superconductors, for example, the "normal" state out of which superconductivity emerges often seems to be a "strange metal", in which the Fermi Liquid description breaks down.  Studying the TLL case can give insights into these other important, outstanding problems.

Saturday, August 18, 2018

Phonons and negative mass

There has been quite a bit of media attention given to this paper, which looks at whether sound waves involve the transport of mass (and therefore whether they should interact with gravitational fields and produce gravitational fields of their own). 

The authors conclude that, under certain circumstances, sound wavepackets (phonons, in the limit where we really think about quantized excitations) rise in a downward-directed gravitational field.  Considered as a distinct object, such a wavepacket has some property, the amount of "invariant mass" that it transports as it propagates along, that turns out to be negative.

Now, most people familiar with the physics of conventional sound would say, hang on, how do sound waves in some medium transport any mass at all?  That is, we think of ordinary sound in a gas like air as pressure waves, with compressions and rarefactions, regions of alternating enhanced and decreased density (and pressure).  In the limit of small amplitudes (the "linear regime"), we can consider the density variations in the wave to be mathematically small, meaning that we can use the parameter \(\delta \rho/\rho_{0}\) as a small perturbation, where \(\rho_{0}\) is the average density and \(\delta \rho\) is the change.  Linear regime sound usually doesn't transport mass.  The same is true for sound in the linear regime in a conventional liquid or a solid. 

In the paper, the authors do an analysis where they find that the mass transported by sound is proportional with a negative sign to \(dc_{\mathrm{s}}/dP\), how the speed of sound \(c_{\mathrm{s}}\) changes with pressure for that medium.  (Note that for an ideal gas, \(c_{\mathrm{s}} = \sqrt{\gamma k_{\mathrm{B}}T/m}\), where \(\gamma\) is the ratio of heat capacities at constant pressure and volume, \(m\) is the mass of a gas molecule, and \(T\) is the temperature.  There is no explicit pressure dependence, and sound is "massless" in that case.)
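
As a quick sanity check of that formula, here is the ideal-gas sound speed evaluated for air at room temperature (\(\gamma\) and the mean molecular mass are standard textbook values):

```python
import math

# Sound speed in an ideal gas: c_s = sqrt(gamma * k_B * T / m).
# Note there's no pressure dependence at fixed temperature.

k_B = 1.380649e-23   # Boltzmann constant, J/K
gamma = 1.4          # heat capacity ratio for a diatomic gas
m = 4.8e-26          # mean mass of an "air" molecule (~29 amu), kg
T = 293.0            # room temperature, K

c_s = math.sqrt(gamma * k_B * T / m)
print(f"c_s = {c_s:.0f} m/s")   # ~340 m/s, as expected
```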

I admit that I don't follow all the details, but it seems to me that the authors have found that for a nonlinear medium such that \(dc_{\mathrm{s}}/dP > 0\), sound wavepackets have a bit less mass than the average density of the surrounding medium.  That means that they experience buoyancy (they "fall up" in a downward-directed gravitational field), and exert an effectively negative gravitational potential compared to their background medium.  It's a neat result, and I can see where there could be circumstances where it might be important (e.g. sound waves in neutron stars, where the density is very high and you could imagine astrophysical consequences).  That being said, perhaps someone in the comments can explain why this is being portrayed as so surprising - I may be missing something.

Tuesday, August 14, 2018

APS March Meeting 2019 - DCMP invited symposia, DMP focused topics

A reminder to my condensed matter colleagues who go to the APS March Meeting:  We know the quality of the meeting depends strongly on getting good invited talks, the 30+6 minute talks that either come all in a group (an "invited session" or "invited symposium") or sprinkled down individually in the contributed sessions.

Now is the time to put together nominations for these things.  The more high quality nominations, the better the content of the meeting.

The APS Division of Condensed Matter Physics is seeking nominations for invited symposia.  See here for the details.  The online submission deadline is August 24th!

Similarly, the APS Division of Materials Physics is seeking nominations for invited talks as part of their Focus Topic sessions.  The list of Focus Topics is here.  The online submission deadline for these is August 29th. 


Sunday, August 12, 2018

What is (dielectric) polarization?

This post is an indirect follow-on from here, and was spawned by a request that I discuss the "modern theory of polarization".  I have to say, this has been very educational for me.   Before I try to give a very simple explanation of the issues, those interested in some more technical meat should look here, or here, or here, or at this nice blog post.  

Colloquially, an electric dipole is an overall neutral object with some separation between its positive and negative charge.  A great example is a water molecule, which has a little bit of excess negative charge on the oxygen atom, and a little deficit of electrons on the hydrogen atoms.  

Once we pick an origin for our coordinate system, we can define the electric dipole moment of some charge distribution as \(\mathbf{p} \equiv \int \mathbf{r}\rho(\mathbf{r}) d^{3}\mathbf{r}\), where \(\rho\) is the local charge density.  Often we care about the induced dipole, the dipole moment that is produced when some object like a molecule has its charges rearrange due to an applied electric field.  In that case, \(\mathbf{p}_{\mathrm{ind}} = \alpha \cdot \mathbf{E}\), where \(\alpha\) is the polarizability.  (In general \(\alpha\) is a tensor, because \(\mathbf{p}\) and \(\mathbf{E}\) don't have to point in the same direction.)
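
For a discrete set of point charges the integral becomes \(\mathbf{p} = \sum_{i} q_{i}\mathbf{r}_{i}\).  Here's a toy calculation for a water-like arrangement of three point charges (the partial charges are illustrative values picked to land near water's measured dipole moment of about 1.85 D, not a real force-field model):

```python
import numpy as np

# Toy version of p = sum_i q_i r_i: an "oxygen" with charge -2*delta at the
# origin and two "hydrogens" with charge +delta each.  Geometry is the
# real water bond length and angle; the partial charge is an assumption.

e = 1.602e-19                  # elementary charge, C
delta = 0.33 * e               # assumed partial charge on each hydrogen
d = 0.96e-10                   # O-H distance, m
half = np.deg2rad(104.5 / 2)   # half of the H-O-H angle

r_O = np.array([0.0, 0.0, 0.0])
r_H1 = d * np.array([np.sin(half), np.cos(half), 0.0])
r_H2 = d * np.array([-np.sin(half), np.cos(half), 0.0])

p = (-2 * delta) * r_O + delta * r_H1 + delta * r_H2
print(f"|p| = {np.linalg.norm(p) / 3.336e-30:.2f} D")   # close to 1.85 D
```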

If we stick a slab of some insulator between metal plates and apply a voltage across the plates to generate an electric field, we learn in first-year undergrad physics that the charges inside the insulator slightly redistribute themselves - the material polarizes.  If we imagine dividing the material into little chunks, we can define the polarization \(\mathbf{P}\) as the electric dipole moment per unit volume.  For a solid, we can pick some volume and define \(\mathbf{P} = \mathbf{p}/V\), where \(V\) is the volume over which the integral is done for calculating \(\mathbf{p}\).

We can go farther than that.  If we say that the insulator is built up out of a bunch of little polarizable objects each with polarizability \(\alpha\), then we can do a self-consistent calculation, where we let each polarizable object see both the externally applied electric field and the electric field from its neighboring dipoles.  Then we can solve for \(\mathbf{P}\) and therefore the relative dielectric constant in terms of \(\alpha\).  The result is called the Clausius-Mossotti relation.
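
In SI form the Clausius-Mossotti relation reads \((\epsilon_{r} - 1)/(\epsilon_{r} + 2) = n \alpha/(3 \epsilon_{0})\), where \(n\) is the number density of polarizable units, and it can be inverted for the dielectric constant.  A minimal sketch, with assumed order-of-magnitude numbers for a generic molecular solid:

```python
# Clausius-Mossotti: (eps_r - 1)/(eps_r + 2) = n * alpha / (3 * eps_0).
# The density and polarizability below are illustrative assumptions.

eps_0 = 8.854e-12   # vacuum permittivity, F/m
n = 2.7e28          # number density of polarizable units, m^-3
alpha = 3.0e-40     # assumed molecular polarizability, C m^2/V

x = n * alpha / (3 * eps_0)
eps_r = (1 + 2 * x) / (1 - x)   # invert the relation for eps_r
print(f"relative dielectric constant: {eps_r:.2f}")
```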

In crystalline solids, however, it turns out that there is a serious problem!  As explained clearly here, because the charge in a crystal is distributed periodically in space, the definition of \(\mathbf{P}\) given above is ambiguous because there are many ways to define the "unit cell" over which the integral is performed.  This is a big deal.  

The "modern theory of polarization" resolves this problem, and actually involves the electronic Berry Phase.  First, it's important to remember that polarization is really defined experimentally by how much charge flows when that capacitor described above has the voltage applied across it.  So, the problem we're really trying to solve is, find the integrated current that flows when an electric field is ramped up to some value across a periodic solid.  We can find that by adding up all the contributions of the different electronic states that are labeled by wavevectors \(\mathbf{k}\).  For each \(\mathbf{k}\) in a given band, there is a contribution that has to do with how the energy varies with \(\mathbf{k}\) (that's the part that looks roughly like a classical velocity), and there's a second piece that has to do with how the actual electronic wavefunctions vary with \(\mathbf{k}\), which is proportional to the Berry curvature.   If you add up all the \(\mathbf{k}\) contributions over the filled electronic states in the insulator, the first terms all cancel out, but the second terms don't, and actually give you a well-defined amount of charge.   

Bottom line:  In an insulating crystal, the actual polarization that shows up in an applied electric field comes from how the electronic states vary with \(\mathbf{k}\) within the filled bands.  This is a really surprising and deep result, and it was only realized in the 1990s.  It's pretty neat that even "simple" things like crystalline insulators can still contain surprises (in this case, one that foreshadowed the whole topological insulator boom). 
 




Thursday, August 09, 2018

Hydraulic jump: New insights into a very old phenomenon

Ever since I learned about them, I thought that hydraulic jumps were cool.  As I wrote here, a hydraulic jump is an analog of a standing shockwave.  The key dimensionless parameter in a shockwave in a gas is the Mach number, the ratio between the fluid speed \(v\) and the local speed of sound, \(c_{\mathrm{s}}\).   The gas goes from supersonic (\(\mathrm{Ma} > 1\)) on one side of the shock to subsonic (\(\mathrm{Ma} < 1\)) on the other side.

For a looong time, the standard analysis of hydraulic jumps assumed that the relevant dimensionless number here was the Froude number, the ratio of fluid speed to the speed of (gravitationally driven) shallow water waves, \(\sqrt{g h}\), where \(g\) is the gravitational acceleration and \(h\) is the thickness of the liquid (say on the thin side of the jump).  That's basically correct for macroscopic jumps that you might see in a canal or in my previous example.

However, a group from Cambridge University has shown that this is not the right way to think about the kind of hydraulic jump you see in your sink when the stream of water from the faucet hits the basin.  (Sorry that I can't find a non-pay link to the paper.)  They show this conclusively by the very simple, direct method of producing hydraulic jumps by shooting water streams horizontally onto a wall, and vertically onto a "ceiling".  The fact that hydraulic jumps look the same in all these cases clearly shows that gravity can't be playing the dominant role in this case.  Instead, the correct analysis is to worry about not just gravity but also surface tension.  They do a general treatment (which is quite elegant and understandable to fluid mechanics-literate undergrads) and find that the condition for a hydraulic jump to form is now \(\mathrm{We}^{-1} + \mathrm{Fr}^{-2} = 1\), where \(\mathrm{Fr} \sim v/\sqrt{g h}\) as usual, and the Weber number \(\mathrm{We} \sim \rho v^{2} h/\gamma\), where \(\rho\) is the fluid density and \(\gamma\) is the surface tension.   The authors do a convincing analysis of experimental data with this model, and it works well.  I think it's very cool that we can still get new insights into phenomena, and this is an example understandable at the undergrad level where some textbook treatments will literally have to be rewritten.
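
For the kitchen-sink case you can actually solve the jump condition by hand: substituting the definitions of \(\mathrm{We}\) and \(\mathrm{Fr}\) turns \(\mathrm{We}^{-1} + \mathrm{Fr}^{-2} = 1\) into a quadratic for the film thickness \(h\).  Here's a quick sketch with water parameters (the film speed is an assumed, kitchen-scale number):

```python
import math

# We^-1 + Fr^-2 = 1, with Fr = v/sqrt(g h) and We = rho v^2 h / gamma,
# becomes g h^2 - v^2 h + gamma/rho = 0, a quadratic in h.

g = 9.81        # m/s^2
rho = 1000.0    # density of water, kg/m^3
gamma = 0.072   # surface tension of water, N/m
v = 0.5         # assumed thin-film flow speed, m/s

a, b, c = g, -v**2, gamma / rho
h = (-b - math.sqrt(b**2 - 4 * a * c)) / (2 * a)   # smaller root: thin film
print(f"predicted jump thickness: {h * 1e3:.2f} mm")
```

The sub-millimeter answer is about right for the thin, fast film you see spreading out from where the faucet stream hits the basin.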

Tuesday, August 07, 2018

Faculty position at Rice - experimental atomic/molecular/optical

Faculty Position in Experimental Atomic/Molecular/Optical Physics at Rice University

The Department of Physics and Astronomy at Rice University in Houston, TX (http://physics.rice.edu/) invites applications for a tenure-track faculty position in experimental atomic, molecular, and optical physics.  The Department expects to make an appointment at the assistant professor level. Applicants should have an outstanding research record and recognizable potential for excellence in teaching and mentoring at the undergraduate and graduate levels. The successful candidate is expected to establish a distinguished, externally funded research program and support the educational and service missions of the Department and University.

Applicants must have a PhD in physics or related field, and they should submit the following: (1) cover letter; (2) curriculum vitae; (3) research statement; (4) three publications; (5) teaching statement; and (6) the names, professional affiliations, and email addresses of three references. For full details and to apply, please visit: http://jobs.rice.edu/postings/16140. The review of applications will begin November 1, 2018, but all those received by December 1, 2018 will be assured full consideration. The appointment is expected to start in July 2019.  Further inquiries should be directed to the chair of the search committee, Prof. Thomas C. Killian (killian@rice.edu).

Rice University is an Equal Opportunity Employer with commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability or protected veteran status.

Tuesday, July 31, 2018

What is Berry phase?

On the road to discussing the Modern Theory of Polarization (e.g., pdf), it's necessary to talk about Berry phase - here, unlike many uses of the word on this blog, "phase" actually refers to a phase angle, as in a complex number \(e^{i\phi}\).   The Berry phase, named for Michael Berry, is a so-called geometric phase, in that the value of the phase depends on the "space" itself and the trajectory the system takes.  (For reference, the original paper is here (pdf), a nice talk about this is here, and reviews on how this shows up in electronic properties are here and here.)

A similar-in-spirit angle shows up in the problem of "parallel transport" (jargony wiki) along curved surfaces.  Imagine taking a walk while holding an arrow, initially pointed east, say.  You walk, always keeping the arrow pointed in the local direction of east, in the closed path shown at right.  On a flat surface, when you get back to your starting point, the arrow is pointing in the same direction it did initially.   On a curved (say spherical) surface, though, something different has happened.  As shown, when you get back to your starting point, the arrow has rotated from its initial position, despite the fact that you always kept it pointed in the local east direction.  The angle of rotation is a geometric phase analogous to Berry phase.  The issue is that the local definition of "east" varies over the surface of the sphere.   In more mathematical language, the basis vectors (that point in the local cardinal directions) vary in space.  If you want to keep track of how the arrow vector changes along the path, you have to account for both the changing of the numerical components of the vector along each basis direction, and the change in the basis vectors themselves.  This kind of thing crops up in general relativity, where it is calculated using Christoffel symbols.

So what about the actual Berry phase?  To deal with this with a minimum of math, it's best to use some of the language that Feynman employed in his popular book QED.   The actual math is laid out here.  In Feynman's language, we can picture the quantum mechanical phase associated with some quantum state as the hand of a stopwatch, winding around.  For a state \(| \psi\rangle \) (an energy eigenstate, one of the "energy levels" of our system) with energy \(E\), we learn in quantum mechanics that the phase accumulates at a rate of \(E/\hbar\), so that the phase angle after some time \(t\) is given by \(\Delta \phi = Et/\hbar\).   Now suppose we were able to mess about with our system, so that energy levels varied as a function of some tuning parameter \(\lambda\).  For example, maybe we can dial around an externally applied electric field by applying a voltage to some capacitor plates.  If we do this slowly (adiabatically), then the system always stays in its instantaneous version of that state with instantaneous energy \(E(\lambda)\).  So, in the Feynman watch picture, sometimes the stopwatch is winding fast, sometimes it's winding slow, depending on the instantaneous value of \(E(\lambda)\).  You might think that the phase that would be racked up would just be found by adding up the little contributions, \(\Delta \phi = \int (E(\lambda(t))/\hbar) dt\).

However, this misses something!  In the parallel transport problem above, to get the right total answer about how the vector rotates globally we have to keep track of how the basis vectors vary along the path.  Here, it turns out that we have to keep track of how the state itself, \(| \psi \rangle\), varies locally with \(\lambda\).  To stretch the stopwatch analogy, imagine that the hand of the stopwatch can also gain or lose time along the way because the positioning of the numbers on the watch face (determined by \(| \psi \rangle \) ) is actually also varying along the path.

[Mathematically, that second contribution to the phase adds up to be \( i \int \langle \psi(\lambda)| \partial_{\lambda}| \psi(\lambda) \rangle d \lambda\).  Generally \(\lambda\) could be a vectorial thing with multiple components, so that \(\partial_{\lambda}\) would be a gradient operator with respect to \(\lambda\), and the integral would be a line integral along some trajectory of \(\lambda\).  It turns out that if you want to, you can define the integrand to be an effective vector potential called the Berry connection.  The curl of that vector potential is some effective magnetic field, called the Berry curvature.  Then the line integral above, if it's around some closed path in \(\lambda\), is equal to the flux of that effective magnetic field through the closed path, and the accumulated Berry phase around that closed path is then analogous to the Aharonov-Bohm phase.]
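
If you'd like to see the geometric phase emerge numerically, the classic example is a spin-1/2 whose field direction is dragged slowly around a cone; the Berry phase is minus half the solid angle enclosed.  Here's a minimal sketch using the discrete, gauge-invariant overlap-product form of the phase (the example and parameters are my own illustration):

```python
import numpy as np

# Berry phase of a spin-1/2 ground state as the field direction goes
# around a cone of half-angle theta.  Analytic answer: -pi*(1 - cos(theta)).

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta = np.deg2rad(60.0)
N = 2000
phis = np.linspace(0, 2 * np.pi, N, endpoint=False)

def ground_state(phi):
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    vals, vecs = np.linalg.eigh(-(n[0] * sx + n[1] * sy + n[2] * sz))
    return vecs[:, 0]   # ground state: spin aligned with the field

states = [ground_state(p) for p in phis]
loop = 1.0 + 0j
for i in range(N):
    loop *= np.vdot(states[i], states[(i + 1) % N])  # phases cancel in loop

print(f"numerical Berry phase: {-np.angle(loop):+.4f} rad")
print(f"analytic -pi(1-cos):   {-np.pi * (1 - np.cos(theta)):+.4f} rad")
```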

Why is any of this of interest in condensed matter?

Well, one approach to worrying about the electronic properties of conducting (crystalline) materials is to think about starting off some electronic wavepacket, initially centered around some particular Bloch state at an initial (crystal) momentum \(\mathbf{p} = \hbar \mathbf{k}\).  Then we let that wavepacket propagate around, following the rules of "semiclassical dynamics" - the idea that there is some Fermi velocity \((1/\hbar)\partial E(\mathbf{k})/\partial \mathbf{k}\) (related to how the wavepacket racks up phase as it propagates in space), and we basically write down \(\mathbf{F} = m\mathbf{a}\) using electric and magnetic fields.  Here, there is the usual phase that adds up from the wavepacket propagating in space (the Fermi velocity piece), but there can be an additional Berry phase which here comes from how the Bloch states actually vary throughout \(\mathbf{k}\)-space.  That can be written in terms of an "anomalous velocity" (anomalous because it's not from the usual Fermi velocity picture), and can lead to things like the anomalous Hall effect and a bunch of other measurable consequences, including topological fun.




Monday, July 23, 2018

Math, beauty, and condensed matter physics

There is a lot of discussion these days about the beauty of mathematics in physics, and whether some ideas about mathematical elegance have led the high energy theory community down the wrong path.  And yet, despite that, high energy theory still seems like a very popular professed interest of graduating physics majors.  This has led me to identify what I think is another sociological challenge to be overcome by condensed matter in the broader consciousness. 

Physics is all about using mathematics to model the world around us, and experiments are one way we find or constrain the mathematical rules that govern the universe and everything in it.  When we are taught math in school, we end up being strongly biased by the methods we learn, so that we are trained to like exact analytical solutions and feel uncomfortable with approximations.  You remember back when you took algebra, and you had to solve quadratic equations?  We were taught how to factor polynomials as one way of finding the solution, and somehow if the solution didn’t work out to \(x\) being an integer, something felt wrong – the problems we’d been solving up to that point had integer solutions, and it was tempting to label problems that didn’t fit that mold as not really nicely solvable.  Then you were taught the quadratic formula, with its square root, and you eventually came to peace with the idea of irrational numbers, and eventually imaginary numbers.  In more advanced high school algebra courses, students run across so-called transcendental equations, like \( (x-3)e^{x} + 3 = 0\).  There is no clean, algorithmic way to get an exact analytic solution to this.  Instead it has to be solved numerically, using some computational approach that can give you an approximate answer, good to as many digits as you care to grind through on your computer. 
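
(For concreteness, here is about the simplest possible numerical attack on that equation, a bisection search.  As it happens, \(x = 0\) solves it exactly; the bracket below chases the nontrivial root:)

```python
import math

# Bisection for (x - 3) e^x + 3 = 0 on a bracket where f changes sign.

def f(x):
    return (x - 3) * math.exp(x) + 3

lo, hi = 1.0, 2.99   # f(lo) < 0 < f(hi)
for _ in range(60):  # each pass halves the bracket
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

root = 0.5 * (lo + hi)
print(f"root ~ {root:.10f}")              # ~ 2.82
print(f"check: f(root) = {f(root):.2e}")  # should be ~ 0
```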

The same sort of thing happens again when we learn calculus.  When we are taught how to differentiate and integrate, we are taught the definitions of those operations (roughly speaking, slope of a function and area under a function, respectively) and algorithmic rules to apply to comparatively simple functions.  There are tricks, variable changes, and substitutions, but in the end, we are first taught how to solve problems “in closed form” (with solutions comprising functions that are common enough to have defined names, like \(\sin\) and \(\cos\) on the simple end, and more specialized examples like error functions and gamma functions on the more exotic side).  However, it turns out that there are many, many integrals that don’t have closed form solutions, and instead can only be solved approximately, through numerical methods.  The exact same situation arises in solving differential equations.  Legendre, Laguerre, and Hermite polynomials, Bessel and Hankel functions, and my all-time favorite, the confluent hypergeometric function, can crop up, but generically, if you want to solve a complicated boundary value problem, you probably need to use numerical methods rather than analytic solutions.  It can take years for people to become comfortable with the idea that numerical solutions have the same legitimacy as analytical solutions.

I think condensed matter suffers from a similar culturally acquired bias.  Somehow there is a subtext impression that high energy is clean and neat, with inherent mathematical elegance, thanks in part to (1) great marketing by high energy theorists, and (2) the fact that it deals with things that seem like they should be simple - fundamental particles and the vacuum.  At the same time, even high school chemistry students pick up pretty quickly that we actually can't solve many-electron quantum mechanics problems without a lot of approximations.  Condensed matter seems like it must be messy.  Our training, with its emphasis on exact analytic results, doesn't lay the groundwork for people to be receptive to condensed matter, even when it contains a lot of mathematical elegance and sometimes emergent exactitude.




Wednesday, July 18, 2018

Items of interest

While trying to write a few things (some for the blog, some not), I wanted to pass along some links of interest:

  • APS March Meeting interested parties:  The time to submit nominations for invited sessions for the Division of Condensed Matter Physics is now (deadline of August 24).  See here.  As a member-at-large for DCMP, I've been involved in the process now for a couple of years, and lots of high quality nominations are the best way to get a really good meeting.  Please take the time to nominate!
  • Similarly, now is the time to nominate people for DCMP offices (deadline of Sept. 1).
  • There is a new tool available called Scimeter that is a rather interesting add-on to the arxiv.  It has done some textual analysis of all the preprints on the arxiv, so you can construct a word cloud for an author (see at right for mine, which is surprisingly dominated by "field effect transistor" - I guess I use that phrase too often) or group of authors; or you can search for similar authors based on that same word cloud analysis.  Additionally, the tool uses that analysis to compare breadth of research topics spanned by an author's papers.  Apparently I am 0.3 standard deviations more broad than the mean broadness, whatever that means.  
  • Thanks to a colleague, I stumbled on Fermat's Library, a great site that stockpiles some truly interesting and foundational papers across many disciplines and allows shared commenting in the margins (hence the Fermat reference).  

Sunday, July 08, 2018

Physics in the kitchen: Frying tofu

I was going to title this post "On the emergence of spatial and temporal coherence in frying tofu", or "Frying tofu:  Time crystal?", but decided that simplicity has virtues.

I was doing some cooking yesterday, and I was frying some firm tofu in a large, deep skillet in my kitchen.  I'd cut the stuff into roughly 2 cm by 2 cm by 1 cm blocks, separated by a few mm from each other but mostly covering the whole cooking surface, and was frying them in a little oil (enough to coat the bottom of the skillet) when I noticed something striking, thanks to the oil reflecting the overhead light.  The bubbles forming in the oil under/around the tofu were appearing and popping in what looked to my eye like very regular intervals, at around 5 Hz.  Moreover (and this was the striking bit), the bubbles across a large part of the whole skillet seemed to be reasonably well synchronized.  This went on long enough (a couple of minutes, until I needed to flip the food) that I really should have gone to grab my camera, but I missed my chance to immortalize this on youtube because (a) I was cooking, and (b) I was trying to figure out if this was some optical illusion.

From the physics perspective, here was a driven nonequilibrium system (heated from below by a gas flame and conduction through the pan) that spontaneously picked out a frequency for temporal oscillations, and apparently synchronized the phase across the pan well.  Clearly I should have filmed this and called it a classical time crystal.   Would've been a cheap and tasty paper.  (I kid, I kid.)

What I think happened is this.  The bubbles in this case were produced by the moisture inside the tofu boiling into steam (due to the local temperature and heat flux) and escaping from the bottom (hottest) surface of the tofu into the oil to make bubbles.  There has to be some rate of steam formation set by the latent heat of vaporization for water, the heat flux (and thus thermal conductivity of the pan, oil, and tofu), and the local temperature (again involving the thermal conductivity and specific heat of the tofu).  The surface tension of the oil, its density, and the steam pressure figure into the bubble growth and how big the bubbles get before they pop.  I'm sure someone far more obsessive than I am could do serious dimensional analysis about this.  The bubbles then couple to each other via the surrounding fluid, and synched up because of that coupling (maybe like this example with flames).   This kind of self-organization happens all the time - here is a nice talk about this stuff.  This kind of synchronization is an example of universal, emergent physics.
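
For the curious, the flavor of that synchronization is captured by the standard Kuramoto model of coupled oscillators.  Here's a minimal sketch (emphatically my cartoon, not a model of actual tofu, with all parameters made up):

```python
import numpy as np

# Kuramoto model: N oscillators with scattered natural frequencies lock
# together once the coupling K exceeds a threshold set by the spread.

rng = np.random.default_rng(0)
N = 50
omega = 2 * np.pi * (5.0 + 0.2 * rng.standard_normal(N))  # ~5 Hz "bubbles"
theta = 2 * np.pi * rng.random(N)                         # random phases
K, dt = 4.0, 1e-3                                         # coupling, time step

for _ in range(20000):                     # 20 s of simulated time
    z = np.mean(np.exp(1j * theta))        # complex order parameter
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * r * np.sin(psi - theta))

print(f"order parameter r = {np.abs(np.mean(np.exp(1j * theta))):.3f}")
# r near 1: everyone pops in phase; r near 0: incoherent popping
```

Run it with K = 0 and the order parameter stays small, the unsynchronized pan of bubbles; crank K up and the phases pull together.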

Tuesday, July 03, 2018

A metal superconducting transistor (?!)

A paper was published yesterday in Nature Nanotechnology that is quite surprising, at least to me, and I thought I should point it out.

The authors make superconducting wires (e-beam evaporated Ti in the main text, Al in the supporting information) that appear to be reasonably "good metals" in the normal state.  [For the Ti case, for example, their electrical resistance is about 10 Ohms per square, very far from the "quantum of resistance" \(h/2e^{2}\approx 12.9~\mathrm{k}\Omega\).  This suggests that the metal is electrically pretty homogeneous (as opposed to being a bunch of loosely connected grains).  Similarly, the inferred resistivity of around 30 \(\mu\Omega\)-cm is comparable to expectations for bulk Ti (which is actually a bit surprising to me).]
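
(As a quick consistency check on those numbers: sheet resistance, resistivity, and thickness are tied together by \(R_{\square} = \rho/t\), so the quoted values imply a film a few tens of nm thick:)

```python
# Sheet resistance R_sq = rho / t  ->  implied film thickness t = rho / R_sq.

rho = 30e-8    # resistivity: 30 microohm-cm, expressed in ohm m
R_sq = 10.0    # sheet resistance, ohms per square
print(f"implied film thickness: {rho / R_sq * 1e9:.0f} nm")
```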

The really surprising thing is that the application of a large voltage between a back-gate (the underlying Si wafer, separated from the wire by 300 nm of SiO2) and the wire can suppress the superconductivity, dialing the critical current all the way down to zero.  This effect happens symmetrically with either polarity of bias voltage. 

This is potentially exciting because having some field-effect way to manipulate superconductivity could let you do very neat things with superconducting circuitry. 

The reason this is startling is that ordinarily field-effect modulation of metals has almost no effect.  In a typical metal, a dc electric field only penetrates a fraction of an atomic diameter into the material - the gas of mobile electrons in the metal has such a high density that it can shift itself by a fraction of a nanometer and self-consistently screen out that electric field. 

Here, the authors argue (in a model in the supplemental information that I need to read carefully) that the relevant physical scale for the gating of the superconductivity is, empirically, the London penetration depth, a much longer spatial scale (hundreds of nm in typical low temperature superconductors).    I need to think about whether this makes sense to me physically.

Sunday, July 01, 2018

Book review: The Secret Life of Science

I recently received a copy of The Secret Life of Science:  How It Really Works and Why It Matters, by Jeremy Baumberg of Cambridge University.  The book is meant to provide a look at the "science ecosystem", and it seems to be unique, at least in my experience.  From the perspective of a practitioner but with a wider eye, Prof. Baumberg tries to explain much of the modern scientific enterprise - what is modern science (with an emphasis on "simplifiers" [often reductionists] vs. "constructors" [closer to engineers, building new syntheses] - this is rather similar to Narayanamurti's take described here), who are the different stakeholders, publication as currency, scientific conferences, science publicizing and reporting, how funding decisions happen, career paths and competition, etc. 

I haven't seen anyone else try to spell out, for a non-scientist audience, how the scientific enterprise fits together from its many parts, and that alone makes this book important - it would be great if someone could get some policy-makers to read it.  I agree with many of the book's main observations:

  • The actual scientific enterprise is complicated (as pointed out repeatedly with one particular busy figure that recurs throughout the text), with a bunch of stakeholders, some cooperating, some competing, and we've arrived at the present situation through a complex, emergent history of market forces, not some global optimization of how best to allocate resources or how to choose topics. 
  • Scientific publishing is pretty bizarre, functioning to disseminate knowledge as well as a way of keeping score; peer review is annoying in many ways but serves a valuable purpose; for-profit publications can distort people's behaviors because of the prestige associated with some.
  • Conferences are also pretty weird, serving purposes (networking, researcher presentation training) that are not really what used to be the point (putting out and debating new results).
  • Science journalism is difficult, with far more science than can be covered, squeezed resources for real journalism, incentives for PR that can oversimplify or amp up claims and controversy, etc.
The book ends with some observations and suggestions from the author's perspective on changes that might improve the system, with a realistic recognition that big changes will be hard.   

It would be very interesting to get the perspective of someone in a very different scientific field (e.g., biochemistry) for their take on Prof. Baumberg's presentation.  My own research interests align closely with his, so it's hard for me to judge whether his point of view on some matters matches up well with other fields.  (I do wonder about some of the numbers that appear.  Has the number of scientists in France really grown by a factor of three since 1980?  And by a factor of five in Spain over that time?)

If you know someone who is interested in a solid take on the state of play in (largely academic) science in the West today, this is a very good place to start. 

Monday, June 25, 2018

Don't mince words, John Horgan. What do you really think?

In his review of Sabine Hossenfelder's new book for Scientific American, John Horgan begins by saying:
Does anyone who follows physics doubt it is in trouble? When I say physics, I don’t mean applied physics, material science or what Murray Gell-Mann called “squalid-state physics.” I mean physics at its grandest, the effort to figure out reality. Where did the universe come from? What is it made of? What laws govern its behavior? And how probable is the universe? Are we here through sheer luck, or was our existence somehow inevitable?
Wow.  Way to back-handedly imply that condensed matter physics is not grand or truly important.  The frustrating thing is that Horgan knows perfectly well that condensed matter physics has been the root of multiple profound ideas (Higgs mechanism, anyone?), as well as shaping basically all of the technology he used to write that review.   He goes out of his way here to make clear that he doesn't think any of that is really interesting.  Why do that as a rhetorical device?  

Sunday, June 24, 2018

There is no such thing as a rigid solid.

How's that for a provocative, click-bait headline?

More than any other branch of physics, condensed matter physics highlights universality, the idea that some properties crop up repeatedly, in many physical systems, independent of and despite the differences in the microscopic building blocks of the system.  One example that affects you pretty much all the time is the emergence of rigid solids from the microscopic building blocks that are atoms and molecules.  You may never think about it consciously, but mechanically rigid solids make up much of our environment - our buildings, our furniture, our roads, even ourselves.

A quartz crystal is an example of a rigid solid. By solid, I mean that the material maintains its own shape without confining walls, and by rigid, I mean that it “resists deformation”. Deforming the crystal – stretching it, squeezing it, bending it – involves trying to move some piece of the crystal relative to some other piece of the crystal. If you try to do this, it might flex a little bit, but the crystal pushes back on you. The ratio between the pressure (say) that you apply and the percentage change in the crystal’s size is called an elastic modulus, and it’s a measure of rigidity. Diamond has a big elastic modulus, as does steel. Rubber has a comparatively small elastic modulus – it’s squishier. Rigidity implies solidity. If a hunk of material has rigidity, it can withstand forces acting on it, like gravity.  (Note that I'm already assuming that atoms can't pass through each other, which turns out to be a macroscopic manifestation of quantum mechanics, even though people rarely think of it that way.  I've discussed this recently here.)
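
To put a number on that rigidity, here's a one-line estimate of the strain when a person stands on a small quartz post (the modulus is an order-of-magnitude textbook value):

```python
# Strain = stress / (Young's modulus), for illustrative numbers.

E = 7e10       # Young's modulus of quartz, Pa (order of magnitude)
F = 700.0      # weight of a ~70 kg person, N
A = 1e-4       # contact area, m^2 (1 cm x 1 cm)

strain = (F / A) / E
print(f"fractional compression: {strain:.1e}")   # ~1e-4: it barely budges
```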

Take away the walls of an aquarium, and the rectangular “block” of water in there can’t resist gravity and splooshes all over the table. In free fall as in the International Space Station, a blob of water will pull itself into a sphere, as it doesn’t have the rigidity to resist surface tension, the tendency of a material to minimize its surface area.

Rigidity is an emergent property. One silicon or oxygen atom isn’t rigid, but somehow, when you put enough of them together under the right conditions, you get a mechanically solid object. A glass, in contrast to a crystal, looks very different if you zoom in to the atomic scale. In the case of silicon dioxide, while the detailed bonding of each silicon to two oxygens looks similar to the case of quartz, there is no long-range pattern to how the atoms are arranged. Indeed, while it would be incredibly difficult to do experimentally, if you could take a snapshot of molten silica glass at the atomic scale, from the positions of the atoms alone, you wouldn’t be able to tell whether it was molten or solidified.   However, despite the structural similarities to a liquid, solid glass is mechanically rigid. In fact, some glasses are actually far more stiff than crystalline solids – metallic glasses are highly prized for exactly this property – despite having a microscopic structure that looks like a liquid. 

Somehow, these two systems (quartz and silica glass), with very different detailed structures, have very similar mechanical properties on large scales. Maybe this example isn't too convincing. After all, the basic building blocks in both of those materials are really the same. However, mechanical rigidity shows up all the time in materials with comparatively high densities. Water ice is rigid. The bumper on your car is rigid. The interior of a hard-boiled egg is rigid. Concrete is rigid. A block of wood is rigid. A vacuum-packed bag of ground espresso-roasted coffee is rigid. Somehow, mechanical rigidity is a common collective fate of many-particle systems. So where does it originate? What conditions are necessary to have rigidity?

Interestingly, this question remains one that is a subject of research.  Despite my click-bait headline, it sure looks like there are materials that are mechanically rigid.  However, it can be shown mathematically (!) that "equilibrium states of matter that break spontaneously translational invariance...flow if even an infinitesimal stress is applied".   That is, take some crystal or glass, where the constituent particles are sitting in well-defined locations (thus "breaking translational invariance"), and apply even a tiny bit of shear, and the material will flow.  It can be shown mathematically that the particles in the bulk of such a material can always rearrange a tiny amount that should end up propagating out to displace the surface of the material, which really is what we mean by "flow".   How do we reconcile this statement with what we see every day, for example that you touching your kitchen table really does not cause its surface to flow like a liquid?

Some of this is the kind of hair-splitting/no-true-Scotsman definitional stuff that shows up sometimes in theoretical physics.  A true equilibrium state would last forever.   To say that "equilibrium states of matter that break spontaneously translational invariance" are unstable under stress just means that the final, flowed rearrangement of atoms is energetically favored once stress is applied; it says nothing about how long it takes the system to get there.

We see other examples of this kind of thing in condensed matter and statistical physics.  It is possible to superheat liquid water above its boiling point.  Under those conditions, the gas phase is thermodynamically favored, but to get from the homogeneous liquid to the gas requires creating a blob of gas, with an accompanying liquid/gas interface that is energetically expensive.  The result is an "activation barrier".
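
For concreteness, here is a minimal classical-nucleation sketch of that barrier in Python, with rough, illustrative numbers for water superheated to about 110 °C at atmospheric pressure:

```python
import math

# Cost of a vapor bubble of radius r: surface term minus volume gain,
# dG(r) = 4 pi r^2 gamma - (4/3) pi r^3 dp.  Rough, illustrative numbers.
gamma = 0.057        # liquid/vapor surface tension near 110 C, N/m (approx.)
dp = 4.3e4           # vapor pressure excess over 1 atm ambient, Pa (approx.)
kT = 1.38e-23 * 383  # thermal energy at ~110 C, J

r_star = 2 * gamma / dp                          # critical bubble radius
barrier = 16 * math.pi * gamma**3 / (3 * dp**2)  # barrier height dG*

print(f"critical radius ~ {r_star * 1e6:.1f} microns")
print(f"barrier / kT ~ {barrier / kT:.1e}")
```

With a barrier of order \(10^{8}\) \(k_{\mathrm{B}}T\), the Boltzmann factor for spontaneously forming a critical bubble is essentially zero, which is why gently superheated water can sit in its thermodynamically disfavored liquid state until a scratch, a dust grain, or stronger superheating helps out.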

Turns out, that appears to be the right way to think about solids.  Solids appear rigid on any useful timescale only because the time required to create the necessary defects and reach the flowed state is very, very long.  A recent discussion of this is here, with some really good references, in a paper that appeared just this spring in the Proceedings of the National Academy of Sciences of the US.  An earlier work (a PRL) trying to quantify how this all works is here, if you're interested.

One could say that this is a bit silly - obviously we know empirically that there are rigid materials, and any analysis saying they don't exist has to be off the mark somehow.  However, in science, particularly physics, this kind of study, where observation and some fairly well-defined model seem to contradict each other, is precisely where we tend to gain a lot of insight.  (This is something we have to be better at explaining to non-scientists....)

Monday, June 18, 2018

Scientific American - what the heck is this?

Today, Scientific American ran this on their blogs page.  This article calls to mind weird mysticism stuff like crystal energy, homeopathy, and tree waves (a reference that attendees of mid-1990s APS meetings might get), and would not be out of place in Omni Magazine in about 1979.

I’ve written before about SciAm and their blogs.  My offer still stands, if they ever want a condensed matter/nano blog that I promise won’t verge into hype or pseudoscience.

Saturday, June 16, 2018

Water at the nanoscale

One reason the nanoscale is home to some interesting physics and chemistry is that the nanometer is a typical scale for molecules.   When the size of your system becomes comparable to the molecular scale, you can reasonably expect something to happen, in the sense that it should no longer be possible to ignore the fact that your system is actually built out of molecules.

Consider water as an example.  Water molecules have a finite size (on the order of 0.2 nm between the hydrogens), a definite bent shape, and a bit of an electric dipole moment (the oxygen has a slight excess of electron density and the hydrogens have a slight deficit).  In the liquid state, the water molecules are basically jostling around, with a typical intermolecular distance comparable to the size of the molecule.  If you confine water down to a nanoscale volume, you know at some point the finite size and interactions (steric and otherwise) between the water molecules have to matter.  For example, squeeze water down to a few molecular layers between solid boundaries, and it starts to act more like an elastic solid than a viscous fluid.  

Another consequence of this confinement in water can be seen in measurements of its dielectric properties - how charge inside rearranges itself in response to an external electric field.  In bulk liquid water, there are two components to the dielectric response.  The electronic clouds in the individual molecules can polarize a bit, and the molecules themselves (with their electric dipole moments) can reorient.  This latter contribution ends up being very important for dc electric fields, and as a result the dc relative dielectric permittivity of water, \(\kappa\), is about 80 (compared with 1 for the vacuum, and around 3.9 for SiO2).   At the nanoscale, however, the motion of the water molecules should be hindered, especially near a surface.  That should depress \(\kappa\) for nanoconfined water.

In a preprint on the arxiv this week, that is exactly what is found.  Using a clever design, the investigators confine water in nanoscale channels defined by a graphite floor, hexagonal boron nitride (hBN) walls, and a hBN roof.  A conductive atomic force microscope tip serves as a top electrode, the graphite as a bottom electrode, and the measured response is consistent with \(\kappa\) falling to roughly 2.1 for layers about 0.6-0.9 nm thick adjacent to the channel floor and ceiling.  The result is neat, and it should provide a very interesting test case for attempts to model these confinement effects computationally.
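
To get a feel for why thin low-\(\kappa\) layers matter so much, here is a sketch of a simple series-capacitor estimate in Python.  The layer thickness and \(\kappa\) values are illustrative assumptions, not the paper's fits:

```python
def kappa_effective(h_nm, t_nm=0.75, k_interface=2.1, k_bulk=80.0):
    """Effective dc dielectric constant of a water channel of height h_nm,
    modeled as a low-kappa layer of thickness t_nm at the floor and another
    at the ceiling, in series with bulk-like water in between:
    h / k_eff = 2 t / k_interface + (h - 2 t) / k_bulk."""
    h_bulk = max(h_nm - 2 * t_nm, 0.0)
    return h_nm / (2 * t_nm / k_interface + h_bulk / k_bulk)

for h in [1.5, 2.0, 5.0, 10.0, 50.0, 100.0]:
    print(f"channel height {h:5.1f} nm -> effective kappa ~ {kappa_effective(h):.1f}")
```

Because the layers add like capacitors in series, the interfacial water dominates the response of thin channels, and the bulk value of 80 is only recovered for channels much thicker than the interfacial layers.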

Friday, June 08, 2018

What are steric interactions?

When I first started reading chemistry papers, one piece of jargon jumped out at me:  "steric hindrance", which is an abstruse way of saying that you can't force pieces of molecules (atoms or groups of atoms) to pass through each other.  In physics jargon, they have a "hard core repulsion".  If you want to describe the potential energy of two atoms as you try to squeeze one into the volume of the other, you get a term that blows up very rapidly, like \(1/r^{12}\), where \(r\) is the distance between the nuclei.  Basically, you can do pretty well treating atoms like impenetrable spheres with diameters given by their outer electronic orbitals.  Indeed, Robert Hooke went so far as to infer, from the existence of faceted crystals, that matter is built from effectively impenetrable little spherical atoms.
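
Written out in the standard Lennard-Jones form (with rough argon-like parameters, purely for illustration), that hard-core term looks like this:

```python
# Lennard-Jones pair potential: a 1/r^12 hard-core repulsion plus a gentler
# 1/r^6 attraction.  Parameters are roughly those for argon (approximate).
epsilon = 0.0104  # well depth, eV
sigma = 0.34      # distance where the potential crosses zero, nm

def V_LJ(r_nm):
    """Pair potential in eV for internuclear distance r_nm in nm."""
    x = sigma / r_nm
    return 4 * epsilon * (x**12 - x**6)

for r in [0.30, 0.34, 0.38, 0.50, 0.80]:
    print(f"r = {r:.2f} nm: V = {V_LJ(r):+.4f} eV")
```

Push in from the potential minimum near 0.38 nm to 0.30 nm and the energy shoots up by an order of magnitude - that steep wall is the "hindrance".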

It's a common thing in popular treatments of physics to point out that atoms are "mostly empty space".  With hydrogen, for example, if you said that the proton was the size of a pea, then the 1s orbital (describing the spatial probability distribution for finding the point-like electron) would be around 250 m in radius.  So, if atoms are such big, puffy objects, then why can't two atoms overlap in real space?  It's not just the electrostatic repulsion, since each atom is overall neutral.

The answer is (once again) the Pauli exclusion principle (PEP) and the fact that electrons obey Fermi statistics.  Sometimes the PEP is stated in a mathematically formal way that can obscure its profound consequences.  For our purposes, the bottom line is:  It is apparently a fundamental property of the universe that you can't stick two identical fermions (including having the same spin) in the same quantum state.    At the risk of getting technical, "state" can mean a particular atomic orbital, or more generally it can be argued to mean the same little "cell" of volume \(h^{3}\) in r-p phase space.  It just can't happen.

If you try to force it, what happens instead?  In practice, to get two carbon atoms, say, to overlap in real space, you would have to make the electrons in one of the atoms leave their ordinary orbitals and make transitions to states with higher kinetic energies.  That energy has to come from somewhere - you have to do work and supply that energy to squeeze two atoms into the volume of one.  Books have been written about this.
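
A crude particle-in-a-box estimate gives the scale of that work; the real point is the \(1/L^{2}\) scaling of the confinement energy:

```python
# Ground-state kinetic energy of an electron confined to a box of side L:
# E ~ h^2 / (8 m L^2).  Halving L quadruples the energetic price.
h = 6.626e-34    # Planck's constant, J s
m_e = 9.109e-31  # electron mass, kg
eV = 1.602e-19   # joules per eV

for L_nm in [0.4, 0.2, 0.1]:
    L = L_nm * 1e-9
    E = h**2 / (8 * m_e * L**2)
    print(f"L = {L_nm} nm: confinement energy ~ {E / eV:.1f} eV")
```

Electron-volt-scale costs, for every electron you're trying to squeeze, add up very quickly compared with ordinary thermal and mechanical energy scales.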

Leaving aside for a moment the question of why rigid solids are rigid, it's pretty neat to realize that the physics principle that keeps you from falling through your chair or the floor is really the same principle that holds up white dwarf stars.

Thursday, May 31, 2018

Coming attractions and short items

Here are a few items of interest. 

I am planning to write a couple of posts about why solids are rigid, and in the course of thinking about this, I made a couple of discoveries:

  • When you google "why are solids rigid?", you find a large number of websites that all have exactly the same wording:  "Solids are rigid because the intermolecular forces of attraction that are present in solids are very strong. The constituent particles of solids cannot move from their positions they can only vibrate from their mean positions."  Note that this is (1) not correct, and (2) also not much of an answer.  It seems that the wording is popular because it's an answer that has appeared on the IIT entrance examinations in India.
  • I came across an absolutely wonderful paper by Victor Weisskopf, "Of Atoms, Mountains, and Stars:  A Study in Qualitative Physics", Science 187, 605-612 (1975).  Here is the only link I could find that might be reachable without a subscription.  It is a great example of "thinking like a physicist", showing how far one can get by starting from simple ideas and using order-of-magnitude estimates.  This seems like something that should be required reading of most undergrad physics majors, and more besides.
In politics-of-science news:

  • There is an amendment pending in the US Congress on the big annual defense bill that has the potential to penalize US researchers who have received any (presently not well-defined) resources from Chinese talent recruitment efforts.  (Russia, Iran, and North Korea are also mentioned, but they're irrelevant here, since they are not running such programs.)  The amendment would allow the DOD to deny these folks research funding.  The idea seems to be that such people are perceived by some as a risk in terms of taking DOD-relevant knowledge and giving China an economic or strategic benefit.  Many major US research universities have been encouraging closer ties with China and Chinese universities in the last 15 years.  Makes you wonder how many people would be affected.
  • The present US administration, according to AP, is apparently about to put in place (June 11?) new limitations on Chinese graduate student visas, for those working in STEM (and especially in fields mentioned explicitly in the Chinese government's big economic plan).   It would make relevant student visas one year in duration.  Given that the current visa renewal process can already barely keep up with the demand, it seems like this could become an enormous headache.  I could go on at length about why I think this is a bad idea.  Given that it's just AP that is reporting this so far, perhaps it won't happen or will be more narrowly construed.  We'll see.

Tuesday, May 29, 2018

What is tunneling?


I first learned about quantum tunneling from science fiction, specifically a short story by Larry Niven.  The idea is often tossed out there as one of those "quantum is weird and almost magical!" concepts.  It is surely far from our daily experience.

Imagine a car of mass \(m\) rolling along a road toward a small hill.  Let’s make the car and the road ideal – we’re not going to worry about friction or drag from the air or anything like that.   You know from everyday experience that the car will roll up the hill and slow down.  This ideal car’s total energy is conserved, and it has (conventionally) two pieces, the kinetic energy \(p^2/2m\) (where \(p\) is the momentum; here I’m leaving out the rotational contribution of the tires), and the gravitational potential energy, \(mgz\), where \(g\) is the gravitational acceleration and \(z\) is the height of the center of mass above some reference level.  As the car goes up, so does its potential energy, meaning its kinetic energy has to fall.  When the kinetic energy hits zero, the car stops momentarily before starting to roll backward down the hill.  The spot where the car stops is called a classical turning point.  Without some additional contribution to the energy, you won’t ever find the car on the other side of that hill, because the region beyond the turning point is “classically forbidden”.  We’d either have to sacrifice conservation of energy, or the car would have to have negative kinetic energy to exist in the forbidden region.  Since the kinetic piece is proportional to \(p^2\), to have negative kinetic energy would require \(p\) to be imaginary (!).
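
If you want numbers, the turning point follows from energy conservation alone.  A trivial sketch:

```python
# Classical turning point for the ideal car: (1/2) m v^2 = m g z, so
# z_turn = v^2 / (2 g), independent of the car's mass.
g = 9.8  # m/s^2

for v_kmh in [20, 50, 100]:
    v = v_kmh / 3.6              # km/h -> m/s
    z_turn = v**2 / (2 * g)
    print(f"{v_kmh:3d} km/h -> turning point {z_turn:.1f} m above the start")
```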

However, we know that the car is really a quantum object, built out of a huge number (more than \(10^{27}\)) of other quantum objects.  The spatial locations of quantum objects can be described with “wavefunctions”, and you need to know a couple of things about these to get a feel for tunneling.  For the ideal case of a free particle with a definite momentum, the wavefunction really looks like a wave with a wavelength \(h/p\), where \(h\) is Planck’s constant.  Because such a wave extends throughout all space, the probability of finding the ideal free particle anywhere is equal, in agreement with the oft-quoted uncertainty principle. 

Here’s the essential piece of physics:  In a classically forbidden region, the wavefunction decays exponentially with distance (mathematically equivalent to the wave having an imaginary wavelength), but it can’t change abruptly.  That means that if you solve the problem of a quantum particle incident on a finite (in energy and spatial size) barrier from one side, there is always some probability that the particle will be found on the far side of the classically forbidden region.  

This means that it’s technically possible for the car to “tunnel” through the hillside and end up on the downslope.  I would not recommend this as a transportation strategy, though, because that’s incredibly unlikely.  The more massive the particle, and the more forbidden the region (that is, the more negative the classical kinetic energy of the particle would have to be in the barrier), the faster the exponential decay of the probability of getting through.  For a 1000 kg car trying to tunnel through a 10 cm high speed bump 1 m long, the probability is around \(\exp(-2.7 \times 10^{37})\).  That kind of number is why quantum tunneling is not an obvious part of your daily existence.  For something much less massive, like an electron, the tunneling probability from, say, a metal tip to a metal surface decays by around a factor of \(e^2\) for every 0.1 nm of tip-surface separation.  It’s that exponential sensitivity to geometry that makes scanning tunneling microscopy possible.
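
Here is where those numbers come from, in a minimal WKB-style sketch that treats each barrier as rectangular - a crude assumption, but fine for order-of-magnitude purposes:

```python
import math

hbar = 1.055e-34  # J s

def wkb_exponent(mass_kg, barrier_J, width_m):
    """Exponent in T ~ exp(-2 kappa L) for a rectangular barrier,
    with kappa = sqrt(2 m (V - E)) / hbar."""
    kappa = math.sqrt(2 * mass_kg * barrier_J) / hbar
    return 2 * kappa * width_m

# 1000 kg car, 10 cm high bump (V - E ~ m g h ~ 980 J), 1 m long:
print(f"car:      2 kappa L ~ {wkb_exponent(1000, 1000 * 9.8 * 0.1, 1.0):.1e}")

# electron against a work-function-scale (~5 eV) barrier, 0.1 nm wide:
print(f"electron: 2 kappa L ~ {wkb_exponent(9.11e-31, 5 * 1.6e-19, 1e-10):.2f}")
```

The car's exponent comes out around \(2.7 \times 10^{37}\), while the electron's is about 2.3 per 0.1 nm - hence the factor-of-\(e^2\) rule of thumb for STM.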

However, quantum tunneling is very much a part of your life.  Protons can tunnel through the repulsion of their positive charges to bind to each other – that’s what powers the sun.  Electrons routinely tunnel in zillions of chemical reactions going on in your body right now, as well as in the photosynthesis process that drives most plant life. 

On a more technological note, tunneling is a key ingredient in the physics of flash memory.  Flash is based on field-effect transistors, and as I described the other day, transistors are switched on or off depending on the voltage applied to a gate electrode.  Flash storage uses transistors with a “floating gate”, a conductive island surrounded by insulating material, some kind of glassy oxide.  Charge can be parked on that gate or removed from it, and depending on the amount of charge there, the underlying transistor channel is either conductive or not.   How does charge get on or off the island?  By a flavor of tunneling called field emission.  The insulator around the floating gate functions as a potential energy barrier for electrons.  If a big electric field is applied via some other electrodes, the barrier’s shape is distorted, allowing electrons to tunnel through it efficiently.  This is a tricky aspect of flash design.  The barrier has to be high/thick enough that charge stuck on the floating gate can stay there a very long time - you wouldn’t want the bits in your SSD or your flash drive losing their status on the timescale of months, right? - but ideally tunable enough that the data can be rewritten quickly, with low error rates, at low voltages.
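
To get a feel for how sensitive this is, here is a rough triangular-barrier (Fowler-Nordheim-style) estimate of the WKB exponent.  The barrier height and fields are illustrative numbers, not a real device model:

```python
import math

# For a barrier of height phi tilted by an electric field F, the WKB
# exponent is (4/3) sqrt(2 m) phi^(3/2) / (hbar e F).  Illustrative only.
hbar, m_e, e = 1.055e-34, 9.11e-31, 1.602e-19
phi = 3.2 * e  # rough Si/SiO2-scale barrier height, J

def fn_exponent(F_V_per_m):
    return 4 * math.sqrt(2 * m_e) * phi**1.5 / (3 * hbar * e * F_V_per_m)

for F in [3e8, 5e8, 1e9]:  # V/m; 1e9 V/m is 10 MV/cm
    print(f"F = {F:.0e} V/m: WKB exponent ~ {fn_exponent(F):.0f}")
```

Backing the field off by about a factor of three takes the exponent from roughly 39 to roughly 130; that enormous swing in the tunneling probability is what lets the same oxide both retain charge for years at low fields and be written quickly at high ones.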

Monday, May 21, 2018

Physics around you: the field-effect transistor

While dark matter and quantum gravity routinely get enormous play in the media, you are surrounded every day by physics that enables near miraculous technology.  Paramount among these is the field-effect transistor (FET).   That wikipedia link is actually pretty good, btw.  While I've written before about specific issues regarding FETs (here, here, here), I hadn't said much about the general device.

The idea of the FET is to use a third electrode, a gate, to control the flow of current through a channel between two other electrodes, the source and drain.  The electric field from the gate controls the mobile charge in the channel - this is the field effect.   You can imagine doing this in vacuum, with a hot filament as a source of electrons, a second electrode (at a positive voltage relative to the source) to collect the electrons, and an intervening grid as the gate.  Implementing this in the solid state was proposed more than once (Lilienfeld, Heil) before it was done successfully. 
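
As a cartoon of how the gate does its job, here is the textbook long-channel ("square law") model of an n-type MOSFET.  The threshold voltage and prefactor below are made-up illustrative values:

```python
def drain_current(V_GS, V_DS, V_T=0.5, k=1e-3):
    """Drain current (A) in the textbook gradual-channel model.
    k stands in for mu * C_ox * W / L; V_T is the threshold voltage."""
    if V_GS <= V_T:
        return 0.0                        # off: no conducting channel
    V_ov = V_GS - V_T                     # gate overdrive
    if V_DS < V_ov:                       # linear (triode) regime
        return k * (V_ov * V_DS - V_DS**2 / 2)
    return 0.5 * k * V_ov**2              # saturation: channel pinched off

for V_GS in [0.3, 0.7, 1.0, 1.5]:
    print(f"V_GS = {V_GS:.1f} V: I_D = {drain_current(V_GS, 1.0) * 1e3:.3f} mA")
```

The point is just that a volt-scale change on the gate swings the channel between essentially zero current and milliamp-scale current - a switch.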

Where is the physics?  There is a ton of physics involved in how these systems actually work.  For example, it's all well and good to talk about "free" electrons moving around in solids in analogy to electrons flying in space in a vacuum tube, but it's far from obvious that you should be able to do this.   Solids are built out of atoms and are inherently quantum mechanical, with particular allowed energies and electronic states picked out by quantum mechanics and symmetries.  The fact that allowed electronic states in periodic solids ("Bloch waves") resemble "free" electron states (plane waves, in the quantum context) is very deep and comes from the underlying symmetry of the material.  [Note that you can have transistors even when the charge carriers should be treated as hopping from site to site - that's how many organic FETs work.]  It's the Pauli principle that allows us to worry only about the highest energy electronic states and not have to worry about, e.g., the electrons deep down in the ion cores of the atoms in the material.  Still, you do have to make sure there aren't a bunch of electronic states at energies where you don't want them - these are the traps and surface states that made FETs hard to get working.  The combo of the Pauli principle and electrostatic screening is why we can largely ignore the electron-electron repulsion in the materials, but still use the gate electrode's electric field to affect the channel.  FETs have also been great tools for learning new physics, as in the quantum Hall effect.

What's the big deal?  When you have a switch that is either open or closed, it's easy to realize that you can do binary-based computing with a bunch of them.  The integrated manufacturing of the FET has changed the world.  It's one of the few examples of a truly disruptive technology in the last 100 years.  The device you're using to read this probably contains several billion (!) transistors, and they pretty much all work, for years at a time.  FETs are the underlying technology for both regular and flash memory.  FETs are what drive the pixels in the flat panel display you're viewing.  Truly, they are so ubiquitous that they've become invisible.

Wednesday, May 16, 2018

"Active learning" or "research-based teaching" in upper level courses

This past spring Carl Wieman came to Rice's Center for Teaching Excellence to give us this talk about improving science pedagogy.  (This video shows a very similar talk given at UC Riverside.)  He is very passionate about this, and argues strongly that making teaching more of an active, inquiry-based or research-question-based experience is generally a big improvement over traditional lecture.  I've written previously that I think this is a complicated issue. 

Does anyone in my readership have experience applying this approach to upper-level courses?  For a specific question relevant to my own teaching, have any of you taught or taken a statistical physics course presented in this mode?  I gather that PHYS 403 at UBC and PHYS 170 at Stanford have been done this way.  I'd be interested in learning about how that was implemented and how it worked - please feel free to post in comments or email me.

(Now that the semester is over and some of my reviewing responsibilities are more under control, the frequency of posting should go back up.)

Wednesday, May 02, 2018

Short items

A couple of points of interest:
  • Bill Gates apparently turned down an offer from the Trump administration to be presidential science advisor.  It's unclear if this was a serious offer or an off-hand remark.   Either way it underscores what a trivialized and minimal role OSTP appears to be playing in the present administration.  It's a fact of modern existence that there are many intersections between public policy and the need for technical understanding of scientific issues (in the broad sense that includes engineering).   While an engaged and highly functional OSTP doesn't guarantee good policy (because science is only one of many factors that drive decision-making), the US is doing itself a disservice by running a skeleton crew in that office.  
  • Phil Anderson has posted a document (not a paper submitted for publication anywhere, but more of an essay) on the arxiv with the somber title, "Four last conjectures".  These concern: (1) the true ground state of solids made of atoms that are hard-core bosons, suggesting that at sufficiently low temperatures one could have "non-classical rotational inertia" - not exactly a supersolid, but similar in spirit; (2) a discussion of a liquid phase of (magnetic) vortices in superconductors in the context of heat transport; (3) an exposition of his take on high temperature superconductivity (the "hidden Fermi liquid"), where one can have non-Fermi-liquid scattering rates for the longitudinal resistivity, yet Fermi-liquid-like scattering rates in the Hall effect; and (4) a speculation about an alternative explanation (that, in my view, seems ill-conceived) for the accelerating expansion of the universe.   The document is vintage Anderson, and there's a melancholy subtext given that he's 94 years old and clearly conscious that he likely won't be with us much longer.
  • On a lighter note, a paper (link goes to publicly accessible version) came out a couple of weeks ago explaining how yarn works - that is, how the frictional interactions between a zillion constituent short fibers lead to thread acting like a mechanically robust object.  Here is a nice write-up.