On the road to discussing the Modern Theory of Polarization (e.g., pdf), it's necessary to talk about Berry phase - here, unlike many uses of the word on this blog, "phase" actually refers to a phase angle, as in a complex number \(e^{i\phi}\). The Berry phase, named for Michael Berry, is a so-called geometric phase, in that the value of the phase depends on the "space" itself and the trajectory the system takes. (For reference, the original paper is here (pdf), a nice talk about this is here, and reviews on how this shows up in electronic properties are here and here.)
A similar-in-spirit angle shows up in the problem of "parallel transport" (jargony wiki) along curved surfaces. Imagine taking a walk while holding an arrow, initially pointed east, say. You walk around some closed path, always keeping the arrow pointed in the local direction of east. On a flat surface, when you get back to your starting point, the arrow points in the same direction it did initially. On a curved (say spherical) surface, though, something different happens: when you get back to your starting point, the arrow has rotated from its initial position, despite the fact that you always kept it pointed in the local east direction. The angle of rotation is a geometric phase analogous to the Berry phase. The issue is that the local definition of "east" varies over the surface of the sphere. In more mathematical language, the basis vectors (the ones that point in the local cardinal directions) vary in space. If you want to keep track of how the arrow vector changes along the path, you have to account for both the change in the numerical components of the vector along each basis direction and the change in the basis vectors themselves. This kind of thing crops up in general relativity, where it is handled using Christoffel symbols.
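If you want to see this without any differential geometry, here is a minimal numerical sketch (just an illustration; the 60-degree path and the function names are my arbitrary choices): carry a tangent vector around a circle of constant latitude on a sphere, at each step projecting it back onto the local tangent plane, and compare the angle it ends up rotated by with the solid angle enclosed by the path.

```python
import numpy as np

def frame(theta, phi):
    """Outward normal and local east/north unit vectors on the unit sphere."""
    n = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
    east = np.array([-np.sin(phi), np.cos(phi), 0.0])
    north = np.cross(n, east)               # points toward the north pole
    return n, east, north

def transport_around_latitude(theta, n_steps=20000):
    """Parallel-transport a vector once around the circle of colatitude theta."""
    phis = np.linspace(0.0, 2*np.pi, n_steps + 1)
    _, east0, north0 = frame(theta, phis[0])
    v = east0.copy()                        # start out pointing local east
    for phi in phis[1:]:
        n, _, _ = frame(theta, phi)
        v -= np.dot(v, n) * n               # project onto the new tangent plane
        v /= np.linalg.norm(v)              # keep unit length
    # rotation of the returned vector, measured counterclockwise about the normal
    return np.arctan2(np.dot(v, north0), np.dot(v, east0)) % (2*np.pi)

theta = np.deg2rad(60.0)                    # colatitude of the walking path
angle = transport_around_latitude(theta)
solid_angle = 2*np.pi*(1 - np.cos(theta))   # solid angle enclosed by the path
# the two numbers converge to each other as the step size shrinks
print(f"rotation of the arrow : {angle:.2f} rad")
print(f"enclosed solid angle  : {solid_angle:.2f} rad")
```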
So what about the actual Berry phase? To deal with this with a minimum of math, it's best to use some of the language that Feynman employed in his popular book QED. The actual math is laid out here. In Feynman's language, we can picture the quantum mechanical phase associated with some quantum state as the hand of a stopwatch, winding around. For a state \(| \psi\rangle \) (an energy eigenstate, one of the "energy levels" of our system) with energy \(E\), we learn in quantum mechanics that the phase accumulates at a rate of \(E/\hbar\), so that the phase angle after some time \(t\) is given by \(\Delta \phi = Et/\hbar\). Now suppose we were able to mess about with our system, so that energy levels varied as a function of some tuning parameter \(\lambda\). For example, maybe we can dial around an externally applied electric field by applying a voltage to some capacitor plates. If we do this slowly (adiabatically), then the system always stays in its instantaneous version of that state with instantaneous energy \(E(\lambda)\). So, in the Feynman watch picture, sometimes the stopwatch is winding fast, sometimes it's winding slow, depending on the instantaneous value of \(E(\lambda)\). You might think that the phase that would be racked up would just be found by adding up the little contributions, \(\Delta \phi = \int (E(\lambda(t))/\hbar) dt\).
However, this misses something! In the parallel transport problem above, to get the right total answer about how the vector rotates globally we have to keep track of how the basis vectors vary along the path. Here, it turns out that we have to keep track of how the state itself, \(| \psi \rangle\), varies locally with \(\lambda\). To stretch the stopwatch analogy, imagine that the hand of the stopwatch can also gain or lose time along the way because the positioning of the numbers on the watch face (determined by \(| \psi \rangle \) ) is actually also varying along the path.
[Mathematically, that second contribution to the phase adds up to be \( i\int \langle \psi(\lambda)| \partial_{\lambda}| \psi(\lambda) \rangle d \lambda\); the bracket \(\langle \psi | \partial_{\lambda} \psi \rangle\) is purely imaginary for a normalized state, so the factor of \(i\) makes the phase real. Generally \(\lambda\) could be a vectorial thing with multiple components, so that \(\partial_{\lambda}\) would be a gradient operator with respect to \(\lambda\), and the integral would be a line integral along some trajectory of \(\lambda\). It turns out that if you want to, you can define the integrand to be an effective vector potential called the Berry connection. The curl of that vector potential is an effective magnetic field, called the Berry curvature. Then the line integral above, if it's taken around some closed path in \(\lambda\), equals the flux of that effective magnetic field through the surface bounded by the path, and the accumulated Berry phase around that closed path is analogous to the Aharonov-Bohm phase.]
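To make that bracketed math concrete, here is a minimal numerical sketch for the textbook case of a spin-1/2 magnetic moment, with the direction of the applied field playing the role of \(\lambda\). (This is my illustration of the standard discrete-overlap way of evaluating the Berry connection integral, not anything taken from the papers linked above.) Sweeping the field once around a cone should give a Berry phase of minus half the enclosed solid angle.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state(theta, phi):
    """Ground state of H = -B_hat . sigma, a moment in a field along (theta, phi)."""
    H = -(np.sin(theta)*np.cos(phi)*sx +
          np.sin(theta)*np.sin(phi)*sy +
          np.cos(theta)*sz)
    _, vecs = np.linalg.eigh(H)
    return vecs[:, 0]                       # eigh sorts eigenvalues ascending

def berry_phase(theta, n_steps=2000):
    """Berry phase accumulated as the field direction sweeps once around a cone."""
    phis = np.linspace(0.0, 2*np.pi, n_steps, endpoint=False)
    states = [ground_state(theta, phi) for phi in phis]
    states.append(states[0])                # close the loop in parameter space
    # gauge-invariant product of overlaps between neighboring states
    prod = 1.0 + 0.0j
    for a, b in zip(states[:-1], states[1:]):
        prod *= np.vdot(a, b)
    return -np.angle(prod)                  # Berry phase, defined mod 2*pi

theta = np.deg2rad(60.0)                    # opening angle of the cone
gamma = berry_phase(theta)
expected = -np.pi*(1 - np.cos(theta))       # -(1/2) x solid angle of the cone
print(f"numerical Berry phase : {gamma:+.3f} rad")
print(f"-(solid angle)/2      : {expected:+.3f} rad")
```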
Why is any of this of interest in condensed matter?
Well, one approach to worrying about the electronic properties of conducting (crystalline) materials is to think about starting off some electronic wavepacket, initially centered around some particular Bloch state at an initial (crystal) momentum \(\mathbf{p} = \hbar \mathbf{k}\). Then we let that wavepacket propagate around, following the rules of "semiclassical dynamics" - the idea that there is a group velocity \((1/\hbar)\partial E(\mathbf{k})/\partial \mathbf{k}\) (the Fermi velocity for states at the Fermi surface, related to how the wavepacket racks up phase as it propagates in space), and we basically write down \(\mathbf{F} = m\mathbf{a}\) using electric and magnetic fields. Here, there is the usual phase that adds up from the wavepacket propagating in space (the Fermi velocity piece), but there can be an additional Berry phase which here comes from how the Bloch states actually vary throughout \(\mathbf{k}\)-space. That can be written in terms of an "anomalous velocity" (anomalous because it's not from the usual Fermi velocity picture), and can lead to things like the anomalous Hall effect and a bunch of other measurable consequences, including topological fun.
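Here is a cartoon of those semiclassical equations of motion with the Berry-curvature term included, \(\dot{\mathbf{r}} = (1/\hbar)\partial E/\partial \mathbf{k} - \dot{\mathbf{k}} \times \mathbf{\Omega}(\mathbf{k})\) and \(\hbar \dot{\mathbf{k}} = -e\mathbf{E}\). A constant curvature stands in for whatever a real band structure would supply, and every parameter value below is made up; the point is just that pushing the wavepacket along \(x\) produces a transverse, Hall-like drift.

```python
import numpy as np

# Semiclassical wavepacket dynamics in 2D with a Berry curvature term
# (hbar = e = 1; all parameter values below are made up for illustration).
m = 1.0                   # effective mass for a quadratic band E(k) = k^2 / (2m)
omega_z = 0.2             # constant Berry curvature Omega(k) = omega_z * z_hat
E_field = np.array([0.05, 0.0])   # applied electric field along x

def group_velocity(k):
    return k / m          # (1/hbar) dE/dk for the quadratic band

dt, n_steps = 0.01, 5000
k = np.array([0.0, 0.0])  # start the wavepacket at the band minimum
r = np.array([0.0, 0.0])

for _ in range(n_steps):
    kdot = -E_field                               # hbar dk/dt = -e E
    # anomalous velocity: -kdot x Omega, which stays in the plane for Omega || z
    v_anom = -np.array([kdot[1]*omega_z, -kdot[0]*omega_z])
    r = r + (group_velocity(k) + v_anom) * dt     # dr/dt = v_group + v_anom
    k = k + kdot * dt

print(f"final position     : ({r[0]:+.3f}, {r[1]:+.3f})")
print(f"transverse drift y : {r[1]:+.3f}  (nonzero only because omega_z != 0)")
```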
Monday, July 23, 2018
Math, beauty, and condensed matter physics
There is a lot of discussion these days about the beauty of mathematics in physics, and whether some ideas about mathematical elegance have led the high energy theory community down the wrong path. Yet high energy theory still seems to be a very popular professed interest of graduating physics majors. This has led me to what I think is another sociological challenge condensed matter faces in the broader consciousness.
Physics is all about using mathematics to model the world around us, and experiments are one way we find or constrain the mathematical rules that govern the universe and everything in it. When we are taught math in school, we end up being strongly biased by the methods we learn, so that we are trained to like exact analytical solutions and feel uncomfortable with approximations. You remember back when you took algebra, and you had to solve quadratic equations? We were taught how to factor polynomials as one way of finding the solution, and somehow if the solution didn’t work out to x being an integer, something felt wrong – the problems we’d been solving up to that point had integer solutions, and it was tempting to label problems that didn’t fit that mold as not really nicely solvable. Then you were taught the quadratic formula, with its square root, and you eventually came to peace with the idea of irrational numbers, and eventually imaginary numbers. In more advanced high school algebra courses, students run across so-called transcendental equations, like \( (x-3)e^{x} + 3 = 0\). There is no clean, algorithmic way to get an exact analytic solution to this. Instead it has to be solved numerically, using some computational approach that can give you an approximate answer, good to as many digits as you care to grind through on your computer.
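Just to show what "solved numerically" looks like in practice, here is a minimal sketch (the bracketing intervals are my own choices; one of the two real roots happens to land at \(x = 0\), but the other, near \(x \approx 2.82\), really does have no elementary closed form):

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    """The transcendental function from the text: (x - 3) e^x + 3."""
    return (x - 3.0) * np.exp(x) + 3.0

# Bracket the sign changes and let a standard root finder grind out the digits.
root1 = brentq(f, -1.0, 1.0)   # this root happens to sit at x = 0
root2 = brentq(f, 2.0, 4.0)    # this one has no elementary closed form
print(f"roots: x = {root1:.12f} and x = {root2:.12f}")
print(f"check: f(root2) = {f(root2):.2e}")
```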
The same sort of thing happens again when we learn calculus. When we are taught how to differentiate and integrate, we are taught the definitions of those operations (roughly speaking, slope of a function and area under a function, respectively) and algorithmic rules to apply to comparatively simple functions. There are tricks, variable changes, and substitutions, but in the end, we are first taught how to solve problems “in closed form” (with solutions comprising functions that are common enough to have defined names, like \(\sin\) and \(\cos\) on the simple end, and more specialized examples like error functions and gamma functions on the more exotic side). However, it turns out that there are many, many integrals that don’t have closed form solutions, and instead can only be solved approximately, through numerical methods. The exact same situation arises in solving differential equations. Legendre, Laguerre, and Hermite polynomials, Bessel and Hankel functions, and my all-time favorite, the confluent hypergeometric function, can crop up, but generically, if you want to solve a complicated boundary value problem, you probably need to turn to numerical methods rather than analytic solutions. It can take years for people to become comfortable with the idea that numerical solutions have the same legitimacy as analytical solutions.
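As a small illustration of the point (mine, with arbitrary choices of test points and tolerances): treat Bessel's equation of order zero as just another differential equation, integrate it numerically, and compare with the library's named function \(J_{0}\). The agreement is the sense in which a numerical solution is every bit as legitimate as a "closed form" one.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv

# Bessel's equation of order zero:  y'' + y'/x + y = 0.
# Its solution J_0(x) is "special" only in the sense that it has earned a name;
# numerically it is no harder than any other well-behaved ODE.
def bessel_rhs(x, state):
    y, yp = state
    return [yp, -yp / x - y]

x0, x1 = 1e-6, 10.0
y0 = [1.0, -x0 / 2.0]            # series expansion of J_0 near x = 0
sol = solve_ivp(bessel_rhs, (x0, x1), y0, rtol=1e-10, atol=1e-12,
                dense_output=True)

x_test = np.array([1.0, 2.4048, 5.0, 10.0])   # 2.4048... is near the first zero
numeric = sol.sol(x_test)[0]
named = jv(0, x_test)                          # the "named" special function
for x, a, b in zip(x_test, numeric, named):
    print(f"x = {x:7.4f}   ODE solver: {a:+.6f}   J_0(x): {b:+.6f}")
```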
I think condensed matter suffers from a similar culturally acquired bias. Somehow there is an unspoken impression that high energy physics is clean and neat, with inherent mathematical elegance, thanks in part to (1) great marketing by high energy theorists, and (2) the fact that it deals with things that seem like they should be simple - fundamental particles and the vacuum. At the same time, even high school chemistry students pick up pretty quickly that we actually can't solve many-electron quantum mechanics problems without a lot of approximations. Condensed matter seems like it must be messy. Our training, with its emphasis on exact analytic results, doesn't lay the groundwork for people to be receptive to condensed matter, even when it contains a lot of mathematical elegance and sometimes emergent exactitude.
Wednesday, July 18, 2018
Items of interest
While trying to write a few things (some for the blog, some not), I wanted to pass along some links of interest:
- APS March Meeting interested parties: The time to submit nominations for invited sessions for the Division of Condensed Matter Physics is now (deadline of August 24). See here. As a member-at-large for DCMP, I've been involved in the process now for a couple of years, and lots of high quality nominations are the best way to get a really good meeting. Please take the time to nominate!
- Similarly, now is the time to nominate people for DCMP offices (deadline of Sept. 1).
- There is a new tool available called Scimeter that is a rather interesting add-on to the arxiv. It has done some textual analysis of all the preprints on the arxiv, so you can construct a word cloud for an author (mine is surprisingly dominated by "field effect transistor" - I guess I use that phrase too often) or group of authors; or you can search for similar authors based on that same word cloud analysis. Additionally, the tool uses that analysis to compare breadth of research topics spanned by an author's papers. Apparently I am 0.3 standard deviations more broad than the mean broadness, whatever that means.
- Thanks to a colleague, I stumbled on Fermat's Library, a great site that stockpiles some truly interesting and foundational papers across many disciplines and allows shared commenting in the margins (hence the Fermat reference).
Sunday, July 08, 2018
Physics in the kitchen: Frying tofu
I was going to title this post "On the emergence of spatial and temporal coherence in frying tofu", or "Frying tofu: Time crystal?", but decided that simplicity has virtues.
I was doing some cooking yesterday, and I was frying some firm tofu in a large, deep skillet in my kitchen. I'd cut the stuff into roughly 2 cm by 2 cm by 1 cm blocks, separated by a few mm from each other but mostly covering the whole cooking surface, and was frying them in a little oil (enough to coat the bottom of the skillet) when I noticed something striking, thanks to the oil reflecting the overhead light. The bubbles forming in the oil under/around the tofu were appearing and popping in what looked to my eye like very regular intervals, at around 5 Hz. Moreover (and this was the striking bit), the bubbles across a large part of the whole skillet seemed to be reasonably well synchronized. This went on long enough (a couple of minutes, until I needed to flip the food) that I really should have gone to grab my camera, but I missed my chance to immortalize this on youtube because (a) I was cooking, and (b) I was trying to figure out if this was some optical illusion.
From the physics perspective, here was a driven nonequilibrium system (heated from below by a gas flame and conduction through the pan) that spontaneously picked out a frequency for temporal oscillations, and apparently synchronized the phase across the pan well. Clearly I should have filmed this and called it a classical time crystal. Would've been a cheap and tasty paper. (I kid, I kid.)
What I think happened is this. The bubbles in this case were produced by the moisture inside the tofu boiling into steam (due to the local temperature and heat flux) and escaping from the bottom (hottest) surface of the tofu into the oil to make bubbles. There has to be some rate of steam formation set by the latent heat of vaporization for water, the heat flux (and thus thermal conductivity of the pan, oil, and tofu), and the local temperature (again involving the thermal conductivity and specific heat of the tofu). The surface tension of the oil, its density, and the steam pressure figure into the bubble growth and how big the bubbles get before they pop. I'm sure someone far more obsessive than I am could do serious dimensional analysis about this. The bubbles then couple to each other via the surrounding fluid, and synched up because of that coupling (maybe like this example with flames). This kind of self-organization happens all the time - here is a nice talk about this stuff. This kind of synchronization is an example of universal, emergent physics.
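For the curious, here is a minimal Kuramoto-style toy model of that kind of synchronization. To be clear, this is emphatically not a model of the actual steam-bubble hydrodynamics - every number in it is made up - but it shows how oscillators with slightly different natural frequencies near 5 Hz can phase-lock once they're coupled through a shared medium.

```python
import numpy as np

# A generic Kuramoto-style toy model of synchronization (not a model of the
# actual steam-bubble hydrodynamics; frequencies and coupling are made up).
rng = np.random.default_rng(0)
n = 50                                          # number of "bubble sites"
omega = 2*np.pi*5.0 + rng.normal(0.0, 1.0, n)   # natural frequencies near 5 Hz
K = 4.0                                         # coupling through the shared fluid
theta = rng.uniform(0, 2*np.pi, n)              # random initial phases

dt, n_steps = 1e-3, 20000
for step in range(n_steps):
    mean_field = np.mean(np.exp(1j*theta))      # complex order parameter
    r, psi = np.abs(mean_field), np.angle(mean_field)
    theta += (omega + K*r*np.sin(psi - theta)) * dt
    if step % 5000 == 0:
        print(f"t = {step*dt:5.2f} s   phase coherence r = {r:.3f}")

print(f"final phase coherence r = {np.abs(np.mean(np.exp(1j*theta))):.3f}")
```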
Tuesday, July 03, 2018
A metal superconducting transistor (?!)
A paper was published yesterday in Nature Nanotechnology that is quite surprising, at least to me, and I thought I should point it out.
The authors make superconducting wires (e-beam evaporated Ti in the main text, Al in the supporting information) that appear to be reasonably "good metals" in the normal state. [For the Ti case, for example, their sheet resistance is about 10 Ohms per square, very far from the "quantum of resistance" \(h/2e^{2}\approx 12.9~\mathrm{k}\Omega\). This suggests that the metal is electrically pretty homogeneous (as opposed to being a bunch of loosely connected grains). Similarly, the inferred resistivity of around 30 \(\mu\Omega\)-cm is comparable to expectations for bulk Ti (which is actually a bit surprising to me).]
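For anyone who wants to check the arithmetic in that bracketed aside, here is the conversion between sheet resistance and resistivity. The ~30 nm film thickness below is my assumption, chosen to be consistent with the quoted numbers rather than taken from the paper.

```python
# Back-of-the-envelope check of the numbers quoted above (the ~30 nm film
# thickness is my assumption for illustration, not a number from the paper).
h = 6.626e-34          # Planck constant, J*s
e = 1.602e-19          # electron charge, C

R_quantum = h / (2 * e**2)          # "quantum of resistance", ~12.9 kOhm
R_sheet = 10.0                      # quoted sheet resistance, Ohm per square
thickness = 30e-9                   # assumed film thickness, m

rho = R_sheet * thickness           # resistivity = R_sheet x thickness
print(f"h/2e^2      = {R_quantum/1e3:.1f} kOhm")
print(f"R_sheet     = {R_sheet:.0f} Ohm/sq  (far below the quantum of resistance)")
print(f"resistivity = {rho*1e8:.0f} microOhm-cm")   # 1 Ohm*m = 1e8 microOhm-cm
```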
The really surprising thing is that the application of a large voltage between a back-gate (the underlying Si wafer, separated from the wire by 300 nm of SiO2) and the wire can suppress the superconductivity, dialing the critical current all the way down to zero. This effect happens symmetrically with either polarity of bias voltage.
This is potentially exciting because having some field-effect way to manipulate superconductivity could let you do very neat things with superconducting circuitry.
The reason this is startling is that ordinarily field-effect modulation of metals has almost no effect. In a typical metal, a dc electric field only penetrates a fraction of an atomic diameter into the material - the gas of mobile electrons in the metal has such a high density that it can shift itself by a fraction of a nanometer and self-consistently screen out that electric field.
Here, the authors argue (in a model in the supplemental information that I need to read carefully) that the relevant physical scale for the gating of the superconductivity is, empirically, the London penetration depth, a much longer spatial scale (hundreds of nm in typical low temperature superconductors). I need to think about whether this makes sense to me physically.
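To put rough numbers on the contrast between those two length scales, here is a quick estimate using generic free-electron parameters (the carrier density is a made-up, typical-metal value, not anything from the paper; disordered superconducting films like these should have an even longer effective penetration depth than this clean-limit formula gives).

```python
import numpy as np

# Rough comparison of the two length scales in the argument above, using
# generic free-electron numbers (n is a made-up, typical-metal value).
e, m_e = 1.602e-19, 9.109e-31        # C, kg
eps0, mu0, hbar = 8.854e-12, 4e-7*np.pi, 1.055e-34

n = 5e28                             # carrier density, m^-3 (assumed)

# Thomas-Fermi screening length: how far a dc field penetrates a normal metal
E_F = hbar**2 * (3*np.pi**2*n)**(2/3) / (2*m_e)
g_EF = 3*n / (2*E_F)                 # free-electron density of states at E_F
lam_TF = np.sqrt(eps0 / (e**2 * g_EF))

# London penetration depth: the scale the authors argue is relevant here
lam_L = np.sqrt(m_e / (mu0 * n * e**2))

print(f"Thomas-Fermi screening length ~ {lam_TF*1e9:.3f} nm")
print(f"London penetration depth      ~ {lam_L*1e9:.1f} nm")
```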
Sunday, July 01, 2018
Book review: The Secret Life of Science
I recently received a copy of The Secret Life of Science: How It Really Works and Why It Matters, by Jeremy Baumberg of Cambridge University. The book is meant to provide a look at the "science ecosystem", and it seems to be unique, at least in my experience. From the perspective of a practitioner but with a wider eye, Prof. Baumberg tries to explain much of the modern scientific enterprise - what is modern science (with an emphasis on "simplifiers" [often reductionists] vs. "constructors" [closer to engineers, building new syntheses] - this is rather similar to Narayanamurti's take described here), who are the different stakeholders, publication as currency, scientific conferences, science publicizing and reporting, how funding decisions happen, career paths and competition, etc.
I haven't seen anyone else try to spell out, for a non-scientist audience, how the scientific enterprise fits together from its many parts, and that alone makes this book important - it would be great if someone could get some policy-makers to read it. I agree with many of the book's main observations:
- The actual scientific enterprise is complicated (as pointed out repeatedly with one particular busy figure that recurs throughout the text), with a bunch of stakeholders, some cooperating, some competing, and we've arrived at the present situation through a complex, emergent history of market forces, not some global optimization of how best to allocate resources or how to choose topics.
- Scientific publishing is pretty bizarre, functioning to disseminate knowledge as well as a way of keeping score; peer review is annoying in many ways but serves a valuable purpose; for-profit publications can distort people's behaviors because of the prestige associated with some.
- Conferences are also pretty weird, serving purposes (networking, researcher presentation training) that are not really what used to be the point (putting out and debating new results).
- Science journalism is difficult, with far more science than can be covered, squeezed resources for real journalism, incentives for PR that can oversimplify or amp up claims and controversy, etc.
It would be very interesting to get the perspective of someone in a very different scientific field (e.g., biochemistry) for their take on Prof. Baumberg's presentation. My own research interests align closely with his, so it's hard for me to judge whether his point of view on some matters matches up well with other fields. (I do wonder about some of the numbers that appear. Has the number of scientists in France really grown by a factor of three since 1980? And by a factor of five in Spain over that time?)
If you know someone who is interested in a solid take on the state of play in (largely academic) science in the West today, this is a very good place to start.