Monday, December 30, 2019
Crystals are fascinating. Somehow, for reasons that don't seem at all obvious at first glance, some materials grow in cool shapes as solids, with facets and obvious geometric symmetries. This was early support for the idea of atoms, and it's no wonder at all that people throughout history have looked upon obviously crystalline materials as amazing, possibly connected with magical powers.
In science fiction (or maybe more properly science fantasy), crystals show up repeatedly as having special properties, often able to control or direct energies that seem more appropriate for particle physics. In Star Trek, dilithium crystals are able to channel and control the flow of matter-antimatter reactions needed for warp drive, the superluminal propulsion system favored by the Federation and the Klingon Empire. In Star Wars, kyber crystals are at the heart of lightsabers, and were also heavily mined by the Empire for use in the planet-killing main weapon of the Death Star.
In real life, though, crystals don't do so well in interacting with very high energy electromagnetic or particle radiation. Yes, it is possible for crystals to scatter x-rays and high energy electrons - that's the way x-ray diffraction and electron diffraction work. On very rare occasions, crystals can lead to surprising nuclear processes, such as all the iron atoms in a crystal sharing the recoil when an excited iron nucleus spits out a gamma ray, as in the Mössbauer effect. Much more typically, though, crystals are damaged by high energy radiation - if the energy scale of the photon or other particle is much larger than the few-eV chemical energy scales that hold atoms in place (or the few tens of eV that bind core electrons), then the cool look and spatial arrangement of the atoms really doesn't matter, and atoms get kicked around. The result is the creation of vacancies or interstitial defects, some of which can even act as "color centers", so that otherwise colorless Al2O3, for example, can take on color after being exposed to ionizing radiation in a reactor.
Ahh well. Crystals are still amazing even if they can't propel starships faster than light.
(Happy new year to my readers! I'm still trying to be optimistic, even if it's not always easy.)
Sunday, December 22, 2019
Condensed matter and Christmas decorations - 'tis the season
Modern outdoor decorations owe quite a bit to modern science - polymers; metallurgy; electric power for the lighting, fans, sensors, and motors which make possible the motion-actuated inflatable Halloween decorations that scare my dog.... Condensed matter physics has, as in many areas of society, had a big impact on Christmas decorations that is so ubiquitous and pervasive that no one even thinks about it. In particular, I'm thinking about the light emitting diode and its relative, the diode laser. I'm pretty sure that Nick Holonyak and Shuji Nakamura never imagined that LEDs would pave the way for animated multicolor icicle decorations. Likewise, I suspect that the inventors discussed here (including Holonyak) never envisioned laser projected holiday lighting. So, the next time someone asks if any of this quantum stuff or basic research is useful, remember that these inherently quantum devices have changed the world in all kinds of ways that everyone sees but few observe.
Wednesday, December 18, 2019
Materials and neuromorphic computing
(In response to a topic suggestion from the Pizza Perusing Physicist....)
Neuromorphic computing is a trendy concept aimed at producing computing devices that are structured and operate like biological neural networks.
In standard digital computers, memory and logic are physically separated and handled by distinct devices, and both are (nearly always) based on binary states and highly regular connectivity. That is, logic gates take inputs that are two-valued (1 or 0), and produce outputs that are similarly two-valued; logic gates have no intrinsic memory of past operations that they've conducted; memory elements are also binary, with data stored as a 1 or 0; and everything is organized in a regular, immutable pattern - memory registers populated and read by clocked, sequential logic gates via a bus.
Natural neural networks, on the other hand, are very different. Each neuron can be connected to many others via synapses. Somehow memory and logic are performed by the same neuronal components. The topology of the connections varies with time - some connections are reinforced by repeated use, while others are demoted, in a continuous rather than binary way. Information traffic involves temporal trains of pulses called spikes.
All of these things can be emulated with standard digital computers. Deep learning methods do this, with nodes arranged in layers playing the roles of neurons, and weighted links between the nodes modeling the synaptic connections and their strengths. This is all a bit opaque and doesn't necessarily involve simulating the spiking dynamics at all. Implementing neural networks via standard hardware loses some of the perceived benefits of biological neural nets, like very good power efficiency.
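To make the spiking idea concrete, here is a minimal leaky integrate-and-fire (LIF) neuron - the standard textbook toy for spiking dynamics, sketched in Python with illustrative parameter values of my own choosing rather than numbers tied to any particular hardware or biology:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron. The membrane potential
# relaxes toward rest with time constant tau_m, is driven up by an input
# current, and emits a "spike" (then resets) on crossing threshold.
# All parameter values below are illustrative, not from a specific model.
def simulate_lif(i_input, dt=1e-4, tau_m=20e-3, r_m=1e7,
                 v_rest=-70e-3, v_thresh=-54e-3, v_reset=-80e-3):
    v = v_rest
    v_trace, spike_times = [], []
    for step, i_in in enumerate(i_input):
        dv = (-(v - v_rest) + r_m * i_in) / tau_m   # leaky integration
        v += dv * dt
        if v >= v_thresh:                           # threshold crossing
            spike_times.append(step * dt)           # record the spike...
            v = v_reset                             # ...and reset
        v_trace.append(v)
    return np.array(v_trace), spike_times

# A constant 2 nA drive for 0.2 s yields a regular train of spikes:
_, spikes = simulate_lif(np.full(2000, 2e-9))
print(f"{len(spikes)} spikes in 0.2 s")
```

The point of the toy: the information lives in the timing of the pulses, not in a static 1-or-0 level, which is exactly the dynamical ingredient that conventional clocked logic doesn't have natively.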
In the last few years, as machine learning and big data have become increasingly important, there has been a push to implement, in actual device hardware, architectures that look a lot more like the biological analogs. To do this, you might want nonvolatile memory elements that can also be used for logic, and can have continuously graded values of "on"-ness determined by their history. Resistive switching memory elements, sometimes called memristors (though that is a loaded term - see here and here), can fit the bill, as in this example. Many systems can act as resistive switches, with conduction changes often set by voltage-driven migration of ions or vacancies in the material.
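As a cartoon of that history-dependent, continuously graded "on"-ness, here is a made-up toy resistive switch in Python - a stand-in for the qualitative behavior (conductance that drifts with voltage history and persists), not the actual physics of any published memristor:

```python
import numpy as np

# Toy resistive-switching element: conductance g drifts with the applied
# voltage history, bounded between g_min and g_max. This is a cartoon of
# voltage-driven ion/vacancy migration; all numbers are illustrative.
class ResistiveSwitch:
    def __init__(self, g_min=1e-6, g_max=1e-3, g0=1e-5, rate=0.1):
        self.g_min, self.g_max, self.rate = g_min, g_max, rate
        self.g = g0                      # initial conductance (siemens)

    def step(self, v, dt):
        # Positive bias nudges the conductance up, negative bias down;
        # the state depends on the whole drive history -> nonvolatile memory.
        self.g += self.rate * v * dt * (self.g_max - self.g_min)
        self.g = float(np.clip(self.g, self.g_min, self.g_max))
        return self.g * v                # current through the element

sw = ResistiveSwitch()
for _ in range(1000):                    # "potentiate" with positive pulses
    sw.step(v=1.0, dt=1e-3)
print(f"conductance after training: {sw.g:.2e} S")  # graded, retained state
```

The same two-terminal element can then be read out nondestructively at small bias, which is the sense in which memory and (analog) logic can live in one device.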
On top of this, there has been a lot of interest in using strongly correlated materials in such applications. There are multiple examples of correlated materials (typically transition metal oxides) that undergo dramatic metal-insulator transitions as a function of temperature. These materials then offer a chance to emulate spiking - driving a current can switch such a material from the insulating to the metallic state via local Joule heating or more nontrivial mechanisms, and then revert to the insulating state. See the extensive discussion here.
Really implementing all of this at scale is not simple. The human brain involves something like 100,000,000,000 neurons, and connections run in three dimensions. Getting large numbers of effective solid-state neurons with high connectivity via traditional 2D planar semiconductor-style fab (basically necessary if one wants to have many millions of neurons) is not easy, particularly if it requires adapting processing techniques to accommodate new classes of materials.
If you're interested in this and how materials physics can play a role, check out this DOE report and this recent review article.
Sunday, December 08, 2019
Brief items
Here are some tidbits that came across my eyeballs this past week:
- I just ran into this article from early in 2019. It touches on my discussion about liquids, and is a great example of a recurring theme in condensed matter physics. The authors look at the vibrational excitations of liquid droplets on surfaces. As happens over and over in physics, the imposition of boundary conditions on the liquid motion (e.g., wetting conditions on the surface and approximately incompressible liquid with a certain surface tension) leads to quantization of the allowed vibrations. Discrete frequencies/mode shapes/energies are picked out due to those constraints, leading to a "periodic table" of droplet vibrations. (This one looks moderately like atomic states, because spherical harmonics show up in the mode description, as they do when looking at atomic orbitals. For the free-droplet version of this quantization, see the sketch after this list.)
- Another article from the past, this one from 2014 in the IEEE Spectrum. It talks about how we arrived at the modern form for Maxwell's equations. Definitely a good read for those interested in the history of physics. Maxwell's theory was developing in parallel with what became vector calculus, and Maxwell's original description (like Faraday's intuition) was very mechanistic rather than abstract.
- Along those lines, this preprint came out recently promoting a graphical pedagogical approach to vector calculus. The spirit at work here is that Feynman's graphical diagrammatic methods were a great way to teach people perturbative quantum field theory, and so perhaps a diagrammatic scheme for vector calc could be good. I'm a bit of a skeptic - I found the approach by Purcell to be very physical and intuitive, and this doesn't look simpler to me.
- This preprint about twisted bilayer graphene and the relationship between superconductivity and strongly insulating states caught my eye, and I need to read it carefully. The short version: While phase diagrams showing superconductivity and insulating states as a function of carrier density make it tempting to think that SC evolves out of the insulating states via doping (as likely in the cuprates), the situation may be more complicated.
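Following up on the droplet item above: for the textbook case of a free, inviscid droplet of radius \(R\), density \(\rho\), and surface tension \(\gamma\) (Rayleigh's classic result - the sessile droplets in the paper modify this through the wetting boundary condition), the allowed mode frequencies are

\[ \omega_{l}^{2} = \frac{l(l-1)(l+2)\,\gamma}{\rho R^{3}}, \qquad l = 2, 3, 4, \ldots \]

with mode shapes set by the spherical harmonics \(Y_{lm}\) - a discrete spectrum picked out by the boundary conditions, exactly as described above.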
Saturday, November 30, 2019
What is a liquid?
I wrote recently about phases of matter (and longer ago here). The phase that tends to get short shrift in the physics curriculum is the liquid, and this is actually a symptom indicating that liquids are not simple things.
We talk a lot about gases, and they tend to be simple in large part because they are low density systems - the constituents spend the overwhelming majority of their time far apart (compared to the size of the constituents), and therefore tend to interact with each other only very weakly. We can even consider the ideal limit of infinitesimal particle size and zero interactions, so that the only energy in the problem is the kinetic energy of the particles, and derive the Ideal Gas Law.
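The kinetic theory argument is short enough to sketch. Particles bouncing elastically off the container walls transfer momentum to them, and averaging over an isotropic velocity distribution gives

\[ P V = \tfrac{1}{3} N m \langle v^{2} \rangle. \]

Since temperature is defined so that \(\tfrac{1}{2} m \langle v^{2} \rangle = \tfrac{3}{2} k_{\mathrm{B}} T\) for point particles, this becomes \(P V = N k_{\mathrm{B}} T\), the Ideal Gas Law.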
There is no such thing as an Ideal Liquid Law. That tells you something about the complexity of these systems right there.
A classical liquid is a phase of matter in which the constituent particles have a typical interparticle distance comparable to the particle size, and therefore interact strongly, with both a "hard core" repulsion that makes the particles basically impenetrable, and usually some kind of short-ranged attraction, from van der Waals forces and/or longer-ranged, stronger interactions. The kinetic energy of the particles is sufficiently large that they don't bond rigidly to each other and therefore move past and around each other continuously. However, the density is so high that you can't even do very well by only worrying about pairs of interacting particles - you have to keep track of three-body, four-body, etc. interactions somehow.
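A standard cartoon of such a pair interaction - a common modeling choice, not a statement about any particular liquid - is the Lennard-Jones potential,

\[ U(r) = 4\epsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right], \]

where the steep \(r^{-12}\) term plays the role of the hard-core repulsion and the \(r^{-6}\) term is the van der Waals attraction. Even with a pair potential this simple, the dense many-body problem has no closed-form solution - hence no Ideal Liquid Law.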
The very complexity of these strongly interacting collections of particles leads to the emergence of some simplicity at larger scales. Because the particles are cheek-by-jowl and impenetrable, liquids are about as incompressible as solids. The lack of tight bonding and enough kinetic energy to keep everyone moving means that, on average and on scales large compared to the particle size, liquids are homogeneous (uniform properties in space) and isotropic (uniform properties in all directions). When pushed up against solid walls by gravity or other forces, liquids take on the shapes of their containers. (If the typical kinetic energy per particle can't overcome the steric interactions with the local environment, then particles can get jammed. Jammed systems act like "rigid" solids.)
Because of the constant interparticle collisions, energy and momentum get passed along readily within liquids, leading to good thermal conduction (the transport of kinetic energy of the particles via microscopic, untraceable amounts we call heat) and viscosity (the transfer of transverse momentum between adjacent layers of particles just due to collisions - the fluid analog of friction). The lack of rigid bonding interactions means that liquids can't resist shear; layers of particles slide past each other. This means that liquids, like gases, don't have transverse sound waves. The flow of particles is best described by hydrodynamics, a continuum approach that makes sense on scales much bigger than the particles.
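For the record, that hydrodynamic description for an incompressible liquid (ignoring body forces like gravity) is the Navier-Stokes equation,

\[ \rho \left( \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} \right) = -\nabla P + \eta \nabla^{2} \mathbf{v}, \]

where \(\rho\) is the mass density, \(P\) the pressure, and \(\eta\) the viscosity - all of those microscopic collisions bundled into a couple of emergent continuum parameters.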
Quantum liquids are those for which the quantum statistics of the constituents are important to the macroscopic properties. Liquid helium is one such example. Physicists have also adopted the term "liquid" to mean any strongly interacting, comparatively incompressible, flow-able system, such as the electrons in a metal ("Fermi liquid").
Liquids are another example of emergence that is deep, profound, and so ubiquitous that people tend to look right past it. "Liquidity" is a set of properties so well-defined that a small child can tell you whether something is a liquid by looking at a video of it; those properties emerge largely independent of the microscopic details of the constituents and their interactions (water molecules with hydrogen bonds; octane molecules with van der Waals attraction; very hot silica molecules in flowing lava); and none of those properties are obvious if one starts with, say, the Standard Model of particle physics.
Monday, November 25, 2019
General relativity (!) and band structure
Today we had a seminar at Rice by Qian Niu of the University of Texas, and it was a really nice, pedagogical look at this paper (arxiv version here). Here's the basic idea.
As I wrote about here, in a crystalline solid the periodic lattice means that single-particle electronic states look like Bloch waves, labeled by some wavevector \(\mathbf{k}\), of the form \(u_{\mathbf{k}}(\mathbf{r}) \exp(i \mathbf{k}\cdot \mathbf{r})\) where \(u_{\mathbf{k}}\) is periodic in space like the lattice. It is possible to write down semiclassical equations of motion of some wavepacket that starts centered around some spatial position \(\mathbf{r}\) and some (crystal) momentum \(\hbar \mathbf{k}\). These equations tell you that the momentum of the wavepacket changes with time due to the external forces (looking a lot like the Lorentz force law), and the position of the wavepacket has a group velocity, plus an additional "anomalous" velocity related to the Berry phase (which has to do with the variation of \(u_{\mathbf{k}}\) over the allowed values of \(\mathbf{k}\)).
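Schematically, for a single band with energy \(E(\mathbf{k})\) and Berry curvature \(\mathbf{\Omega}(\mathbf{k})\), those equations of motion read

\[ \dot{\mathbf{r}} = \frac{1}{\hbar} \frac{\partial E(\mathbf{k})}{\partial \mathbf{k}} - \dot{\mathbf{k}} \times \mathbf{\Omega}(\mathbf{k}), \qquad \hbar \dot{\mathbf{k}} = -e \left( \mathbf{E} + \dot{\mathbf{r}} \times \mathbf{B} \right), \]

where the first term in \(\dot{\mathbf{r}}\) is the group velocity and the second is the anomalous velocity mentioned above.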
The paper asks the question, what are the semiclassical equations of motion for a wavepacket if the lattice is actually distorted a bit as a function of position in real space. That is, imagine a strain gradient, or some lattice deformation. In that case, the wavepacket can propagate through regions where the lattice is varying spatially on very long scales while still being basically periodic on shorter scales still long compared to the Fermi wavelength.
It turns out that the right way to tackle this is with the tools of differential geometry, the same tools used in general relativity. In GR, when worrying about how the coordinates of a particle change as it moves along, there is the ordinary velocity, and then there are other changes in the components of the velocity vector because the actual geometry of spacetime (the coordinate system) is varying with position. You need to describe this with a "covariant derivative", and that involves Christoffel symbols. In this way, gravity isn't a force - it's freely falling particles propagating as "straight" as they can, but the actual geometry of spacetime makes their trajectory look curved based on our choice of coordinates.
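In equations: the total rate of change of a vector \(v^{\mu}\) carried along a path \(x(t)\) picks up a geometric piece,

\[ \frac{D v^{\mu}}{D t} = \frac{d v^{\mu}}{d t} + \Gamma^{\mu}_{\ \alpha\beta}\, v^{\alpha} \frac{d x^{\beta}}{d t}, \]

where the Christoffel symbols \(\Gamma^{\mu}_{\ \alpha\beta}\) encode how the coordinate system itself stretches and twists from point to point. Setting the covariant derivative of the velocity to zero gives the geodesic equation - the "straightest possible" path.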
For the semiclassical motion problem in a distorted lattice, something similar happens. You have to worry about how the wavepacket evolves both because of the local equations of motion, and because the wavepacket is propagating into a new region of the lattice where the \(u_{\mathbf{k}}\) functions are different because the actual lattice is different (and that also affects the Berry phase anomalous velocity piece). Local rotations of the lattice can lead to an effective Coriolis force on the wavepacket; local strain gradients can lead to effective accelerations of the wavepacket.
(For more fun, you can have temporal periodicity as well. That means you don't just have Bloch functions in 3d, you have Bloch-Floquet functions in 3+1d, and that's where I fell behind.)
Bottom line: The math of general relativity is an elegant way to look at semiclassical carrier dynamics in real materials. I knew that undergrad GR course would come in handy....
Friday, November 22, 2019
Recent results on the arxiv
Here are a few interesting results I stumbled upon recently:
- This preprint has become this Science paper that was published this week. The authors take a cuprate superconductor (YBCO) and use reactive ion etching to pattern an array of holes in the film. Depending on how long they etch, they can kill global superconductivity but leave the system such that it still behaves as a funny kind of metal (resistance decreasing with decreasing temperature), with some residual resistance at low temperatures. The Hall effect in this metallic state produces no signal - a sign that there is a balance between particle-like and hole-like carriers (particle-hole symmetry). For magnetic field perpendicular to the film, they also see magnetoresistance with features periodic in flux through one cell of the pattern, with a periodicity that indicates the charge carriers have charge 2e. This is an example of a "Bose metal". Neat! (The question about whether there are pairs without superconductivity touches on our own recent work.) The flux arithmetic behind the charge-2e assignment is spelled out after this list.
- This preprint was recently revised (and thus caught my eye in the arxiv updates). In it, the authors are using machine learning to try to find new superconductors. The results seem encouraging. I do wonder if one could do a more physics-motivated machine learning approach (that is, something with an internal structure to the classification scheme and the actual weighting procedure) to look at this and other related problems (like identifying which compounds might be growable via which growth techniques).
- This preprint is not a condensed matter topic, but has gotten a lot of attention. The authors look at a particular nuclear transition in 4He, and find a peculiar angular distribution for the electron-positron pairs that come out. The reason this is of particular interest is that this paper by the same investigators looking at a nuclear transition in 8Be three years ago found something very similar. If one assumes that there is a previously unobserved boson (a dark matter candidate perhaps) of some sort with a mass of around 17 MeV that couples in there, that could explain both results. Intriguing, but it would be great if these observations were confirmed independently by a different group.
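Back to the first item: the charge-2e arithmetic is worth spelling out. The magnetoresistance is periodic in the superconducting flux quantum \(\Phi_{0} = h/2e \approx 2.07 \times 10^{-15}\) Wb threading one unit cell of the hole array, so the field period is \(\Delta B = \Phi_{0}/A\), with \(A\) the cell area. For an illustrative (made-up, not the paper's actual) 100 nm \(\times\) 100 nm cell, \(\Delta B \approx 2.07 \times 10^{-15}~\mathrm{Wb} / 10^{-14}~\mathrm{m}^{2} \approx 0.2\) T. Seeing that period, rather than the \(h/e\) period (twice as long in field), is the fingerprint of transport by pairs.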
Tuesday, November 12, 2019
Advice on proposal writing
Many many people have written about how to write scientific grant proposals, and much of that advice is already online. Rather than duplicate that work, and recognizing that sometimes different people need to hear advice in particular language, I want to link to some examples.
- Here (pdf) is some advice straight from the National Science Foundation about how to write a compelling proposal. It's older (2004) and a bit out of date, but the main points are foundational.
- This is a very good collection of advice that has been updated (2015) to reflect current practice about NSF.
- Here are lecture notes from a course at Illinois that touched on this as well, generalizing beyond the NSF.
This last one has taken a pre-eminent position of importance because it's something that can be readily counted and measured. There is a rough rule that many program officers in NSF and DOE will tell you; averaging over their programs, they get roughly one high impact paper per $100K total cost. They would like more, of course.
Talk with program officers before writing and submitting - know the audience. Program officers (including those at foundations) tend to take real pride in their portfolios. Everyone likes funding successful, high-impact, exciting, trend-setting work. Still, particular program officers have areas of emphasis, in part so that there is not duplication of effort or support within an agency or across agencies. (This is especially true in areas like high energy theory, where if you've got DOE funding, you essentially can't get NSF support, and vice versa.) You will be wasting your time if you submit to the wrong program or pitch your idea to the wrong reviewing audience. NSF takes a strong line that their research directions are broadly set by the researchers themselves, via their deep peer review process (mail-in reviews, in-person or virtual panel discussions) and workshops that define programmatic goals. DOE likewise has workshops to help define major challenges and open questions, though my sense is that the department takes a more active role in delineating priorities. The DOD is more goal-directed, with program officers having a great deal of sway on topics of interest, and the prospect that such research may transition closer to technology-readiness. Foundations are idiosyncratic, but a common refrain is that they prefer to fund topics that are not already supported by federal agencies.
Think it through, and think like a referee. When coming up with an idea, do your best to consider in some detail how you would actually pull this off. How could you tell if it works? What would the implications be of success? What are the likely challenges and barriers? If some step doesn't go as planned, is it a show-stopper, or are there other ways to go? As an experimentalist: Do you have the tools you need to do this? How big a signal are you trying to detect? Remember, referees are frequently asked to evaluate strengths and weaknesses of technical approach. Better to have this in mind while at an early stage of the process.
Clearly state the problem, and explain the proposal's organization. Reviewers might be asked to read several proposals in a short timeframe. It seems like a good idea to say up front, in brief (like in a page or so): What is the problem? What are the open scientific/engineering questions you are specifically addressing? What is your technical approach? What will the results mean? Then, explain the organization of the proposal (e.g., section 2 gives a more detailed introduction to the problem and open questions; section 3 explains the technical approach, including a timeline of proposed work; etc.). This lets readers know where to find things.
I'll confess: I got this organizational approach by emulating the structure of an excellent proposal that I reviewed a number of years ago. It was really terrific - clear; pedagogical, so that a non-expert in that precise area could understand the issues and ideas; very cleanly written; easy-to-read figures, including diagrams that really showed how the ideas would work. Reviewing proposals is very helpful in improving your own. Very quickly you will get a sense of what you think makes a good or bad proposal. NSF is probably the most open to getting new investigators involved in the reviewing process.
Don't wait until the last minute. You know that classmate of yours from undergrad days, the one who used to brag about how they waited until the night before to blitz through a 20 page writing assignment? Amazingly, some of these people end up as successful academics. I genuinely don't know how they do it, because these days research funding is so competitive and proposals are detailed and complicated. There are many little formatting details that agencies enforce now. You don't want to get to an hour before the deadline and realize that all of your bibliographic references are missing a URL field. People really do read sections like data management plans and postdoctoral mentoring plans - you can't half-ass them. Also, while it is unlikely to sink a really good proposal, it definitely comes across badly to referees if there are missing or mislabeled references, figures, etc.
I could write more, and probably will amend this down the line, but work calls and this is at least a start.
Thursday, November 07, 2019
Rice Academy of Fellows 2020
As I had posted a year ago: Rice has a university-wide competitive postdoctoral fellow program known as the Rice Academy of Fellows. Like all such things, it's very competitive. The new application listing has gone live here with a deadline of January 3, 2020. Applicants have to have a faculty mentor, so in case someone is interested in working with me on this, please contact me via email. We've got some fun, exciting stuff going on!
Friday, November 01, 2019
Sorry for the hiatus
My apologies for the unusually long hiatus in posts. Proposal deadlines + department chair obligations + multiple papers in process made the end of October very challenging. Later next week I expect to pick up again. Suggested topics (in the comments?) are always appreciated. I realize I've never written an advice-on-grant-proposal-writing post. On the science side, I'm still mulling over the most accessible way to describe quantum Hall physics, and there are plenty of other "primer" topics that I should really write at some point.
If I hadn't been so busy, I would've written a post during the baseball World Series about how the hair of Fox Sports broadcaster Joe Buck is a study in anisotropic light scattering. Viewed straight on, it's a perfectly normal color, but when lit and viewed from an angle, it's a weirdly iridescent yellow - I'm thinking that this really might have interesting physics behind it, in the form of some accidental structural color.
Thursday, October 17, 2019
More items of interest
This continues to be a very very busy time, but here are a few interesting things to read:
- "Voodoo fusion" - an article from the APS Forum on Physics and Society that pretty much excoriates all of the fusion-related startup efforts, basically saying that they're about as legitimate as Theranos. Definitely an interesting read.
- https://arxiv.org/abs/1910.06389 - This is Kenneth Libbrecht's tour de force monograph on snow crystals. Talk about an example of emergence: From the modest water molecule comes, when many of them get together, the remarkable, intricate structure of snowflakes, with highly complex six-fold rotational symmetry.
- Somehow this tenure announcement just showed up in my newsfeed. It's very funny, and the titles and abstracts of his talks are in a similar vein. Perhaps I need to start writing up my physics talks this way.
- Beware of unintended consequences of ranking metrics.
- https://arxiv.org/abs/1910.05813 - insert snarky comment here about how this is a mathematical model for hiring/awards/funding/publication.
Monday, October 07, 2019
"Phase of matter" is a pretty amazing emergent concept
As we await the announcement of this year's physics Nobel tomorrow morning (last chance for predictions in the comments), a brief note:
I think it's worth taking a moment to appreciate just how amazing it is that matter has distinct thermodynamic phases or states.
We teach elementary school kids that there are solids, liquids, and gases, and those are easy to identify because they have manifestly different properties. Once we know more about microscopic details that are hard to see with unaided senses, we realize that there are many more macroscopic states - different structural arrangements of solids; liquid crystals; magnetic states; charge ordered states; etc.
When we take statistical physics, we learn descriptively what happens. When you get a large number of particles (say atoms for now) together, the macroscopic state that they take on in thermal equilibrium is the one that corresponds to the largest number of microscopic arrangements of the constituents under the given conditions. So, the air in my office is a gas because, at 298 K and 101 kPa, there are many many more microscopic arrangements of the molecules with that temperature and pressure that look like a gas than there are microscopic arrangements of the molecules that correspond to a puddle of N2/O2 mixture on the floor.
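In symbols, that's Boltzmann's \(S = k_{\mathrm{B}} \ln \Omega\): the entropy counts the number \(\Omega\) of microstates consistent with the macroscopic constraints, and the equilibrium phase is the one that maximizes it (equivalently, at fixed temperature and pressure, the one that minimizes the Gibbs free energy \(G = U - TS + PV\)).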
Still, there is something special going on. It's not obvious that there should have to be distinct phases at all, and such a small number of them. There is real universality about solids - their rigidity, resistance to shear, high packing density of atoms - independent of details. Likewise, liquids with their flow under shear, comparative incompressibility, and general lack of spatial structure. Yes, there are detailed differences, but any kid can recognize that water, oil, and lava all have some shared "liquidity". Why does matter end up in those configurations, and not end up being a homogeneous mush over huge ranges of pressure and temperature? This is called emergence, because while it's technically true that the standard model of particle physics undergirds all of this, it is not obvious in the slightest how to deduce the properties of snowflakes, raindrops, or water vapor from there. Like much of condensed matter physics, this stuff is remarkable (when you think about it), but so ubiquitous that it slides past everyone's notice pretty much all of the time.
Saturday, September 28, 2019
Items of interest
As I struggle with being swamped this semester, some news items:
- Scott Aaronson has a great summary/discussion about the forthcoming google/John Martinis result about quantum supremacy. The super short version: There is a problem called "random circuit sampling", where a sequence of quantum gate operations is applied to some number of quantum bits, and one would like to know the probability distribution of the outcomes. Simulating this classically becomes very very hard as the number of qubits grows. The google team apparently just implemented the actual problem directly using their 53-qubit machine, and could infer the probability distribution by directly sampling a large number of outcomes. They could get the answer this way in 3 min 20 sec for a number of qubits where it would take the best classical supercomputer 10000 years to simulate. Very impressive and certainly a milestone (though the paper is not yet published or officially released). This has led to some fascinating semantic discussions with colleagues of mine about what we mean by computation. For example, this particular situation feels a bit to me like comparing the numerical solution to a complicated differential equation (e.g., some Runge-Kutta method) on a classical computer with an analog computer using op-amps and R/L/C components. Is the quantum computer here really solving a computational problem, or is it being used as an experimental platform to simulate a quantum system? And what is the difference, and does it matter? Either way, a remarkable achievement. (I'm also a bit jealous that Scott routinely has 100+ comment conversations on his blog.) For a brute-force sense of why the classical side is so hard, see the toy sketch after this list.
- Speaking of computational solutions to complex problems.... Many people have heard about chaotic systems and why numerical solutions to differential equations can be fraught with peril due to, e.g., rounding errors. However, I've seen two papers this week that show just how bad this can be. This very good news release pointed me to this paper, which shows that even using 64 bit precision doesn't save you from issues in some systems. Also this blog post points to this paper, which shows that n-body gravitational simulations have all sorts of problems along these lines. Yeow. (A two-minute numerical demonstration of the underlying issue also appears after this list.)
- SpaceX has assembled their mammoth sub-orbital prototype down in Boca Chica. This is going to be used for test flights up to 22 km altitude, and landings. I swear, it looks like something out of Tintin or The Conquest of Space. Awesome.
- Time to start thinking about Nobel speculation. Anyone?
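Since the random circuit sampling discussion above is abstract, here is a brute-force sketch in Python of why the classical side is hard: the state vector of \(n\) qubits has \(2^{n}\) complex amplitudes, so memory and time blow up exponentially. For brevity this toy applies dense Haar-random unitaries rather than building the circuit gate by gate - an illustrative simplification, not what the google experiment does:

```python
import numpy as np

# Brute-force "random circuit sampling" on n qubits. The full state vector
# has 2**n complex amplitudes, which is why this is hopeless at n = 53.
rng = np.random.default_rng(0)

def random_unitary(dim):
    # Haar-distributed unitary via QR of a complex Gaussian matrix
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))    # fix column phases for the Haar measure

n = 10                            # easy at 10 qubits; 2**53 amplitudes is not
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                    # start in |00...0>
for _ in range(20):               # a "deep" random evolution
    state = random_unitary(2**n) @ state

probs = np.abs(state) ** 2        # output distribution over bitstrings
probs /= probs.sum()              # guard against floating-point round-off
samples = rng.choice(2**n, size=5, p=probs)
print([format(s, f"0{n}b") for s in samples])
```

Doubling \(n\) squares the state-vector size; at \(n = 53\) you would need petabytes just to store it, which is the crux of the supremacy claim.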
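And for the chaos item: the precision problem takes only a few lines of numpy to see. A toy demonstration with the logistic map at \(r = 4\) - my choice of a maximally chaotic example, not the systems in those papers - shows float32 and float64 orbits from the same seed decorrelating within a few dozen steps, because the positive Lyapunov exponent (here \(\ln 2\) per iteration) exponentially amplifies rounding error:

```python
import numpy as np

# Iterate the chaotic logistic map x -> 4 x (1 - x) at two precisions
# and watch the rounding-error-seeded difference grow exponentially.
def logistic_orbit(x0, n, dtype):
    x = dtype(x0)
    orbit = []
    for _ in range(n):
        x = dtype(4.0) * x * (dtype(1.0) - x)   # all arithmetic in dtype
        orbit.append(float(x))
    return np.array(orbit)

o32 = logistic_orbit(0.2, 60, np.float32)
o64 = logistic_orbit(0.2, 60, np.float64)
for step in (10, 30, 50):
    print(f"step {step}: |x32 - x64| = {abs(o32[step] - o64[step]):.2e}")
# By step ~30 the two "solutions" are order-1 different, i.e., completely
# uncorrelated - and neither is the true orbit of the real-number map.
```

The n-body and 64-bit results in the linked papers are sobering precisely because this mechanism doesn't care how good your integrator is.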
Wednesday, September 18, 2019
DOE Experimental Condensed Matter PI Meeting, day 3 and wrapup
On the closing day of the PI meeting, some further points and wrap-up:
- I had previously missed work that shows that electric field can modulate magnetic exchange in ultrathin iron (overview).
- Ferroelectric layers can modulate transport in spin valves by altering the electronic energetic alignment at interfaces. This can result in some unusual response (e.g., the sign of the magnetoresistance can flip with the sign of the current, implying spin-diode-like properties).
- Artificial spin ices are still cool model systems. With photoelectron emission microscopy (PEEM), it's possible to image ultrathin, single-domain structures to reveal their magnetization noninvasively. This means movies can be made showing thermal fluctuations of the spin ice constituents, revealing the topological character of the magnetic excitations in these systems.
- Ultrathin oxide membranes mm in extent can be grown, detached from their growth substrates, and transferred or stacked. When these membranes are really thin, it becomes difficult to nucleate cracks, allowing the membranes to withstand large strains (several percent!), opening up the study of strain effects on a variety of oxide systems.
- Controlled growth of stacked phthalocyanines containing transition metals can generate nice model systems for studying 1d magnetism, even using conventional (large-area) methods like vibrating sample magnetometry.
- In situ oxide MBE and ARPES, plus either vacuum annealing or ozone annealing, have allowed the investigation of the BSCCO superconducting phase diagram over the whole range of dopings, from severely underdoped to so overdoped that superconductivity is completely suppressed. In the overdoped limit, analyzing the kink found in the band dispersion near the antinode, it seems superconductivity is suppressed at high doping because the coupling (to the mode that causes the kink) goes to zero at large doping.
- It's possible to grow nice films of C60 molecules on Bi2Se3 substrates, and use ARPES to see the complicated multiple valence bands at work in this system. Moreover, by doing measurements as a function of the polarization of the incoming light, the particular molecular orbitals contributing to those bands can be identified.
- Through careful control of conditions during vacuum filtration, it's possible to produce dense, locally crystalline films of aligned carbon nanotubes. These have remarkable optical properties, and with the anisotropy of their electronic structure plus ultraconfined character, it's possible to get exciton polaritons in these into the ultrastrong coupling regime.
Tuesday, September 17, 2019
DOE Experimental Condensed Matter PI Meeting, Day 2
Among the things I heard about today, as I wondered whether newly formed Tropical Storm Imelda would make my trip home a challenge:
- In "B20" magnetic compounds, where the crystal structure is chiral but lacks mirror or inversion symmetry, a phase can form under some circumstances that is a spontaneous lattice of skyrmions. By adding disorder through doping, it is possible to un-pin that lattice.
- Amorphous cousins of those materials still show anomalous Hall effect (AHE), even though the usual interpretation these days of the AHE is as a consequence of Berry phase in momentum space that is deeply connected to having a lattice. It's neat to see that some Berry physics survives even when the lattice does not.
- There is a lot of interest in coupling surface states of topological insulators to ferromagnets, including using spin-orbit torque to switch the magnetization direction of a ferromagnetic insulator.
- You could also try to switch the magnetization of \(\alpha\)-Fe\(_{2}\)O\(_{3}\) using spin-orbit torques, but watch out when you try to cram too much current through a 2 nm thick Pt film.
- The interlayer magnetic exchange in van der Waals magnets continues to be interesting and rich.
- Heck, you could look at several 2D materials with various kinds of reduced symmetry, to see what kinds of spin-orbit torques are possible.
- It's always fun to find a material where there are oscillations in magnetization with applied field even though the bulk is an insulator.
- Two-terminal devices made from MoTe2 (a Weyl semimetal that superconducts) show clear magnetoresistance signatures, indicating supercurrents carried along the material edges.
- By side-gating graphene structures hooked up to superconductors, you can also make a superconducting quantum interference device using edge states of the fractional quantum Hall effect.
- In similar spirit, coupling a 2D topological insulator (1T'-WTe2) to a superconductor (NbSe2) means it's possible to use scanning tunneling spectroscopy to see induced superconducting properties in the edge state.
- Just in time, another possible p-wave superconductor.
- In a special stack sandwiching a TI between two magnetic TI layers, it's possible to gate the system to break inversion symmetry, and thus tune between quantum anomalous Hall and "topological Hall" response.
- Via a typo on a slide, I learned of the existence of the Ohion, apparently the smallest quantized amount of Ohio.
Monday, September 16, 2019
DOE Experimental Condensed Matter PI Meeting, Day 1
The first day of the DOE ECMP PI meeting was very full, including two poster sessions. Here are a few fun items:
- Transition metal dichalcogenides (TMDs) can have very strongly bound excitons, and if two different TMDs are stacked, you can have interlayer excitons, where the electron and hole reside in different TMD layers, perhaps separated by a layer or two of insulating hBN. Those interlayer excitons can have long lifetimes, undergo Bose condensation, and have interesting optical properties. See here, for example.
- Heterojunctions of different TMDs can produce moire lattices even with zero relative twist, and the moire coupling between the layers can strongly affect the optical properties via the excitons.
- Propagating plasmons in graphene can have surprisingly high quality factors (~ 750), and combined with their strong confinement have interesting potential.
- You can add AlAs quantum wells to the list of materials systems in which it is possible to have very clean electronic transport and see fractional quantum Hall physics, which is a bit different because of the valley degeneracy in the AlAs conduction band (that can be tuned by strain).
- And you can toss WSe2 in there, too - after building on this and improving material quality even further.
- There continues to be progress in trying to interface quantum Hall edge states with superconductors, with the end goal of possible topological quantum computing. A key question is understanding how the edge states undergo Andreev processes at superconducting contacts.
- Application of pressure can take paired quantum Hall states (like those at \(\nu = 5/2, 7/2\)) and turn them into unpaired nematic states, a kind of quantum phase transition.
- With clever (and rather involved) designs, it is possible to make high quality interferometers for fractional quantum Hall edge states, setting the stage for detailed studies of exotic anyons.
Sunday, September 15, 2019
DOE Experimental Condensed Matter PI meeting, 2019
The US Department of Energy's Basic Energy Sciences component of the Office of Science funds a lot of basic scientific research, and for the last decade or so has had a tradition of regular gatherings of its funded principal investigators for a number of programs. Every two years there has been a PI meeting for the Experimental Condensed Matter Physics program, and this year's meeting starts tomorrow.
These meetings are very educational (at least for me) and, because of their modest size, a much better networking setting than large national conferences. In past years I've tried to write up brief highlights of the meetings (for 2017, see a, b, c; for 2015 see a, b, c; for 2013 see a, b). I will try to do this again; the format of the meeting has changed to include more poster sessions, which makes summarizing trickier, but we'll see.
update: Here are my write-ups for day 1, day 2, and day 3.
Tuesday, September 10, 2019
Faculty position at Rice - Astronomy
Faculty position in Astronomy at Rice University
The Department of Physics and Astronomy at Rice University invites applications for a tenure-track faculty position in astronomy in the general field of galactic star formation and planet formation, including exoplanet characterization. We seek an outstanding theoretical, observational, or computational astronomer whose research will complement and extend existing activities in these areas within the department. In addition to developing an independent and vigorous research program, the successful applicant will be expected to teach, on average, one undergraduate or graduate course each semester, and contribute to the service missions of the department and university. The department expects to make the appointment at the assistant professor level. A Ph.D. in astronomy/astrophysics or related field is required.
Applications for this position must be submitted electronically at http://jobs.rice.edu/postings/21236. Applicants will be required to submit the following: (1) cover letter; (2) curriculum vitae; (3) statement of research; (4) teaching statement; (5) PDF copies of up to three publications; and (6) the names, affiliations, and email addresses of three professional references. We will begin reviewing applications December 1, 2019. To receive full consideration, all application materials must be received by January 10, 2020. The appointment is expected to begin in July, 2020.
Rice University is an Equal Opportunity Employer with a commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability, or protected veteran status. We encourage applicants from diverse backgrounds to apply.
Friday, September 06, 2019
Faculty position at Rice - Theoretical Biological Physics
Faculty position in Theoretical Biological Physics at Rice University
As part of the Vision for the Second Century (V2C2), which is focused on investments in research excellence, Rice University seeks faculty members, preferably at the assistant professor level, starting as early as July 1, 2020, in all areas of theoretical biological physics. Successful candidates will lead dynamic, innovative, and independent research programs supported by external funding, and will excel in teaching at the graduate and undergraduate levels, while embracing Rice’s culture of excellence and diversity.
This search will consider applicants from all science and engineering disciplines. Ideal candidates will pursue research with strong intellectual overlap with physics, chemistry, biosciences, bioengineering, chemical and biomolecular engineering, or other related disciplines. Applicants pursuing all styles of theory and computation integrating the physical and life sciences are encouraged to apply.
For full details and to apply, please visit https://jobs.rice.edu/postings/21170. Applicants should please submit the following materials: (1) cover letter, including the names and contact information for three references, (2) curriculum vitae, (3) research statement, and (4) statement of teaching philosophy. Application review will commence no later than October 15, 2019 and continue until the position is filled. Candidates must have a PhD or equivalent degree and outstanding potential in research and teaching. We particularly encourage applications from women and members of historically underrepresented groups who bring diverse cultural experiences and who are especially qualified to mentor and advise members of our diverse student population.
Rice University, located in Houston, Texas, is an Equal Opportunity Employer with commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability, or protected veteran status.
Big questions about condensed matter (part 3)
More questions asked by Ross McKenzie's son about the culture/history of condensed matter physics:
3. What are the most interesting historical anecdotes? What are the most significant historical events? Who were the major players?
The first couple of these are hard to address in anything resembling an unbiased way. For events that happened before I was in the field, I have to rely on stories I've read or things I've heard. Certainly the discovery of superconductivity by Onnes is a good example - where they thought that they had an experimental problem with their wiring, until they realized that their voltmeter reading dropping to zero (trying to measure the voltage drop across some mercury in the presence of a known current) happened at basically the same temperature every time. (Pretty good for 1911!) Major experimental results very often have fun story components. From my thesis adviser, I'd heard lots of stories about the discovery of superfluidity in 3He, including plugging a leaky vacuum flange using borax; thinking up the experiment while recovering from a broken leg from a skiing accident; and the wee-hour phone call to the adviser. He also told me a story about this paper, where he and Gerry Dolan came up with a very clever way to see tiny deviations away from a mostly linear current-voltage curve, an observation connected with weak localization that paved the way for a lot of mesoscopic physics work.
There are fun theory stories, too. Bob Laughlin figuring out the theory of the fractional quantum Hall effect while stuck in a trailer at Livermore because his clearance paperwork hadn't come through yet.
Other stories I've read in books. Strong recommendations for Crystal Fire; the less popular/more scholarly Out of the Crystal Maze. Ohl's discovery of the photovoltaic effect in silicon. The story about how Bell Labs and IBM researchers may or may not have traded hints poolside in Las Vegas about how to get field-effect transistors really working. Shockley's inability to manage people eventually resulting in Silicon Valley.
These aren't necessarily the best anecdotes, but they have elements of interest. I'm sure there are many out there who could tell fun stories.
As for the major players, it seems that everyone mentioned in Prof. McKenzie's post and in the comments is a theorist. That seems limiting. It's fair to talk about theorists if you're concentrating on theoretical developments, but experimentalists have often opened up whole areas. Onnes liquefied helium and discovered superconductivity. von Laue discovered x-ray diffraction. Brattain (with Bardeen) made the first transistor. Nick Holonyak was an inventor of the light emitting diode, which has been revolutionary. Binnig and Rohrer invented the STM. Bednorz and Muller discovered the cuprates.
Wednesday, September 04, 2019
Big questions about condensed matter (part 2)
Continuing, another question asked by Ross McKenzie's son:
2. Scientific knowledge changes with time. Sometimes long-accepted "facts" and "theories" become overturned. What ideas and results are you presenting that you are almost absolutely certain of? What might be overturned?
I think this question is framed interestingly. Physics in general, and condensed matter in particular, is a discipline where overturning long-accepted ideas has often meant a clearer appreciation and articulation of the limits of validity of models, rather than a wholesale revision of understanding.
For example, the Mermin-Wagner theorem is often mentioned as showing that one cannot have true two-dimensional crystals (this would be a breaking of continuous translational symmetry). However, the existence of graphene and other atomically thin systems like transition metal dichalcogenides, and the existence of magnetic order in some of those materials, are experimentally demonstrated. That doesn't mean that Mermin-Wagner is mathematically incorrect. It means that one must be very careful in defining what is meant by "truly two-dimensional".
There are many things in condensed matter that are as "absolutely certain" as anything gets in science. The wave nature of x-rays, electrons, and neutrons plus the spatial periodicity of matter in crystals leads to clear diffraction patterns. That same spatial periodicity strongly influences the electronic properties of crystals (Bloch's theorem, labeling of states by some wavevector-like parameter \(\mathbf{k}\), some energy dependence of those states \(E(\mathbf{k})\)). More broadly, there are phases of matter that can be classified by symmetries and topology, with distinct macroscopic properties. The macroscopic phases that are seen in equilibrium are those that correspond to the largest number of microscopic configurations subject to any overall constraints (that's the statistical physics basis for thermodynamics). Amazingly, knowing the ground state electronic density of a system everywhere means it's possible in principle to calculate just about everything about the ground state (the Hohenberg-Kohn theorem underlying density functional theory).
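To make that labeling concrete, Bloch's theorem states that in a potential with lattice periodicity the energy eigenstates can be written as
\[ \psi_{\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\, u_{\mathbf{k}}(\mathbf{r}), \qquad u_{\mathbf{k}}(\mathbf{r}+\mathbf{R}) = u_{\mathbf{k}}(\mathbf{r}) \]
for every lattice vector \(\mathbf{R}\), which is why \(\mathbf{k}\) and the band energies \(E(\mathbf{k})\) are sensible labels in the first place.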
Leaving those aside, asking what might be overturned is a bit like asking where we might find either surprises or mistakes in the literature. Sometimes seemingly established wisdom does get upset. One recent example: for a couple of decades, it's been thought that Sr2RuO4 is likely a spin-triplet superconductor, one where the electron pairs are p-wave paired (have net orbital angular momentum), making it an electronic analog of the A phase of superfluid 3He. Recent results suggest that this is not correct, and that the early evidence for this is not seen in new measurements. There are probably more things like this out there, but it's hard to speculate. Bear in mind, though, that science is supposed to work like this. In the long run, the truth will out.
Monday, September 02, 2019
Big questions about condensed matter physics (pt 1)
Ross McKenzie, blogger of Condensed Concepts, is working on a forthcoming book, a Very Short Introduction about condensed matter physics. After reading some sample chapters, his son posed some questions about the field, and Prof. McKenzie put forward some of his preliminary answers here. These are fun, thought-provoking topics, and I regret being so busy writing other things that I haven't had a chance to think about these as much as I'd like. Still, here are some thoughts.
1. What do you think is the coolest or most exciting thing that CMP has discovered?
Tricky. There are some things that are very intellectually profound and cool to the initiated that would not strike an average person as cool or exciting. The fractional quantum Hall effect was completely surprising. I heard a story about Dan Tsui looking at the data coming in on a strip chart recorder (Hall voltage as a function of time as the magnetic field was swept), roughly estimating the magnetic field from the graph with the span of his fingers, realizing that they were seeing a Hall plateau that seemed to imply a charge of 1/3 e, and saying, jokingly, "Quarks!" In fact, there really are fractionally charged excitations in that system. That's very cool, but not something any non-expert would appreciate.
Prof. McKenzie votes for superconductivity, and that's definitely up there. In some ways, superfluidity is even wilder. Kamerlingh Onnes, the first to liquefy helium and cool it below the superfluid transition, somehow missed discovering superfluidity, which had to wait for Kapitsa and, independently, Allen and Misener in 1937. Still, it is very weird - fluid flowing without viscosity through tiny pores, and climbing walls (!). While it's less useful than superconductivity, you can actually see its weird properties readily with the naked eye.
Wednesday, August 28, 2019
ACS symposium - Chemistry in Real Space and Time
On Sunday I was fortunate enough to be able to speak at the first day of a week-long symposium at the American Chemical Society national meeting in San Diego, titled "Chemistry in Real Space and Time". This symposium was organized by Ara Apkarian, Eric Potma, and Venkat Bommisetty, all from the UC Irvine NSF-supported center for Chemistry at the Space-Time Limit. During its span as a center, CaSTL has been at the forefront of technique development, including integrating ultrafast optics-based time-resolved measurements with the atomic-scale precision of scanning tunneling microscopy. The center is sun-setting, and the symposium is a bit of a valedictory celebration.
The start of our semester made it necessary for me to return to Houston, but a couple of highlights from the first day:
- Pri Narang spoke about her group's efforts to do serious combined quantum electrodynamics calculations and microscopic nanostructure modeling. If one wants to try to understand strong coupling problems between matter and light in nanostructures nonperturbatively, this is the direction things need to go. An example.
- Erik Nibbering talked about ultrafast proton transport - something I'd never thought about that depends critically on the positioning and alignment, say, of water molecules, so that hydrogens can swap their oxygen bonding partners. His group uses a combination of photoacids (for optical control over when protons are released) and time-resolved infrared spectroscopy to follow what's going on.
- Ji-Xin Cheng showed some impressive results of applying plasmon-enhanced stimulated Raman spectroscopy, basically tagging living systems with plasmonically active nanoparticles and performing pump-probe stimulated Raman to follow biological processes in living tissue. Very impressive hyperspectral imaging.
- My colleague Stephan Link showed some nice, clean results (related to these) in understanding chirality effects (trochoidal dichroism) in scattering of light by curved nanostructures, where the longitudinal component of the electric field (only happens at surfaces) is critically important.
- Frank Hegmann and Tyler Cocker spoke about various aspects of THz-based STM. I can't really do this justice in a brief blurb, but check out this paper and this paper for a flavor. Similarly, Dominik Peller spoke about this paper and this paper, and Hidemi Shigekawa showed what you can do when you can achieve phase control over the THz light pulse. Combining STM with femtosecond time resolution lets you see some impressive things.
- While not STM but nonetheless very cool, Yichao Zhang showed movies from the Flannigan group taken by time-resolved transmission electron microscopy, so that you can actually see the propagation of sound waves.
Wish I could've stayed to see more - I felt like I was learning a lot.
Wednesday, August 21, 2019
Pairs in the cuprates at higher energies than superconductivity
I've been asked by student readers over the years about how scientists come up with research ideas. Sometimes you make an unanticipated observation or discovery, and that can launch a research direction that proves fruitful. One example of that is the work our group has done on photothermoelectric effects in plasmonic nanostructures - we started trying to understand laser-induced heating in some of our plasmonic devices, found that the thermoelectric response of comparatively simple metal nanostructures was surprisingly complicated, and that's led to some surprising (to us) insights and multiple papers including a couple in preparation.
In contrast, sometimes you have a specific physics experiment in mind for a long time, aimed at a long-standing problem or question, and getting there takes a while. That's the case with our recent publication in Nature.
I've written a bit about high temperature superconductivity over the years (here, here, here, here). For non-experts, it's hard to convey the long historical arc of the problem of high temperature superconductivity in the copper oxides.
Superconductivity was first discovered in 1911 in elemental mercury, after the liquefaction of helium made it possible to reach very low temperatures. Over the years, many more superconductors were discovered, metallic elements and compounds. Superconductivity is a remarkable state of matter and it took decades of study and contributions by many brilliant people before Bardeen, Cooper, and Schrieffer produced the BCS theory, which does a good job of explaining superconductivity in many systems. Briefly and overly simplified, the idea is that the ordinary metallic state (a Fermi liquid) is often not stable. In ordinary BCS, electrons interact with phonons, the quantized vibrations of the lattice - imagine an electron zipping along, and leaving behind in its wake a lattice vibration that creates a slight excess of positive ionic charge, so that a second electron feels some effective attraction to the first one. Below some critical temperature \(T_{c}\), electrons of opposite spin and momenta pair up. As they pair up, the paired electrons essentially condense into a single collective quantum state. There is some energy gap \(\Delta\) and a phase angle \(\phi\) that together make up the "order parameter" that describes the superconducting state. The gap is the energy cost to rip apart a pair; it's the existence of this gap, and the resulting suppression of scattering of individual carriers, that leads to zero electrical resistance. The collective response of the condensed state leads to the expulsion of magnetic flux from the material (Meissner effect) and other remarkable properties of superconductors. In a clean, homogeneous traditional superconductor, pairing of carriers and condensation into the superconducting state are basically coincident.
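(For the record, the standard weak-coupling BCS expressions make these scales quantitative:
\[ k_{\mathrm{B}}T_{c} \approx 1.13\, \hbar\omega_{\mathrm{D}}\, e^{-1/(N(0)V_{0})}, \qquad \Delta(0) \approx 1.764\, k_{\mathrm{B}}T_{c}, \]
where \(\omega_{\mathrm{D}}\) is the Debye frequency, \(N(0)\) is the density of states at the Fermi level, and \(V_{0}\) is the effective electron-phonon attraction - not to be confused with the bias voltage below.)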
In 1986, Bednorz and Muller discovered a new family of materials, the copper oxide superconductors. These materials are ceramics rather than traditional metals, and they show superconductivity often at much higher temperatures than what had been seen before. The excitement of the discovery is hard to overstate, because it was a surprise and because the prospect of room temperature superconductivity loomed large. Practically overnight, "high-Tc" became the hottest problem in condensed matter physics, with many competing teams jumping into action on the experimental side, and many theorists offering competing possible mechanisms. Competition was fierce, and emotions ran high. There are stories about authors deliberately mis-stating chemical formulas in submitted manuscripts and then correcting at the proof stage to avoid being scooped by referees. The level of passion involved has been substantial. Compared to the cozy, friendly confines of the ultralow temperature physics community of my grad days, the high Tc world did not have a reputation for being warm and inviting.
As I'd mentioned in the posts linked above, the cuprates are complicated. They're based on chemically doping charge into or out of parent materials that are Mott insulators, in which electron-electron interactions are very important. The cuprates have a very rich phase diagram with a lot going on as a function of temperature and doping. Since the earliest days, one of the big mysteries in these materials has been the pseudogap (and here), and also from the earliest days, it has been suggested (by people like Anderson) that there may be pairs of charge carriers even in the normal state - so-called "preformed pairs". That is, perhaps pairing and global superconductivity have different associated energy and temperature scales, with pair-like correlations being more robust than the superconducting state. An analogy: superconductivity requires partners to pair up and for the pairs to dance in synch with each other. In conventional superconductors, the dancing starts as soon as the dancers pair up, while in the cuprates perhaps there are pairs, but they don't dance in synch.
Moreover, the normal state of the cuprates is the mysterious "strange metal". Some argue that it's not even clear that there are well-defined quasiparticles in the strange metal - pushing the analogy too far, perhaps it doesn't make sense to even think about individual dancers at all, and instead the dance floor is more like a mosh pit, a strongly interacting blob.
I've been thinking for a long while about how one might test for this. One natural approach is to look at shot noise (see here). When charge comes in discrete amounts, this can lead to fluctuations in the current. Imagine rain falling on your rooftop. There is some average arrival rate of water, but the fluctuations about that average rate are larger if the rain comes down in big droplets than if it falls as a fine mist. Mathematically, when charges of magnitude \(e\) arrive at some average rate via a Poisson process (the present charge doesn't know when the last one came or when the next one is coming; there is just some average rate), the mean square current fluctuations per unit bandwidth are flat in frequency and are given by \(S_{I} = 2 e I\), where \(I\) is the average current. For electrons tunneling from one metal to another, accounting for finite temperature, the expectation is \(S_{I} = 2 e I \coth (eV/(2 k_{\mathrm{B}}T) )\), which reduces to \(2 e I\) in the limit \(eV \gg k_{\mathrm{B}}T\), and reduces (assuming an Ohmic system) to the Johnson-Nyquist noise \(4 k_{\mathrm{B}}T/R\) in the \(V \rightarrow 0\) limit, where \(R = V/I\).
TLDR: Shot noise is a way to infer the magnitude of the effective charge of the carriers.
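To make the crossover concrete, here is a minimal numerical sketch - my own illustration of the textbook tunneling expression above, with made-up junction parameters, not anything from the paper - showing how doubling the effective carrier charge doubles the high-bias noise:

import numpy as np

e = 1.602176634e-19   # elementary charge, C
kB = 1.380649e-23     # Boltzmann constant, J/K

def tunnel_noise(V, R, T, q=e):
    # S_I = 2 q I coth(qV / (2 kB T)) for an Ohmic junction with I = V/R;
    # q is the effective carrier charge (e for single electrons, 2e for pairs).
    I = V / R
    x = q * V / (2 * kB * T)
    with np.errstate(divide="ignore", invalid="ignore"):
        s = 2 * q * I / np.tanh(x)
    # As V -> 0, S_I smoothly approaches the Johnson-Nyquist value 4 kB T / R.
    return np.where(np.abs(x) < 1e-12, 4 * kB * T / R, s)

R, T = 1.0e3, 4.2                # hypothetical 1 kOhm junction at 4.2 K
V = np.linspace(0, 5e-3, 6)      # bias from 0 to 5 mV
for q, label in [(e, "q = e"), (2 * e, "q = 2e")]:
    print(label, tunnel_noise(V, R, T, q))

At \(eV \gg k_{\mathrm{B}}T\), the \(q = 2e\) curve is simply twice the \(q = e\) curve; excess noise of that kind is the signature of charge moving two electrons at a time.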
In our paper, we use tunnel junctions made from La2-xSrxCuO4 (LSCO), an archetypal cuprate superconductor (superconducting transition around 39 K for x near 0.15), with the tunneling barrier between the LSCO layers being 2 nm of the undoped, Mott-insulating parent compound, La2CuO4. We could only do these measurements because of the fantastic material quality, the result of many years of effort by our collaborators. We looked at shot noise in the tunneling from LSCO through LCO and into LSCO, over a broad temperature and voltage range.
The main result we found was that the noise in the tunneling current exceeded what you'd expect for just individual charges, both at temperatures well above the superconducting transition, and at bias voltages (energy scales) large compared to the superconducting gap energy scale. This strongly suggests that some of the tunneling current involves the transport of two electrons at a time, rather than only individual charges. (I'm trying to be very careful about wording this, because there are different processes whereby charges could move two at a time.) While there have been experimental hints of pairing above Tc for a while, this result really seems to show that pairing happens at a higher energy scale than superconductivity. Understanding how that relates to other observations people have made about the pseudogap and about other kinds of ordered states will be fun. This work has been a great educational experience for me, and hopefully it opens the way to a lot of further progress, by us and others.
Friday, August 16, 2019
"Seeing" chemistry - another remarkable result
I have written before (here, here) about the IBM Zurich group that has used atomic force microscopy in ultrahigh vacuum to image molecules on surfaces with amazing resolution. They've done it again. Starting with an elaborate precursor molecule, the group has been able to use voltage pulses to strip off side groups, so that in the end they leave behind a ring of eighteen carbon atoms, each bound to its neighbor on either side. The big question was whether it would be better to think of this molecule as having a double bond between each pair of carbons, or whether it would be energetically favorable to break that symmetry and have the bonds alternate between triple and single. It turns out to be the latter: the final image shows a nonagon with nine-fold rotational symmetry. Here is a video where the scientists describe the work themselves (the video is non-embeddable for some reason). Great stuff.
Sunday, August 11, 2019
APS Division of Condensed Matter Physics invited symposium nominations
While I've rotated out of my "member-at-large" spot on the APS DCMP executive committee, I still want to pass this on. Having helped evaluate invited symposium proposals for the last three years, I can tell you that the March Meeting benefits greatly when there is a rich palette of nominated sessions to choose from.
---
The DCMP invited speaker program at March Meeting 2020 is dependent on the invited sessions that are nominated by DCMP members. All invited speakers that we host must be nominated in advance by a DCMP member.
The deadline to submit a nomination is coming up soon: August 23, 2019.
Please take a moment to submit an invited session nomination.
Notes regarding nominations:
- All nominations must be submitted through ScholarOne by August 23, 2019.
- In ScholarOne, an invited session should be submitted as an "Invited Symposium Nomination".
- An invited session consists of 5 speakers. You may include up to 2 alternate speakers in your nomination, in the event that one of the original 5 does not work out.
- While no invited speakers will be guaranteed placement in the program until after all nominations have been reviewed, please get a tentative confirmation of interest from your nominated speakers. There will be a place in the nomination to indicate this.
- A person cannot give technical invited talks in two consecutive years. A list of people that gave technical invited talks in 2019, and are therefore ineligible for 2020, can be found on this page.
- Nominations of women, members of underrepresented minority groups, and scientists from outside the United States are especially encouraged.
Be sure to select a DCMP Category in your nomination. DCMP categories are:
07.0 TOPOLOGICAL MATERIALS (DCMP)
09.0 SUPERCONDUCTIVITY (DCMP)
11.0 STRONGLY CORRELATED SYSTEMS, INCLUDING QUANTUM FLUIDS AND SOLIDS (DCMP)
12.0 COMPLEX STRUCTURED MATERIALS, INCLUDING GRAPHENE (DCMP)
13.0 SUPERLATTICES, NANOSTRUCTURES, AND OTHER ARTIFICIALLY STRUCTURED MATERIALS (DCMP)
14.0 SURFACES, INTERFACES, AND THIN FILMS (DCMP)
15.0 METALS AND METALLIC ALLOYS (DCMP)
Thank you for your prompt attention to this matter.
Daniel Arovas, DCMP Chair, and Eva Andrei, DCMP Chair-Elect
Wednesday, July 31, 2019
More brief items
Writing writing writing. In the meantime:
- This is a very solid video about physics careers - especially at around the 10 minute mark.
- J. Robert Schrieffer has passed away. Truly a scientific pioneer.
- I came upon this article yesterday in Nat Rev Physics, and it seems like an exceptionally clear review of strong coupling physics between light and matter, beyond the usual Purcell factor cavity QED effects.
- SpaceX really is proceeding with its testing at a remarkable pace.
- The scanning Josephson microscope is a very impressive piece of kit!
Monday, July 22, 2019
Ferromagnetic droplets
Ferromagnets are solids in nearly every instance I can recall (though I suppose it's not impossible to imagine an itinerant Stoner magnet that's a liquid below its Curie temperature, and here is one apparent example). There's a neat paper in Science this week, reporting liquid droplets that act like ferromagnets and can be reshaped.
The physics at work here is actually a bit more interesting than just a single homogeneous material that happens to be liquid below its magnetic ordering temperature. The liquid in this case is a suspension of magnetite nanoparticles. Each nanoparticle is magnetic, as the microscopic ordering temperature for Fe3O4 is about 858 K. However, the individual particles are so small (22 nm in diameter) that they are superparamagnetic at room temperature, meaning that thermal fluctuations are energetic enough to reorient how the little north/south poles of the single-domain particles are pointing. Now, if the interface at the surface of the suspension droplet confines the nanoparticles sufficiently, they jam together with such small separations that their magnetic interactions are enough to lock their magnetizations, killing the superparamagnetism and leading to a bulk magnetic response from the aggregate. Pretty cool! (Extra-long-time readers of this blog will note that this hearkens waaaay back to this post.)
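As a rough numerical sketch of that statement - the anisotropy constant, attempt time, and sizes below are assumed, order-of-magnitude values of my own, not numbers from the paper - the Néel-Arrhenius relaxation time \(\tau = \tau_{0} \exp(KV/(k_{\mathrm{B}}T))\) shows how sharply particle size matters:

import numpy as np

kB = 1.380649e-23    # Boltzmann constant, J/K
tau0 = 1.0e-9        # assumed attempt time, s
K = 1.1e4            # assumed effective anisotropy for magnetite, J/m^3
T = 300.0            # room temperature, K

def neel_time(d_nm):
    # Neel-Arrhenius relaxation time for a single-domain sphere of diameter d_nm.
    V = (np.pi / 6.0) * (d_nm * 1e-9) ** 3   # particle volume, m^3
    return tau0 * np.exp(K * V / (kB * T))

for d in (10, 22, 40):
    print(f"d = {d} nm: tau ~ {neel_time(d):.1e} s")

With these numbers, a 22 nm particle reorients on millisecond timescales at room temperature (superparamagnetic on any lab timescale), while a 40 nm particle would be blocked essentially forever - which is why the jamming-induced interactions, rather than particle size alone, are what lock the moments here.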
Saturday, July 13, 2019
Brief items
I just returned from some travel, and I have quite a bit of writing I need to do, but here are a few items of interest:
- No matter how many times I see them (here I discussed a result from ten years ago), I'm still impressed by images taken of molecular orbitals, as in the work by IBM Zurich that has now appeared in Science. Here is the relevant video.
- Speaking of good videos, here is a talk by Tadashi Tokieda, presently at Stanford, titled "Science from a Sheet of Paper". Really nicely done, and it shows a great example of how surprising general behavior can emerge from simple building blocks.
- It's a couple of years old now, but this is a nice overview of the experimental state of the problem of high temperature superconductivity, particularly in the cuprates.
- Along those lines, here is a really nice article from SciAm by Greg Boebinger about achieving the promise of those materials.
- Arguments back and forth continue about the metallization of hydrogen.
- And Sean Carroll shows how remunerative it can be to be a science adviser for a Hollywood production.
Friday, July 05, 2019
Science and a nation of immigrants
It was very distressing to read this news article in Nature about the treatment of scientists of Chinese background (from the point of view of those at MIT). Science is an international enterprise, and an enormous amount of the success that the US has had in science and technology is due to the contributions of immigrants and first-generation children of immigrants. It would be wrong, tragic, and incredibly self-defeating to take on a posture that sends a message to the international community that they are not welcome to come to the US to study, or that tells immigrants in the US that they are suspect and not trusted.
In any large population, there is always the occasional bad actor - the question is, how does a bureaucracy react to that? One example: Clearly some small percentage of medical researchers in the US have behaved unethically, taking money from medical and pharmaceutical companies in ways that set up conflicts of interest which they have hidden. That's wrong, we should try to prevent it from happening, and those who misbehave should be punished. The bureaucratic response to this has been that basically nearly every faculty member at a research university in the US now has to fill out annual disclosure and conflict of interest forms. The number of people affected by the response dwarfs the number of miscreants by probably a factor of 1000, though in this case the response is only at the level of an inconvenience, so the consequences have not been dire.
Reacting to the bad behavior of a tiny number of people by taking wholesale measures that make an entire population feel threatened, unwelcome, and presumed guilty, is wrong and lazy. The risk of long term negative impacts far beyond the scale of any original bad behavior is very real.
Friday, June 28, 2019
Magic hands, secret sauce, and tricks of the trade
One aspect of experimental physics that I've always found interesting is the funny, specialized expertise that can be very hard to transcribe into a "Methods" section of a paper - the weird little tricks or detailed ways of doing things that can make some processes work readily in one lab that are difficult to translate to others. This can make some aspects of experimental work more like a craft or an art, and can lead to reputations for "magic hands", or the idea that a group has some "secret sauce".
An innocuous, low-level example: My postdoctoral boss had a recipe and routine - e-beam lithography on an old (twenty-plus-year-old) converted scanning electron microscope, plus thermal evaporation of aluminum - that could produce incredibly fine, interdigitated transducers for surface acoustic waves. He just had it down cold, and others using the same kind of equipment would have had a very tough time doing this at that resolution and with that reliability, even with all the steps written down, because it really was a skill.
Another possible example: I was talking today with an atomic physics colleague, and he mentioned that there is a particular isotope that only one or two AMO groups in the world have really been able to use successfully in their ultracold atom setups. The question was, how were they able to get it to work, and work well, when clearly other groups had tried and decided that it was too difficult?
Any favorite examples out there from readers?