A blog about condensed matter and nanoscale physics. Why should high energy and astro folks have all the fun?
Thursday, December 27, 2018
Ask me something.
As we approach the end of another year, I realize two things:
- Being chair has a measurable impact on my blogging frequency - it's dropped off appreciably since summer 2016, though fluctuations are not small.
- It's been almost 2.5 years since I did an "Ask me something" post, so please have at it.
Wednesday, December 19, 2018
Short items
The end of the calendar year has been very busy, leading to a slower pace of posting. Just a few brief items:
- I have written a commentary for Physics Today, which is now online here. The topic isn't surprising for regular readers here. If I'm going to keep talking about this, I need to really settle on the correct angle for writing a popular level book about CMP.
- This article in Quanta about this thought experiment is thought-provoking. I need to chew on it for a while to see if I can wrap my brain around the argument.
- The trapped ion quantum computing approach continually impresses. The big question for me is one that I first heard posed back in 1998 at Stanford by Yoshi Yamamoto: Do these approaches scale without having the number of required optical components grow exponentially in the number of qubits?
- Superconductivity in hydrides under pressure keeps climbing to higher temperatures. While gigapascal pressures are going to be impractical for a long long time to come, progress in this area shows that there does not seem to be any inherent roadblock to having superconductivity as a stable, emergent state at room temperature.
- As written about here during the March Meeting excitement, magic angle graphene superconductivity has been chosen as Physics World's breakthrough of the year.
Tuesday, December 11, 2018
Rice Academy of Fellows, 2019
Just in case....
Rice has a competitive endowed postdoctoral program, the Rice Academy of Fellows. There are five slots for the coming year (application deadline of January 3). It's a very nice program, though like all such things it's challenging to get a slot. If someone is interested in trying this to work with me, I'd be happy to talk - the best approach would be to email me.
Friday, December 07, 2018
Shoucheng Zhang, 1963-2018
Shocking and saddening news this week about the death of Shoucheng Zhang, Stanford condensed matter theorist who had made extremely high impact contributions to multiple topics in the field. He began his research career looking at rather exotic physics; string theory was all the rage, and this was one of his first papers. His first single-author paper, according to scopus, is this Phys Rev Letter looking at the possibility of an exotic (Higgs-related) form of superconductivity on a type of topological defect in spacetime. Like many high energy theorists of the day, he made the transition to condensed matter physics, where his interests in topology and field theory were present throughout his research career. Zhang made important contributions on the fractional quantum Hall effect (and here and here), the problem of high temperature superconductivity in the copper oxides (here), and most recently and famously, the quantum spin Hall effect (here for example). He'd won a ton of major prizes, and was credibly in the running for a share of a future Nobel regarding topological materials and quantum spin Hall physics.
I had the good fortune to take one quarter of "introduction to many-body physics" (basically quantum field theory from the condensed matter perspective) from him at Stanford. His clear lectures, his excellent penmanship at the whiteboard, and his ever-present white cricket sweater are standout memories even after 24 years. He was always pleasant and enthusiastic when I'd see him. In addition to his own scholarly output, Zhang had a huge, lasting impact on the community through mentorship of his students and postdocs. His loss is deeply felt. Depression is a terrible illness, and it can affect anyone - hopefully increased awareness and treatment will make tragic events like this less likely in the future.
Saturday, December 01, 2018
Late Thanksgiving physics: Split peas and sandcastles
Last week, when I was doing some cooking for the US Thanksgiving holiday, I was making a really good vegetarian side dish (seriously, try it), and I saw something that I thought was pretty remarkable, and it turns out that a Nature paper had been written about it.
The recipe involves green split peas, and the first step is to rinse these little dried lozenge-shaped particles (maybe 4 mm in diameter, maybe 2 mm thick) in water to remove any excess dust or starch. So, I put the dried peas in a wire mesh strainer, rinsed them with running water, and dumped them into a saucepan. Unsurprisingly, the wet split peas remained stuck together in a hemispherical shape that exactly mimicked the contours of the strainer. This is a phenomenon familiar to anyone who has ever built a sandcastle - wet particulates adhere together.
The physics behind this adhesion is surface tension. Because water molecules have an attractive interaction with each other, in the absence of any other interactions, liquid water will settle into a shape that minimizes the area of the water-vapor interface. That's why water forms spherical blobs in microgravity. It costs about 72 mJ/m\(^{2}\) to create some area of water-air interface. It turns out that it is comparatively energetically favored to form a water-split pea interface, because of attractive interactions between the polar water molecules and the mostly cellulose split pea surface.
For a sense of scale, creating water-air interface with the area of one split pea (surface area roughly 2.5e-5 m\(^{2}\)) would take about 2 microjoules of energy. The mass of the split pea half I'm considering, assuming a density similar to water, is around 25 mg = 2.5e-5 kg. So, lifting such a split pea by about its own height (~2 mm) requires an energy of \(mgh \sim\) 2.5e-5*9.807*2e-3 = 0.5 microjoules. The fact that this is comparable to (but smaller than) the surface energy of the water-air interface of a wet split pea tells you that you should not be surprised that water coatings can hold wet split peas up against the force of gravity.
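(For those who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python; all the inputs are the same rough numbers quoted above.)

```python
# Rough energy comparison for a wet split pea: surface energy vs. lifting it
# by its own height.  All numbers are order-of-magnitude estimates, as in the text.
import math

gamma = 0.072       # water-air surface tension, J/m^2 (72 mJ/m^2)
g = 9.807           # gravitational acceleration, m/s^2
radius = 2e-3       # split pea radius, m (4 mm diameter)
thickness = 2e-3    # split pea thickness, m

area = 2 * math.pi * radius**2              # the two flat faces of the lozenge, m^2
volume = math.pi * radius**2 * thickness    # m^3
mass = 1000.0 * volume                      # assume the density of water, kg

surface_energy = gamma * area               # energy to make that much water-air interface
lift_energy = mass * g * thickness          # lift the pea by about its own height

print(f"surface energy ~ {surface_energy:.1e} J")  # ~2e-6 J
print(f"lift energy    ~ {lift_energy:.1e} J")     # ~5e-7 J
```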
What I then saw was that even as I started adding the 3.5 cups of water mentioned in the recipe, the hemispherical split pea "sandcastle" stayed together, even when I prodded it with a cooking spoon. This surprised me. A few minutes of internet search confirmed that this effect is surprising enough to merit its own Nature Materials paper, with its own News and Views article. The transition from cohering wet grains to a flowing slurry turns out to happen at really high water fractions. Neat physics, and the richness of a system as simple as grains/beads, water, and air is impressive.
Sunday, November 25, 2018
Fundamental units and condensed matter
As was discussed in many places over the last two weeks, the official definition of the kilogram has now been changed, to a version directly connected to Planck's constant, \(h\). The NIST description of this is very good, and I am unlikely to do better. Through the use of a special type of balance (a Kibble or Watt balance), the mass can be related back to \(h\) via the dissipation of electrical power in the form of \(V^{2}/R\). A point that I haven't seen anyone emphasize in their coverage: Both the volt and the Ohm are standardized in terms of condensed matter phenomena - there is a deep, profound connection between emergent condensed matter effects and our whole fundamental set of units (a link that needs to be updated to include the new definition of kg).
Voltage \(V\) is standardized in terms of the Josephson effect. In a superconductor, electrons pair up and condense into a quantum state that is described by a complex number called the order parameter, with a magnitude and a phase. The magnitude is related to the density of pairs. The phase is related to the coherent response of all the pairs, and only takes on a well-defined value below the superconducting transition. In a junction between superconductors (say a thin tunneling barrier of insulator), a dc voltage difference between the two sides causes the phase to "wind" as a function of time, leading to an ac current with a frequency of \(2eV/h\). Alternately, applying an ac voltage of known frequency \(f\) can generate a dc voltage at integer multiples of \(h f/2e\). The superconducting phase is an emergent quantity, well defined only when the number of pairs is large.
The Ohm \(\Omega\) is standardized in terms of the integer quantum Hall effect. Electrons confined to a relatively clean 2D layer and placed in a large magnetic field show plateaus in the Hall conductance, the ratio of longitudinal current to transverse voltage, at integer multiples of \(e^{2}/h\) (equivalently, Hall resistance plateaus at \(h/(\nu e^{2})\) for integer \(\nu\)). The reason for picking out those particular values is deeply connected to topology, and is independent of the details of the material system. You can see the integer QHE in many systems, one reason why it's good to use as a standard. The existence of the plateaus, and therefore really accurate quantization, in actual measurements of the Hall conductance requires disorder. Precise Hall quantization is likewise an emergent phenomenon.
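As a concrete illustration of how tightly these condensed matter effects tie the electrical units to fundamental constants, here is a minimal sketch computing the Josephson and von Klitzing constants from the (now exact) SI values of \(e\) and \(h\):

```python
# The Josephson constant K_J = 2e/h (Hz per volt) and the von Klitzing
# constant R_K = h/e^2 (ohms), using the exact SI values of e and h.

h = 6.62607015e-34    # Planck's constant, J s
e = 1.602176634e-19   # elementary charge, C

K_J = 2 * e / h       # ~4.8360e14 Hz/V: Josephson frequency per volt
R_K = h / e**2        # ~25812.8 ohm: Hall resistance of the nu = 1 plateau

print(f"K_J = {K_J:.6e} Hz/V")
print(f"R_K = {R_K:.3f} ohm")
```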
Interesting that the fundamental definition of the kilogram is deeply connected to two experimental phenomena that are only quantized to high precision because they emerge in condensed matter.
Tuesday, November 13, 2018
Blog stats weirdness
This blog is hosted on blogger, google's free blogging platform. There are a couple of ways to get statistics about the blog, like rates of visits and where they're from. One approach is to start from the nanoscale views blogger homepage and click "stats", which can tell me an overview of hit rates, traffic sources, etc. The other approach is to go to analytics.google.com and look at the more official information compiled by google's tracking code.
The blogger stats data has always looked weird relative to the analytics information, with "stats" showing far more hits per day - probably tracking every search engine robot that crawls the web, not just real hits. This is a new one, though: On "stats" for referring traffic, number one is google, and number three is Peter Woit's blog. Those both make sense, but in second place there is a site that I didn't recognize, and it appears to be associated with hardcore pornography (!). That site doesn't show up at all on the analytics page, where number one is google, number two is direct linking, and number three is again Woit's blog. Weird. Very likely that this is the result of a script trying to put porn spam in comments on thousands of blogs. Update: As I pointed out on social media to some friends, it's not that this blog is porn - it's just that someone somewhere thinks readers of this blog probably like porn. :-)
Monday, November 12, 2018
Book review: Solid State Insurrection
Apologies for the slow updates. Between administrative responsibilities and trying to get out a couple of important papers, posting has been a bit slower than I would like, and this is probably going to continue for a few weeks.
If you've wondered how condensed matter physics got to where it is, more in terms of the sociology of physics rather than the particular scientific advances themselves, I strongly recommend Solid State Insurrection: How the Science of Substance Made American Physics Matter, by Joseph D. Martin. This book follows the development of condensed matter physics from its beginnings before WWII through to what the author views as the arrival of its modern era, the demise of the Superconducting Super Collider in the early 1990s, an event strongly associated by some with critiques by Phil Anderson.
I got into condensed matter physics in the early 1990s, in the post-"More is Different" era, by which time CMP had strongly taken on its identity as a field dedicated to understanding the states of matter (and their associated structural, electronic, and magnetic orders) that emerge collectively from the interactions of many underlying degrees of freedom. While on some level I'd known some of the history, Prof. Martin's book was eye-opening for me, describing how solid-state physics itself emerged from disparate subfields (metallurgy, in particular).
Martin looks at the battles within the APS and the AIP into the 1940s about whether it's good or bad to have topical groups or divisions; whether it's a good or bad thing that the line between some of solid-state physics and electrical engineering can be blurry; how the societies' publication models could adapt. Some of that reads a bit like the standard bickering that can happen within any professional society, but the undercurrent throughout is interesting, about the sway held in the postwar era by nuclear and later particle physicists.
The story of the founding of the National Magnet Lab (originally at MIT, originally funded by the Air Force before switching to NSF) was new to me. It's an interesting comparison between the struggles to get the NML funded (and how "pure" vs "applied" its mission should be) and the rate at which accelerator and synchrotron and nuclear science facilities were being built. To what extent did the success of the Manhattan Project give the nuclear/particle community carte blanche from government funders to do "pure" science? To what degree did the slant toward applications and away from reductionism reinforce the disdain which some held for solid-state (or should I say squalid state or schmutzphysik)?
Martin also presents the formalization of materials science as a discipline and its relationship to physics, the rise of the antireductionist/emergence view of condensed matter (a rebranding that began in the mid-60s and really took off after Anderson's 1972 paper and a coincident NRC report), and a recap of the fight over the SSC along the lines of condensed matter vs. high energy. (My take: there were many issues behind the SSC's fate. The CM community certainly didn't help, but the nature of government contracting, the state of the economy at the time, and other factors were at least as contributory.)
In summary: Solid State Insurrection is an informative, interesting take on the formation and evolution of condensed matter physics as a discipline. It shows the very human, social aspects of how scientific communities grow, bicker, and change.
Saturday, November 03, 2018
Timekeeping, or why helium can (temporarily) kill your iphone/ipad
On the day when the US switches clocks back to standard time, here is a post about timekeeping and its impact.
Conventional computers need a clock, some source of a periodic voltage that tells the microprocessor when to execute logic operations, shift bits in registers, store information in or retrieve information from memory.
Historically, clocks in computer systems have been based on quartz oscillators or similar devices. Quartz is an example of a piezoelectric, a material that generates a voltage when strained (or, conversely, deforms when subjected to a properly applied voltage). Because quartz is a nice material with a well-defined composition, its elastic properties are highly reproducible. That means that it's possible to carve it into a mechanical resonator (like a tuning fork), and as long as you can control the dimensions well, you will always get very close to the same mechanical resonance frequency. Pattern electrodes on there, making the quartz into a capacitor, and it's possible to set up an electrical circuit that takes the voltage produced when the quartz is resonantly deforming, amplifies that signal, and feeds it back onto the material, so that the quartz crystal resonator will ring at its natural frequency (just like a microphone pointed at a speaker can lead to a ringing). Because quartz's elastic and electrical properties depend only weakly on temperature, this can act as a very stable clock, either for a computer like your desktop machine or tablet or smartphone, or in an electric wristwatch.
In recent years, though, it's become attractive for companies to start replacing quartz clocks with microelectromechanical resonators. While silicon is not piezoelectric, and so can't be used directly as a substitute for quartz, it does have extremely reproducible elastic properties; instead, silicon MEMS resonators are typically driven and sensed electrostatically (capacitively). Unlike piezoelectric resonators, though, MEMS resonators typically have to be packaged so that the actual paddle or cantilever or tuning fork is in vacuum. Gas molecules can damp the resonator, lowering its quality factor and therefore hurting its frequency stability (or possibly damping its motion enough that it just can't function as part of a stable self-resonating circuit).
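To get a rough feel for why gas damping is such a problem, here is a minimal sketch; the resonant frequency and the two quality factors below are made-up illustrative numbers, not specs for any real part:

```python
# Toy comparison of a MEMS resonator in vacuum vs. gas-damped.
# f0 and the Q values are illustrative assumptions, not specs of a real device.
import math

f0 = 5.0e5   # resonant frequency, Hz (assumed)

for label, Q in [("in vacuum", 5.0e4), ("gas-damped", 5.0e2)]:
    linewidth = f0 / Q               # width of the resonance, Hz
    ringdown = Q / (math.pi * f0)    # 1/e amplitude decay time, s
    print(f"{label:>10s}: Q = {Q:.0e}, linewidth ~ {linewidth:.0f} Hz, "
          f"ring-down ~ {1e3 * ringdown:.2f} ms")
```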
The issue that's come up recently (see this neat article) is that too much helium gas in the surrounding air can kill (at least temporarily) iphones and other devices that use these MEMS clocks. In a helium-rich environment, like when filling up superconducting magnets, helium atoms can diffuse through the packaging into the resonator environment. Whoops. Assuming the device isn't permanently damaged (I could imagine feedback circuits doing weird things if the damping is way out of whack), the helium has to diffuse out again to resolve the problem. Neat physics, and something for helium-users to keep in mind.
Thursday, November 01, 2018
Imposter syndrome
If you're reading this, you've probably heard of imposter syndrome before - that feeling that, deep down, you don't really deserve praise or recognition for your supposed achievements, because you feel like you're not as good at this stuff as your colleagues/competitors, who must really know what they're doing. As one of my grad school roommates said as a bunch of us were struggling with homework: "Here we are, students in one of the most prestigious graduate programs in the country. I sure hope someone knows what they're doing."
This feeling can be particularly prevalent in fields where there is great currency in the perception of intellectual standing (like academia, especially in science). My impression is that a large majority of physicists at all levels (faculty, postdocs, grad students, undergrads) experience this to greater or lesser degrees and frequencies. We're trained to think critically, and driven people tend to overthink things. If you're fighting with something (some homework set, or some experiment, or getting some paper out, or writing a proposal), and your perception is that others around you are succeeding while you feel like you're struggling, it's not surprising that self-doubt can creep in.
I'm not posting because I've had a great insight into mitigating these feelings (though here are some tips). I'm posting just to say to readers who feel like that sometimes: you're not alone.
Wednesday, October 24, 2018
Scalable materials for quantum information
There is no question that the explosive spread of electronics and optoelectronic technology in the 20th century has its foundation in the growth and preparation of high quality materials - silicon with purity better than parts per billion, single crystals cut and polished to near-atomic flatness, with exquisite control of impurity concentrations; III-V compound semiconductors for high speed transistors, LEDs, and lasers; even ultrapure SiO2 for millions of km of ultralow loss optical fiber.
Any new electronics-based technology intended to supplant or supplement now-traditional electronic materials at scale is going to need a material platform that can credibly reach similar quality. Many of the 2d materials have a long way to go in that regard. However, there have been recent advances in a couple of specific systems targeted for particular forms of quantum information devices.
arXiv:1810.09350 - Nelz et al., Towards wafer-scale diamond nano- and quantum technologies
It is possible to grow single-crystal diamond films on the 100 mm wafer scale, starting with Si substrates coated with iridium/yttria-stabilized zirconia. There are dislocations and stacking faults, but it's getting there. If the native defect density can be controlled and eliminated to a very fine level, and ion implantation can be used to create well-defined defects (NV centers and the like), that would be a big boost to hopes of wide-spread use and mass fabrication of quantum devices based on these systems.
arXiv:1810.06521 - Sabbagh et al., Wafer-scale silicon for quantum computing
Those who want to use electron spins in Si as quantum bits need to worry about decoherence from the nuclear spins of naturally abundant 29Si. It has now been shown that it is possible to use isotopically enriched silane made from 28Si to grow epitaxial layers of material almost devoid of 29Si, and that MOS devices made from this stuff can be of high quality. It's worth noting: Isotope separation of different Si isotopic variants of silane by centrifuge is easier than trying the same thing with, e.g., uranium hexafluoride to enrich 235U, because the percentage mass difference is considerably higher in the Si case.
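A quick numerical check of that last point (a minimal sketch, with molecular masses rounded to whole atomic mass units):

```python
# Relative mass difference of the molecules being separated by centrifuge:
# silane isotopologues vs. uranium hexafluoride.  Masses rounded to whole u.

m_28SiH4 = 28 + 4 * 1     # 32 u
m_29SiH4 = 29 + 4 * 1     # 33 u
m_235UF6 = 235 + 6 * 19   # 349 u
m_238UF6 = 238 + 6 * 19   # 352 u

silane = (m_29SiH4 - m_28SiH4) / m_28SiH4   # ~3.1%
uf6 = (m_238UF6 - m_235UF6) / m_235UF6      # ~0.86%

print(f"silane: {100 * silane:.1f}% mass difference")
print(f"UF6:    {100 * uf6:.2f}% mass difference")
```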
Sunday, October 14, 2018
Faculty position at Rice - theoretical biological physics
Faculty position in Theoretical Biological Physics at Rice University
As part of the Vision for the Second Century (V2C2), which is focused on investments in research excellence, Rice University seeks faculty members, preferably at the assistant professor level, starting as early as July 1, 2019, in all areas of Theoretical Biological Physics. Successful candidates will lead dynamic, innovative, and independent research programs supported by external funding, and will excel in teaching at the graduate and undergraduate levels, while embracing Rice’s culture of excellence and diversity. This search will consider applicants from all science and engineering disciplines. Ideal candidates will pursue research with strong intellectual overlap with physics, chemistry, biosciences, bioengineering, chemical and biomolecular engineering, or other related disciplines. Applicants pursuing all styles of theory and computation integrating the physical and life sciences are encouraged to apply.
For full details and to apply, please visit https://jobs.rice.edu/postings/17099. Applicants should please submit the following materials: (1) cover letter, (2) curriculum vitae, (3) research statement, (4) statement of teaching philosophy, and (5) the names and contact information for three references. Application review will commence no later than November 30, 2018 and continue until the positions are filled. Candidates must have a PhD or equivalent degree and outstanding potential in research and teaching. We particularly encourage applications from women and members of historically underrepresented groups who bring diverse cultural experiences and who are especially qualified to mentor and advise members of our diverse student population.
Rice University, located in Houston, Texas, is an Equal Opportunity Employer with commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability, or protected veteran status.
Friday, October 12, 2018
Short items
A few interesting things I've found this past week:
- The connection between particle spin and quantum statistics (fermions = half-integer spin, bosons = integer spin) is subtle, as I've mentioned before. This week I happened upon a neat set of slides (pdf) by Jonathan Bain on this topic. He looks at how we should think about why a pretty restrictive result from non-interacting relativistic quantum field theories has such profound, general implications. He has a book on this, too.
- There is a new book about the impact of condensed matter physics on the world and why it's the comparatively unsung branch of the discipline. I have a copy on the way; once I read it I'll post a review.
- It's also worth reading about why mathematics as a discipline is viewed the way it is culturally.
- This is a really well-written article about turbulence, and why it's hard even though it's "just \(\mathbf{F} = m\mathbf{a}\)" for little blobs of fluid.
- Humanoid robots are getting more and more impressive. I would really like to know the power consumption of one of those, though, given that the old ones used to have either big external power cables or on-board diesel engines. The robot apocalypse is less scary if they have to recharge every ten minutes of operating time.
- I always wondered if fidget spinners were good for something.
Sunday, October 07, 2018
A modest proposal: Congressional Science and Technology Office, or equivalent
I was in a meeting at the beginning of the week where the topic of science and technology in policy-making came up. One person in the meeting made an off-hand comment that one role for university practitioners could be to "educate policy-makers". Another person in the meeting, with a lot of experience in public policy, pointed out that from the perspective of policy-makers, the previous statement often comes across as condescending and an immediate turn-off (regardless of whether policy-makers actually have expert knowledge relevant to their decisions).
At the same time, with the seemingly ever-quickening pace of technological change, it sure seems like Congress lacks sources of information and resources for getting legislators (and perhaps more importantly their staffs) up to speed on scientific and technological issues. These include issues of climate, election security, artificial intelligence, robots coming to take our jobs, etc. The same could be said for the Judiciary, from the federal district level all the way up to the Supreme Court. Wouldn't it be a good idea for at least the staffs of the federal judges to have some non-partisan way to get needed help in understanding, e.g., encryption? The National Academies do outstanding work in their studies and reports, but I'm thinking of a non-partisan information-gathering and coaching office specifically to support Congress and perhaps the Judiciary. The Congressional Budget Office serves a somewhat similar role in terms of supporting budgeting and appropriations. The executive branch (nominally) has the Office of Science and Technology Policy. I could be convinced that the Academies could launch something analogous, but it's not clear that this is a reasonable expectation.
Realistically, now is not the best time to bring this up in the US, given the level of political dysfunction and the looming financial challenges facing the government. There used to be a congressional Office of Technology Assessment, but that was shut down ostensibly to save money in 1995. Attempts to restart it such as Bill Foster's this past spring have failed. Still, better to keep pushing for something to play this role, rather than simply being content with the status quo level of technical knowledge of Congress (and federal judges). Complex scientific and technological issues are shaping the world around us, and I have to hope that decision-makers want to know more about these topics.
Sunday, September 30, 2018
Can you heat up your coffee by stirring?
A fun question asked by a student in my class: To what extent do you heat up your coffee by stirring it?
It was a huge conceptual advance when James Prescott Joule demonstrated that "heat", as inferred by the increase in the temperature of some system, is a form of energy. In the 1840s, Joule set up an experiment described here, where a known mass falling a known distance turns a paddle-wheel within a volume of liquid in an insulated container. The paddle-wheel stirs the liquid, and eventually the liquid's viscosity, the frictional transfer of momentum between adjacent layers of fluid moving at slightly different velocities, damps out the paddle-wheel's rotation and, if you wait long enough, the fluid's motion. Joule found that this was accompanied by an increase in the fluid's temperature, an increase directly proportional to the distance fallen by the mass. The viscosity is the means by which the energy of the organized motion of the swirling fluid is transferred to the kinetic energy of the disorganized motion of individual fluid molecules.
Suppose you stir your coffee at a roughly constant stirring speed. This is adding at a steady rate to the (disorganized) energy content of the coffee. If we are content with rough estimates, we can get a sense of the power you are dumping into the coffee by an approach close to dimensional analysis.
The way viscosity \(\mu\) is defined, the frictional shear force per unit area is given by the viscosity times the velocity gradient - that is, the frictional force per area in the \(x\)-direction at some piece of the \(x-y\) plane for fluid flowing in the \(x\)-direction is going to be given by \(\mu (\partial u/\partial z) \), where \(z\) is the normal direction and \(u\) is the \(x\)-component of the fluid velocity.
Very very roughly (because the actual fluid flow geometry and velocity field are messy and complicated), the power dumped in by stirring is going to be something like (volume of cup)*(viscosity)*(typical velocity gradient)^2. A mug holds about 0.35L = 3.5e-4 m^3 of coffee. The viscosity of coffee is going to be something like that of warm water. Looking that up here, the viscosity is going to be something like 3.54e-4 Pa-s. A really rough velocity gradient is something like the steady maximum stirring velocity (say 20 cm/s) divided by the radius of the mug (say 3 cm). If you put all that together, you get that the effective input power to your coffee from stirring is at the level of a few microwatts. Pretty meager, and unlikely to balance the rate at which energy leaves by thermal conduction through the mug walls and evaporation of the hottest water molecules.
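Here is that same rough estimate as a few lines of Python; all the inputs are just the ballpark numbers above:

```python
# Rough estimate of viscous heating power from stirring a mug of coffee:
# power ~ (volume) * (viscosity) * (typical velocity gradient)^2

volume = 3.5e-4   # mug volume, m^3 (~0.35 L)
mu = 3.54e-4      # viscosity of warm water, Pa s
v = 0.20          # typical stirring speed, m/s
r = 0.03          # mug radius, m

shear_rate = v / r                      # crude velocity gradient, 1/s
power = volume * mu * shear_rate**2     # W

print(f"stirring power ~ {1e6 * power:.1f} microwatts")   # a few microwatts
```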
Still, when you stir your coffee, you are veeeerrry slightly heating it! update: As the comments point out, and as I tried to imply above, you are unlikely to produce a net increase in temperature through stirring. When you stir you improve the heat transfer between the coffee and the mug walls (basically short-circuiting the convective processes that would tend to circulate the coffee around if you left the coffee alone).
Friday, September 28, 2018
Annual Nobel speculation thread
As my friend DanM pointed out in the comments of a previous post, it's Nobel season again, next Tuesday for physics. Dan puts forward his prediction of Pendry and Smith for metamaterials/negative index of refraction. (You could throw in Yablonovitch for metamaterials.) I will, once again, make my annual (almost certainly wrong) prediction of Aharonov and Berry for geometric phases. Another possibility in this dawning age of quantum information is Aspect, Zeilinger, and Clauser for Bell's inequality tests. Probably not an astrophysics one, since gravitational radiation was the winner last year.
Thursday, September 20, 2018
What’s in a name? CMP
At a recent DCMP meeting, my colleague Erica Carlson raised an important point: Condensed matter physics as a discipline is almost certainly hurt relative to other areas, and in the eye of the public, by having the least interesting, most obscure descriptive name. Seemingly every other branch of physics has a name that either sounds cool, describes the discipline at a level immediately appreciated by the general public, or both. Astrophysics is astro-physics, and just sounds badass. Plasma physics is exciting because, come on, plasma. Biophysics is clearly the physics relevant to biology. High energy or particle physics are descriptive and have no shortage of public promotion. Atomic physics has a certain retro-future vibe.
In contrast, condensed matter, while accurate, really does not conjure any imagery at all for the general public, or sound very interesting. If the first thing you have to do after saying “condensed matter” is use two or three sentences to explain what that means, then the name has failed in one of its essential missions.
So, what would be better alternatives? “Quantum matter” sounds cool, but doesn’t really explain much, and leaves out soft CM. The physics of everything you can touch is interesting, but prosaic. Suggestions in the comments, please!
Friday, September 14, 2018
Recently on the arxiv
While it's been a busy time, a couple of interesting papers caught my eye:
arxiv:1808.07865 - Yankowitz et al., Tuning superconductivity in twisted bilayer graphene
This lengthy paper, a collaboration between the groups of Andrea Young at UCSB and Cory Dean at Columbia, is (as far as I know) the first independent confirmation of the result from Pablo Jarillo-Herrero's group at MIT about superconductivity in twisted bilayer graphene. The new paper also shows how tuning the interlayer coupling via in situ pressure (a capability of the Dean lab) affects the phase diagram. Cool stuff.
arXiv:1809.04637 - Fatemi et al., Electrically Tunable Low Density Superconductivity in a Monolayer Topological Insulator
arxiv:1809.04691 - Sajadi et al., Gate-induced superconductivity in a monolayer topological insulator
While I haven't had a chance to read them in any depth, these two papers report superconductivity in gated monolayer WTe2, a remarkable material already shown to act as a 2D topological insulator (quantum spin Hall insulator).
Seems like there is plenty of interesting physics that is going to keep turning up in these layered systems as material quality and device fabrication processes continue to improve.
Tuesday, September 04, 2018
Looking back at the Schön scandal
As I mentioned previously, I've realized in recent weeks that many current students out there may never have heard of Jan Hendrik Schön, and that seems wrong, a missed opportunity for a cautionary tale about responsible conduct of research. It's also a story that gives a flavor of the time and touches on other issues still current today - faddishness and competitiveness in top-level science, the allure of glossy publications, etc. It ended up being too long for a blog post, and it seemed inappropriate to drag it out over many posts, so here is a link to a pdf. Any errors are mine and are probably the result of middle-aged memory. After all, this story did start twenty years ago. I'm happy to make corrections if appropriate. update 9/9/18 - corrected typos and added a couple of sentences to clarify things. update, 2020: This write-up is now deposited in Rice's scholarship archive and has a doi: https://doi.org/10.25611/8P39-3K49.
Wednesday, August 29, 2018
Unidentified superconducting objects, again.
I've had a number of people ask me why I haven't written anything about the recent news and resulting kerfuffle (here, here, and here for example) in the media regarding possible high temperature superconductivity in Au/Ag nanoparticles. The fact is, I've written before about unidentified superconducting objects (also see here), and so I didn't have much to say. I exchanged some email with the IISc PI back in late July with some questions, and his responses to my questions are in line with what others have said. Extraordinary claims require extraordinary evidence. The longer this goes on without independent confirmation, the more likely it is that this will fade away.
Various discussions I've had about this have, however, spurred me to try writing down my memories and lessons learned from the Schön scandal, before the inevitable passage of time wipes more of the details from my brain. I'm a bit conflicted about this - it was 18 years ago, there's not much point in rehashing the past, and Eugenie Reich's book covered this very well. At the same time, it's clear that many students today have never even heard of Schön, and I feel like I learned some valuable lessons from the whole situation. It'll take some time to see if I am happy with how this turns out before I post some or all of it. Update: I've got a draft done, and it's too long for a blog post - around 9000 words. I'll probably convert it to pdf when I'm happy with it and link to it somehow.
Friday, August 24, 2018
What is a Tomonaga-Luttinger Liquid?
I've written in the past (say here and here) about how we think about the electrons in a conventional metal as forming a Fermi Liquid. (If the electrons didn't interact at all, then colloquially we call the system a Fermi gas. The word "liquid" is shorthand for saying that the interactions between the particles that make up the liquid are important. You can picture a classical liquid as a bunch of molecules bopping around, experiencing some kind of short-ranged repulsion so that they can't overlap, but with some attraction that favors the molecules to be bumping up against each other - the typical interparticle separation is comparable to the particle size in that classical case.) People like Lev Landau and others had the insight that essential features of the Fermi gas (the Pauli principle being hugely important, for example) tend to remain robust even if one thinks about "dialing up" interactions between the electrons.
A consequence of this is that in a typical metal, while the details may change, the lowest energy excitations of the Fermi liquid (the electronic quasiparticles) should be very much like the excitations of the Fermi gas - free electrons. Fermi liquid quasiparticles each carry the electronic amount of charge, and they each carry "spin", angular momentum that, together with their charge, makes them act like tiny little magnets. These quasiparticles move at a typical speed called the Fermi velocity. This all works even though the like-charge electrons repel each other.
For electrons confined strictly in one dimension, though, the situation is different, and the interactions have a big effect on what takes place. Tomonaga (shared the Nobel prize with Feynman and Schwinger for quantum electrodynamics, the quantum theory of how charges interact with the electromagnetic field) and later Luttinger worked out this case, now called a Tomonaga-Luttinger Liquid (TLL). In one dimension, the electrons literally cannot get out of each other's way - the only kind of excitation you can have is analogous to a (longitudinal) sound wave, where there are regions of enhanced or decreased density of the electrons. One surprising result from this is that charge in 1d propagates at one speed, tuned by the electron-electron interactions, while spin propagates at a different speed (close to the Fermi velocity). This shows how interactions and restricted dimensionality can give collective properties that are surprising, seemingly separating the motion of spin and charge when the two are tied together for free electrons.
These unusual TLL properties show up when you have electrons confined to truly one dimension, as in some semiconductor nanowires and in single-walled carbon nanotubes. Directly probing this physics is actually quite challenging. It's tricky to look at charge and spin responses separately (though some experiments can do that, as here and here) and some signatures of TLL response can be subtle (e.g., power law responses in tunneling with voltage and temperature where the accessible experimentally reasonable ranges can be limited).
The cold atom community can create cold atomic Fermi gases confined to one-dimensional potential channels. In those systems the density of atoms plays the role of charge, some internal (hyperfine) state of the atoms plays the role of spin, and the experimentalists can tune the effective interactions. This tunability plus the ability to image the atoms can enable very clean tests of the TLL predictions that aren't readily done with electrons.
So why care about TLLs? They are an example of non-Fermi liquids, and there are other important systems in which interactions seem to lead to surprising, important changes in properties. In the copper oxide high temperature superconductors, for example, the "normal" state out of which superconductivity emerges often seems to be a "strange metal", in which the Fermi Liquid description breaks down. Studying the TLL case can give insights into these other important, outstanding problems.
Saturday, August 18, 2018
Phonons and negative mass
There has been quite a bit of media attention given to this paper, which looks at whether sound waves involve the transport of mass (and therefore whether they should interact with gravitational fields and produce gravitational fields of their own).
The authors conclude that, under certain circumstances, sound wavepackets (phonons, in the limit where we really think about quantized excitations) rise in a downward-directed gravitational field. Considered as a distinct object, such a wavepacket has some property, the amount of "invariant mass" that it transports as it propagates along, that turns out to be negative.
Now, most people familiar with the physics of conventional sound would say, hang on, how do sound waves in some medium transport any mass at all? That is, we think of ordinary sound in a gas like air as pressure waves, with compressions and rarefactions, regions of alternating enhanced and decreased density (and pressure). In the limit of small amplitudes (the "linear regime"), we can consider the density variations in the wave to be mathematically small, meaning that we can use the parameter \(\delta \rho/\rho_{0}\) as a small perturbation, where \(\rho_{0}\) is the average density and \(\delta \rho\) is the change. Linear regime sound usually doesn't transport mass. The same is true for sound in the linear regime in a conventional liquid or a solid.
In the paper, the authors do an analysis where they find that the mass transported by sound is proportional with a negative sign to \(dc_{\mathrm{s}}/dP\), how the speed of sound \(c_{\mathrm{s}}\) changes with pressure for that medium. (Note that for an ideal gas, \(c_{\mathrm{s}} = \sqrt{\gamma k_{\mathrm{B}}T/m}\), where \(\gamma\) is the ratio of heat capacities at constant pressure and volume, \(m\) is the mass of a gas molecule, and \(T\) is the temperature. There is no explicit pressure dependence, and sound is "massless" in that case.)
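(As a sanity check on that ideal-gas expression, here is a minimal sketch evaluating \(c_{\mathrm{s}}\) for room-temperature air; it returns the familiar ~340 m/s.)

```python
# Ideal-gas sound speed c_s = sqrt(gamma * k_B * T / m), evaluated for air.
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
gamma = 1.4            # ratio of heat capacities for a diatomic gas
T = 293.0              # temperature, K
m = 29 * 1.66054e-27   # average mass of an "air molecule", kg (~29 u)

c_s = math.sqrt(gamma * k_B * T / m)
print(f"c_s ~ {c_s:.0f} m/s")   # ~343 m/s
```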
I admit that I don't follow all the details, but it seems to me that the authors have found that for a nonlinear medium such that \(dc_{\mathrm{s}}/dP > 0\), sound wavepackets have a bit less mass than the average density of the surrounding medium. That means that they experience buoyancy (they "fall up" in a downward-directed gravitational field), and exert an effectively negative gravitational potential compared to their background medium. It's a neat result, and I can see where there could be circumstances where it might be important (e.g. sound waves in neutron stars, where the density is very high and you could imagine astrophysical consequences). That being said, perhaps someone in the comments can explain why this is being portrayed as so surprising - I may be missing something.
Tuesday, August 14, 2018
APS March Meeting 2019 - DCMP invited symposia, DMP focused topics
A reminder to my condensed matter colleagues who go to the APS March Meeting: We know the quality of the meeting depends strongly on getting good invited talks, the 30+6 minute talks that either come all in a group (an "invited session" or "invited symposium") or sprinkled down individually in the contributed sessions.
Now is the time to put together nominations for these things. The more high quality nominations, the better the content of the meeting.
The APS Division of Condensed Matter Physics is seeking nominations for invited symposia. See here for the details. The online submission deadline is August 24th!
Similarly, the APS Division of Materials Physics is seeking nominations for invited talks as part of their Focus Topic sessions. The list of Focus Topics is here. The online submission deadline for these is August 29th.
Sunday, August 12, 2018
What is (dielectric) polarization?
This post is an indirect follow-on from here, and was spawned by a request that I discuss the "modern theory of polarization". I have to say, this has been very educational for me. Before I try to give a very simple explanation of the issues, those interested in some more technical meat should look here, or here, or here, or at this nice blog post.
Colloquially, an electric dipole is an overall neutral object with some separation between its positive and negative charge. A great example is a water molecule, which has a little bit of excess negative charge on the oxygen atom, and a little deficit of electrons on the hydrogen atoms.
Once we pick an origin for our coordinate system, we can define the electric dipole moment of some charge distribution as \(\mathbf{p} \equiv \int \mathbf{r}\rho(\mathbf{r}) d^{3}\mathbf{r}\), where \(\rho\) is the local charge density. Often we care about the induced dipole, the dipole moment that is produced when some object like a molecule has its charges rearrange due to an applied electric field. In that case, \(\mathbf{p}_{\mathrm{ind}} = \alpha \cdot \mathbf{E}\), where \(\alpha\) is the polarizability. (In general \(\alpha\) is a tensor, because \(\mathbf{p}\) and \(\mathbf{E}\) don't have to point in the same direction.)
If we stick a slab of some insulator between metal plates and apply a voltage across the plates to generate an electric field, we learn in first-year undergrad physics that the charges inside the insulator slightly redistribute themselves - the material polarizes. If we imagine dividing the material into little chunks, we can define the polarization \(\mathbf{P}\) as the electric dipole moment per unit volume. For a solid, we can pick some volume and define \(\mathbf{P} = \mathbf{p}/V\), where \(V\) is the volume over which the integral is done for calculating \(\mathbf{p}\).
We can go farther than that. If we say that the insulator is built up out of a bunch of little polarizable objects each with polarizability \(\alpha\), then we can do a self-consistent calculation, where we let each polarizable object see both the externally applied electric field and the electric field from its neighboring dipoles. Then we can solve for \(\mathbf{P}\) and therefore the relative dielectric constant in terms of \(\alpha\). The result is called the Clausius-Mossotti relation.
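For the curious, here is a minimal numerical sketch of that self-consistent result, the Clausius-Mossotti relation \((\epsilon_{r}-1)/(\epsilon_{r}+2) = n\alpha/(3\epsilon_{0})\), solved for \(\epsilon_{r}\); the number density and polarizability below are purely illustrative values, not data for any particular material:

```python
# Clausius-Mossotti: (eps_r - 1)/(eps_r + 2) = n * alpha / (3 * eps0),
# solved for eps_r.  The n and alpha below are illustrative, not real data.

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

def clausius_mossotti(n, alpha):
    """Relative permittivity from number density n (1/m^3) and
    SI polarizability alpha (C m^2 / V)."""
    x = n * alpha / (3 * eps0)
    return (1 + 2 * x) / (1 - x)

n = 3e28        # number density of polarizable units, 1/m^3 (assumed)
alpha = 3e-40   # polarizability, C m^2/V (assumed)
print(f"eps_r ~ {clausius_mossotti(n, alpha):.2f}")
```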
In crystalline solids, however, it turns out that there is a serious problem! As explained clearly here, because the charge in a crystal is distributed periodically in space, the definition of \(\mathbf{P}\) given above is ambiguous because there are many ways to define the "unit cell" over which the integral is performed. This is a big deal.
The "modern theory of polarization" resolves this problem, and actually involves the electronic Berry Phase. First, it's important to remember that polarization is really defined experimentally by how much charge flows when that capacitor described above has the voltage applied across it. So, the problem we're really trying to solve is, find the integrated current that flows when an electric field is ramped up to some value across a periodic solid. We can find that by adding up all the contributions of the different electronic states that are labeled by wavevectors \(\mathbf{k}\). For each \(\mathbf{k}\) in a given band, there is a contribution that has to do with how the energy varies with \(\mathbf{k}\) (that's the part that looks roughly like a classical velocity), and there's a second piece that has to do with how the actual electronic wavefunctions vary with \(\mathbf{k}\), which is proportional to the Berry curvature. If you add up all the \(\mathbf{k}\) contributions over the filled electronic states in the insulator, the first terms all cancel out, but the second terms don't, and actually give you a well-defined amount of charge.
Bottom line: In an insulating crystal, the actual polarization that shows up in an applied electric field comes from how the electronic states vary with \(\mathbf{k}\) within the filled bands. This is a really surprising and deep result, and it was only realized in the 1990s. It's pretty neat that even "simple" things like crystalline insulators can still contain surprises (in this case, one that foreshadowed the whole topological insulator boom).
Thursday, August 09, 2018
Hydraulic jump: New insights into a very old phenomenon
Ever since I learned about them, I thought that hydraulic jumps were cool. As I wrote here, a hydraulic jump is an analog of a standing shockwave. The key dimensionless parameter for a shockwave in a gas is the Mach number, the ratio between the fluid speed \(v\) and the local speed of sound, \(c_{\mathrm{s}}\). The gas goes from supersonic (\(\mathrm{Ma} > 1\)) on one side of the shock to subsonic (\(\mathrm{Ma} < 1\)) on the other side.
For a looong time, the standard analysis of hydraulic jumps assumed that the relevant dimensionless number here was the Froude number, the ratio of fluid speed to the speed of (gravitationally driven) shallow water waves, \(\sqrt{g h}\), where \(g\) is the gravitational acceleration and \(h\) is the thickness of the liquid (say on the thin side of the jump). That's basically correct for macroscopic jumps that you might see in a canal or in my previous example.
However, a group from Cambridge University has shown that this is not the right way to think about the kind of hydraulic jump you see in your sink when the stream of water from the faucet hits the basin. (Sorry that I can't find a non-pay link to the paper.) They show this conclusively by the very simple, direct method of producing hydraulic jumps by shooting water streams horizontally onto a wall, and vertically onto a "ceiling". The fact that hydraulic jumps look the same in all these cases clearly shows that gravity can't be playing the dominant role in this case. Instead, the correct analysis is to worry about not just gravity but also surface tension. They do a general treatment (which is quite elegant and understandable to fluid mechanics-literate undergrads) and find that the condition for a hydraulic jump to form is now \(\mathrm{We}^{-1} + \mathrm{Fr}^{-2} = 1\), where \(\mathrm{Fr} \sim v/\sqrt{g h}\) as usual, and the Weber number \(\mathrm{We} \sim \rho v^{2} h/\gamma\), where \(\rho\) is the fluid density and \(\gamma\) is the surface tension. The authors do a convincing analysis of experimental data with this model, and it works well. I think it's very cool that we can still get new insights into phenomena, and this is an example understandable at the undergrad level where some textbook treatments will literally have to be rewritten.
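To get a feel for the numbers, here's a rough sketch of the criterion for kitchen-sink-like values (my guesses, not numbers from the paper). The takeaway is that for a thin, fast film the Weber term dominates the Froude term, which is why the gravity-only analysis misses the physics:

```python
import numpy as np

# Evaluate We^-1 + Fr^-2 for illustrative kitchen-sink parameters.
rho   = 1000.0    # water density, kg/m^3
gamma = 0.072     # surface tension of water, N/m
g     = 9.81      # m/s^2
h     = 0.5e-3    # film thickness on the thin (fast) side, m  (guess)
v     = 0.5       # film speed, m/s                            (guess)

Fr = v / np.sqrt(g * h)               # Froude number
We = rho * v**2 * h / gamma           # Weber number

print(f"Fr = {Fr:.1f}, We = {We:.2f}")
print(f"We^-1 + Fr^-2 = {1/We + 1/Fr**2:.2f}")   # the jump sits where this passes ~1
# Note the surface-tension term (1/We ~ 0.6) dwarfs the gravity term (1/Fr^2 ~ 0.02).
```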
Tuesday, August 07, 2018
Faculty position at Rice - experimental atomic/molecular/optical
Faculty Position in Experimental Atomic/Molecular/Optical Physics at Rice University
The Department of Physics and Astronomy at Rice University in Houston, TX (http://physics.rice.edu/) invites applications for a tenure-track faculty position in experimental atomic, molecular, and optical physics. The Department expects to make an appointment at the assistant professor level. Applicants should have an outstanding research record and recognizable potential for excellence in teaching and mentoring at the undergraduate and graduate levels. The successful candidate is expected to establish a distinguished, externally funded research program and support the educational and service missions of the Department and University.
Applicants must have a PhD in physics or related field, and they should submit the following: (1) cover letter; (2) curriculum vitae; (3) research statement; (4) three publications; (5) teaching statement; and (6) the names, professional affiliations, and email addresses of three references. For full details and to apply, please visit: http://jobs.rice.edu/postings/16140. The review of applications will begin November 1, 2018, but all those received by December 1, 2018 will be assured full consideration. The appointment is expected to start in July 2019. Further inquiries should be directed to the chair of the search committee, Prof. Thomas C. Killian (killian@rice.edu).
Rice University is an Equal Opportunity Employer with commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability or protected veteran status.
Tuesday, July 31, 2018
What is Berry phase?
On the road to discussing the Modern Theory of Polarization (e.g., pdf), it's necessary to talk about Berry phase - here, unlike many uses of the word on this blog, "phase" actually refers to a phase angle, as in a complex number \(e^{i\phi}\). The Berry phase, named for Michael Berry, is a so-called geometric phase, in that the value of the phase depends on the "space" itself and the trajectory the system takes. (For reference, the original paper is here (pdf), a nice talk about this is here, and reviews on how this shows up in electronic properties are here and here.)
A similar-in-spirit angle shows up in the problem of "parallel transport" (jargony wiki) along curved surfaces. Imagine taking a walk while holding an arrow, initially pointed east, say. You walk, always keeping the arrow pointed in the local direction of east, in the closed path shown at right. On a flat surface, when you get back to your starting point, the arrow is pointing in the same direction it did initially. On a curved (say spherical) surface, though, something different has happened. As shown, when you get back to your starting point, the arrow has rotated from its initial position, despite the fact that you always kept it pointed in the local east direction. The angle of rotation is a geometric phase analogous to Berry phase. The issue is that the local definition of "east" varies over the surface of the sphere. In more mathematical language, the basis vectors (that point in the local cardinal directions) vary in space. If you want to keep track of how the arrow vector changes along the path, you have to account for both the changing of the numerical components of the vector along each basis direction, and the change in the basis vectors themselves. This kind of thing crops up in general relativity, where it is calculated using Christoffel symbols.
So what about the actual Berry phase? To deal with this with a minimum of math, it's best to use some of the language that Feynman employed in his popular book QED. The actual math is laid out here. In Feynman's language, we can picture the quantum mechanical phase associated with some quantum state as the hand of a stopwatch, winding around. For a state \(| \psi\rangle \) (an energy eigenstate, one of the "energy levels" of our system) with energy \(E\), we learn in quantum mechanics that the phase accumulates at a rate of \(E/\hbar\), so that the phase angle after some time \(t\) is given by \(\Delta \phi = Et/\hbar\). Now suppose we were able to mess about with our system, so that energy levels varied as a function of some tuning parameter \(\lambda\). For example, maybe we can dial around an externally applied electric field by applying a voltage to some capacitor plates. If we do this slowly (adiabatically), then the system always stays in its instantaneous version of that state with instantaneous energy \(E(\lambda)\). So, in the Feynman watch picture, sometimes the stopwatch is winding fast, sometimes it's winding slow, depending on the instantaneous value of \(E(\lambda)\). You might think that the phase that would be racked up would just be found by adding up the little contributions, \(\Delta \phi = \int (E(\lambda(t))/\hbar) dt\).
However, this misses something! In the parallel transport problem above, to get the right total answer about how the vector rotates globally we have to keep track of how the basis vectors vary along the path. Here, it turns out that we have to keep track of how the state itself, \(| \psi \rangle\), varies locally with \(\lambda\). To stretch the stopwatch analogy, imagine that the hand of the stopwatch can also gain or lose time along the way because the positioning of the numbers on the watch face (determined by \(| \psi \rangle \) ) is actually also varying along the path.
[Mathematically, that second contribution to the phase adds up to be \( \int i\langle \psi(\lambda)| \partial_{\lambda}| \psi(\lambda) \rangle d \lambda\). Generally \(\lambda\) could be a vectorial thing with multiple components, so that \(\partial_{\lambda}\) would be a gradient operator with respect to \(\lambda\), and the integral would be a line integral along some trajectory of \(\lambda\). It turns out that if you want to, you can define the integrand to be an effective vector potential called the Berry connection. The curl of that vector potential is some effective magnetic field, called the Berry curvature. Then the line integral above, if it's around some closed path in \(\lambda\), is equal to the flux of that effective magnetic field through the closed path, and the accumulated Berry phase around that closed path is then analogous to the Aharonov-Bohm phase.]
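If you want to see a geometric phase fall out of an actual calculation, the standard toy problem is a spin-1/2 whose field direction is dragged slowly around a cone of opening angle \(\theta\); the Berry phase of the ground state is minus half the solid angle traced out. Here's a minimal numerical sketch using the discretized version of the phase (minus the imaginary part of the log of a product of overlaps between neighboring states around the loop):

```python
import numpy as np

# Berry phase for a spin-1/2 ground state as the field direction is carried
# around a cone at polar angle theta; expect -(1/2)*solid angle = -pi*(1 - cos(theta)).
theta = 0.8
phis = np.linspace(0, 2 * np.pi, 400, endpoint=False)

def ground_state(phi):
    # Ground-state spinor of H = -B(theta, phi) . sigma
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

states = np.array([ground_state(p) for p in phis])
# Overlaps between neighboring states around the closed loop (last wraps to first)
overlaps = np.einsum('ij,ij->i', states.conj(), np.roll(states, -1, axis=0))
berry_phase = -np.imag(np.log(np.prod(overlaps)))

print("numerical Berry phase:", berry_phase)
print("-(1/2) x solid angle: ", -np.pi * (1 - np.cos(theta)))
```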
Why is any of this of interest in condensed matter?
Well, one approach to worrying about the electronic properties of conducting (crystalline) materials is to think about starting off some electronic wavepacket, initially centered around some particular Bloch state at an initial (crystal) momentum \(\mathbf{p} = \hbar \mathbf{k}\). Then we let that wavepacket propagate around, following the rules of "semiclassical dynamics" - the idea that there is some Fermi velocity \(\partial E(\mathbf{k})/\partial \mathbf{k}\) (related to how the wavepacket racks up phase as it propagates in space), and we basically write down \(\mathbf{F} = m\mathbf{a}\) using electric and magnetic fields. Here, there is the usual phase that adds up from the wavepacket propagating in space (the Fermi velocity piece), but there can be an additional Berry phase which here comes from how the Bloch states actually vary throughout \(\mathbf{k}\)-space. That can be written in terms of an "anomalous velocity" (anomalous because it's not from the usual Fermi velocity picture), and can lead to things like the anomalous Hall effect and a bunch of other measurable consequences, including topological fun.
Monday, July 23, 2018
Math, beauty, and condensed matter physics
There is a lot of discussion these days about the beauty of mathematics in physics, and whether some ideas about mathematical elegance have led the high energy theory community down the wrong path. And yet, despite that, high energy theory still seems like a very popular professed interest of graduating physics majors. This has led me to identify what I think is another sociological challenge to be overcome by condensed matter in the broader consciousness.
Physics is all about using mathematics to model the world around us, and experiments are one way we find or constrain the mathematical rules that govern the universe and everything in it. When we are taught math in school, we end up being strongly biased by the methods we learn, so that we are trained to like exact analytical solutions and feel uncomfortable with approximations. You remember back when you took algebra, and you had to solve quadratic equations? We were taught how to factor polynomials as one way of finding the solution, and somehow if the solution didn’t work out to x being an integer, something felt wrong – the problems we’d been solving up to that point had integer solutions, and it was tempting to label problems that didn’t fit that mold as not really nicely solvable. Then you were taught the quadratic formula, with its square root, and you eventually came to peace with the idea of irrational numbers, and eventually imaginary numbers. In more advanced high school algebra courses, students run across so-called transcendental equations, like \( (x-3)e^{x} + 3 = 0\). Apart from the obvious root at \(x = 0\), there is no clean, algorithmic way to get an exact analytic solution; the nontrivial root has to be found numerically, using some computational approach that can give you an approximate answer, good to as many digits as you care to grind through on your computer.
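(For what it's worth, "solve it numerically" is a couple of lines these days. A minimal sketch using a standard bracketing root finder:)

```python
import numpy as np
from scipy.optimize import brentq

# Find the nontrivial root of (x - 3)*e^x + 3 = 0 (the trivial root is x = 0).
f = lambda x: (x - 3.0) * np.exp(x) + 3.0

root = brentq(f, 1.0, 5.0)   # the root is bracketed between x = 1 and x = 5
print(root)                  # ~2.82, accurate to machine-ish precision
print(f(root))               # residual, essentially zero
```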
The same sort of thing happens again when we learn calculus. When we are taught how to differentiate and integrate, we are taught the definitions of those operations (roughly speaking, slope of a function and area under a function, respectively) and algorithmic rules to apply to comparatively simple functions. There are tricks, variable changes, and substitutions, but in the end, we are first taught how to solve problems “in closed form” (with solutions comprising functions that are common enough to have defined names, like \(\sin\) and \(\cos\) on the simple end, and more specialized examples like error functions and gamma functions on the more exotic side). However, it turns out that there are many, many integrals that don’t have closed-form solutions, and instead can only be solved approximately, through numerical methods. The exact same situation arises in solving differential equations. Legendre, Laguerre, and Hermite polynomials, Bessel and Hankel functions, and my all-time favorite, the confluent hypergeometric function, can crop up, but generically, if you want to solve a complicated boundary value problem, you probably need numerical methods rather than analytic solutions. It can take years for people to become comfortable with the idea that numerical solutions have the same legitimacy as analytical solutions.
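A concrete example of a physically meaningful quantity with no elementary closed form: the period of a pendulum released from a large angle is a complete elliptic integral, and a few lines of numerics handle it just fine (illustrative values for the length and amplitude):

```python
import numpy as np
from scipy.integrate import quad

# Large-amplitude pendulum period: T = 4*sqrt(L/g) * K(k), k = sin(theta0/2),
# where K is a complete elliptic integral with no elementary closed form.
g, L, theta0 = 9.81, 1.0, np.radians(60)
k = np.sin(theta0 / 2)

integrand = lambda phi: 1.0 / np.sqrt(1.0 - (k * np.sin(phi))**2)
K, _err = quad(integrand, 0.0, np.pi / 2)

print("large-amplitude period (s):", 4 * np.sqrt(L / g) * K)
print("small-angle 2*pi*sqrt(L/g) (s):", 2 * np.pi * np.sqrt(L / g))
```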
I think condensed matter suffers from a similar culturally acquired bias. Somehow there is an implicit impression that high energy is clean and neat, with inherent mathematical elegance, thanks in part to (1) great marketing by high energy theorists, and (2) the fact that it deals with things that seem like they should be simple - fundamental particles and the vacuum. At the same time, even high school chemistry students pick up pretty quickly that we actually can't solve many-electron quantum mechanics problems without a lot of approximations. Condensed matter seems like it must be messy. Our training, with its emphasis on exact analytic results, doesn't lay the groundwork for people to be receptive to condensed matter, even when it contains a lot of mathematical elegance and sometimes emergent exactitude.
Wednesday, July 18, 2018
Items of interest
While trying to write a few things (some for the blog, some not), I wanted to pass along some links of interest:
- APS March Meeting interested parties: The time to submit nominations for invited sessions for the Division of Condensed Matter Physics is now (deadline of August 24). See here. As a member-at-large for DCMP, I've been involved in the process now for a couple of years, and lots of high quality nominations are the best way to get a really good meeting. Please take the time to nominate!
- Similarly, now is the time to nominate people for DCMP offices (deadline of Sept. 1).
- There is a new tool available called Scimeter that is a rather interesting add-on to the arxiv. It has done some textual analysis of all the preprints on the arxiv, so you can construct a word cloud for an author (see at right for mine, which is surprisingly dominated by "field effect transistor" - I guess I use that phrase too often) or group of authors; or you can search for similar authors based on that same word cloud analysis. Additionally, the tool uses that analysis to compare breadth of research topics spanned by an author's papers. Apparently I am 0.3 standard deviations more broad than the mean broadness, whatever that means.
- Thanks to a colleague, I stumbled on Fermat's Library, a great site that stockpiles some truly interesting and foundational papers across many disciplines and allows shared commenting in the margins (hence the Fermat reference).
Sunday, July 08, 2018
Physics in the kitchen: Frying tofu
I was going to title this post "On the emergence of spatial and temporal coherence in frying tofu", or "Frying tofu: Time crystal?", but decided that simplicity has virtues.
I was doing some cooking yesterday, and I was frying some firm tofu in a large, deep skillet in my kitchen. I'd cut the stuff into roughly 2 cm by 2 cm by 1 cm blocks, separated by a few mm from each other but mostly covering the whole cooking surface, and was frying them in a little oil (enough to coat the bottom of the skillet) when I noticed something striking, thanks to the oil reflecting the overhead light. The bubbles forming in the oil under/around the tofu were appearing and popping at what looked to my eye like very regular intervals, at around 5 Hz. Moreover (and this was the striking bit), the bubbles across a large part of the whole skillet seemed to be reasonably well synchronized. This went on long enough (a couple of minutes, until I needed to flip the food) that I really should have gone to grab my camera, but I missed my chance to immortalize this on YouTube because (a) I was cooking, and (b) I was trying to figure out if this was some optical illusion.
From the physics perspective, here was a driven nonequilibrium system (heated from below by a gas flame and conduction through the pan) that spontaneously picked out a frequency for temporal oscillations, and apparently synchronized the phase across the pan well. Clearly I should have filmed this and called it a classical time crystal. Would've been a cheap and tasty paper. (I kid, I kid.)
What I think happened is this. The bubbles in this case were produced by the moisture inside the tofu boiling into steam (due to the local temperature and heat flux) and escaping from the bottom (hottest) surface of the tofu into the oil to make bubbles. There has to be some rate of steam formation set by the latent heat of vaporization for water, the heat flux (and thus thermal conductivity of the pan, oil, and tofu), and the local temperature (again involving the thermal conductivity and specific heat of the tofu). The surface tension of the oil, its density, and the steam pressure figure into the bubble growth and how big the bubbles get before they pop. I'm sure someone far more obsessive than I am could do serious dimensional analysis about this. The bubbles then couple to each other via the surrounding fluid, and synched up because of that coupling (maybe like this example with flames). This kind of self-organization happens all the time - here is a nice talk about this stuff. This kind of synchronization is an example of universal, emergent physics.
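If you want to play with the synchronization idea, the classic minimal model is the Kuramoto model: a bunch of oscillators with slightly different natural frequencies, all coupled through a shared mean field, that lock together once the coupling is strong enough. The sketch below is just my cartoon of the bubbling pan (all numbers invented), not a real model of boiling:

```python
import numpy as np

# Kuramoto-style toy: N "bubbling sites" with natural frequencies near 5 Hz,
# coupled through a global mean field. Strong enough coupling -> phase locking.
rng = np.random.default_rng(0)
N, K, dt, steps = 50, 5.0, 0.01, 5000
omega = 2 * np.pi * 5.0 + rng.normal(0.0, 1.0, N)   # rad/s, ~5 Hz with some spread
theta = rng.uniform(0.0, 2 * np.pi, N)              # random initial phases

for _ in range(steps):
    mean_field = np.mean(np.exp(1j * theta))        # complex order parameter
    r, psi = np.abs(mean_field), np.angle(mean_field)
    theta += dt * (omega + K * r * np.sin(psi - theta))   # Kuramoto update (Euler step)

print("order parameter r =", np.abs(np.mean(np.exp(1j * theta))))
# r close to 1: the sites pop nearly in phase. Set K = 0 and r stays near 0.
```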
Tuesday, July 03, 2018
A metal superconducting transistor (?!)
A paper was published yesterday in Nature Nanotechnology that is quite surprising, at least to me, and I thought I should point it out.
The authors make superconducting wires (e-beam evaporated Ti in the main text, Al in the supporting information) that appear to be reasonably "good metals" in the normal state. [For the Ti case, for example, their electrical resistance is about 10 Ohms per square, very far from the "quantum of resistance" \(h/2e^{2}\approx 12.9~\mathrm{k}\Omega\). This suggests that the metal is electrically pretty homogeneous (as opposed to being a bunch of loosely connected grains). Similarly, the inferred resistivity of around 30 \(\mu\Omega\)-cm is comparable to expectations for bulk Ti (which is actually a bit surprising to me).]
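(A quick sanity check on those numbers, which is just my arithmetic rather than anything from the paper: sheet resistance is resistivity over thickness, \(R_{\square} = \rho/t\), so \(t = \rho/R_{\square} \approx (30 \times 10^{-8}~\Omega\,\mathrm{m})/(10~\Omega) = 3\times 10^{-8}~\mathrm{m}\), i.e. a film roughly 30 nm thick, which is a plausible e-beam-evaporated thickness.)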
The really surprising thing is that the application of a large voltage between a back-gate (the underlying Si wafer, separated from the wire by 300 nm of SiO2) and the wire can suppress the superconductivity, dialing the critical current all the way down to zero. This effect happens symmetrically with either polarity of bias voltage.
This is potentially exciting because having some field-effect way to manipulate superconductivity could let you do very neat things with superconducting circuitry.
The reason this is startling is that ordinarily field-effect modulation of metals has almost no effect. In a typical metal, a dc electric field only penetrates a fraction of an atomic diameter into the material - the gas of mobile electrons in the metal has such a high density that it can shift itself by a fraction of a nanometer and self-consistently screen out that electric field.
Here, the authors argue (in a model in the supplemental information that I need to read carefully) that the relevant physical scale for the gating of the superconductivity is, empirically, the London penetration depth, a much longer spatial scale (hundreds of nm in typical low temperature superconductors). I need to think about whether this makes sense to me physically.
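To see why that's such a surprising length scale, here's a rough free-electron estimate (illustrative numbers, not the actual parameters for Ti) comparing the Thomas-Fermi screening length, which sets how far a static electric field penetrates a normal metal, to the London penetration depth:

```python
import numpy as np

# Free-electron-style estimates; all inputs are illustrative, not fit to Ti.
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
e, m_e = 1.602e-19, 9.109e-31
n = 5e28                    # carrier density, 1/m^3
E_F = 5.0 * 1.602e-19       # Fermi energy ~5 eV

# Thomas-Fermi screening length: lambda_TF = sqrt(2*eps0*E_F / (3*n*e^2))
lambda_TF = np.sqrt(2 * eps0 * E_F / (3 * n * e**2))
# London penetration depth (taking the superfluid density ~ n): sqrt(m/(mu0*n*e^2))
lambda_L = np.sqrt(m_e / (mu0 * n * e**2))

print(f"Thomas-Fermi length ~ {lambda_TF * 1e9:.3f} nm")  # a small fraction of a nm
print(f"London depth        ~ {lambda_L * 1e9:.0f} nm")   # tens of nm; longer in dirty films
```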
Sunday, July 01, 2018
Book review: The Secret Life of Science
I recently received a copy of The Secret Life of Science: How It Really Works and Why It Matters, by Jeremy Baumberg of Cambridge University. The book is meant to provide a look at the "science ecosystem", and it seems to be unique, at least in my experience. From the perspective of a practitioner but with a wider eye, Prof. Baumberg tries to explain much of the modern scientific enterprise - what is modern science (with an emphasis on "simplifiers" [often reductionists] vs. "constructors" [closer to engineers, building new syntheses] - this is rather similar to Narayanamurti's take described here), who are the different stakeholders, publication as currency, scientific conferences, science publicizing and reporting, how funding decisions happen, career paths and competition, etc.
I haven't seen anyone else try to spell out, for a non-scientist audience, how the scientific enterprise fits together from its many parts, and that alone makes this book important - it would be great if someone could get some policy-makers to read it. I agree with many of the book's main observations:
- The actual scientific enterprise is complicated (as pointed out repeatedly with one particular busy figure that recurs throughout the text), with a bunch of stakeholders, some cooperating, some competing, and we've arrived at the present situation through a complex, emergent history of market forces, not some global optimization of how best to allocate resources or how to choose topics.
- Scientific publishing is pretty bizarre, functioning to disseminate knowledge as well as a way of keeping score; peer review is annoying in many ways but serves a valuable purpose; for-profit publications can distort people's behaviors because of the prestige associated with some.
- Conferences are also pretty weird, serving purposes (networking, researcher presentation training) that are not really what used to be the point (putting out and debating new results).
- Science journalism is difficult, with far more science than can be covered, squeezed resources for real journalism, incentives for PR that can oversimplify or amp up claims and controversy, etc.
It would be very interesting to get the perspective of someone in a very different scientific field (e.g., biochemistry) on Prof. Baumberg's presentation. My own research interests align closely with his, so it's hard for me to judge whether his point of view on some matters matches up well with other fields. (I do wonder about some of the numbers that appear. Has the number of scientists in France really grown by a factor of three since 1980? And by a factor of five in Spain over that time?)
If you know someone who is interested in a solid take on the state of play in (largely academic) science in the West today, this is a very good place to start.
Monday, June 25, 2018
Don't mince words, John Horgan. What do you really think?
In his review of Sabine Hossenfelder's new book for Scientific American, John Horgan begins by saying:
Does anyone who follows physics doubt it is in trouble? When I say physics, I don’t mean applied physics, material science or what Murray Gell-Mann called “squalid-state physics.” I mean physics at its grandest, the effort to figure out reality. Where did the universe come from? What is it made of? What laws govern its behavior? And how probable is the universe? Are we here through sheer luck, or was our existence somehow inevitable?
Wow. Way to back-handedly imply that condensed matter physics is not grand or truly important. The frustrating thing is that Horgan knows perfectly well that condensed matter physics has been the root of multiple profound ideas (Higgs mechanism, anyone?), as well as shaping basically all of the technology he used to write that review. He goes out of his way here to make clear that he doesn't think any of that is really interesting. Why do that as a rhetorical device?
Sunday, June 24, 2018
There is no such thing as a rigid solid.
How's that for a provocative, click-bait headline?
More than any other branch of physics, condensed matter physics highlights universality, the idea that some properties crop up repeatedly, in many physical systems, independent of and despite the differences in the microscopic building blocks of the system. One example that affects you pretty much all the time is emergence of rigid solids from the microscopic building blocks that are atoms and molecules. You may never think about it consciously, but mechanically rigid solids make up much of our environment - our buildings, our furniture, our roads, even ourselves.
A quartz crystal is an example of a rigid solid. By solid, I mean that the material maintains its own shape without confining walls, and by rigid, I mean that it “resists deformation”. Deforming the crystal – stretching it, squeezing it, bending it – involves trying to move some piece of the crystal relative to some other piece of the crystal. If you try to do this, it might flex a little bit, but the crystal pushes back on you. The ratio between the pressure (say) that you apply and the fractional change in the crystal’s size is called an elastic modulus, and it’s a measure of rigidity. Diamond has a big elastic modulus, as does steel. Rubber has a comparatively small elastic modulus – it’s squishier. Rigidity implies solidity. If a hunk of material has rigidity, it can withstand forces acting on it, like gravity. (Note that I'm already assuming that atoms can't pass through each other, which turns out to be a macroscopic manifestation of quantum mechanics, even though people rarely think of it that way. I've discussed this recently here.)
Take away the walls of an aquarium, and the rectangular “block” of water in there can’t resist gravity and splooshes all over the table. In free fall as in the International Space Station, a blob of water will pull itself into a sphere, as it doesn’t have the rigidity to resist surface tension, the tendency of a material to minimize its surface area.
Rigidity is an emergent property. One silicon or oxygen atom isn’t rigid, but somehow, when you put enough of them together under the right conditions, you get a mechanically solid object. A glass, in contrast to a crystal, looks very different if you zoom in to the atomic scale. In the case of silicon dioxide, while the detailed bonding of each silicon to two oxygens looks similar to the case of quartz, there is no long-range pattern to how the atoms are arranged. Indeed, while it would be incredibly difficult to do experimentally, if you could take a snapshot of molten silica glass at the atomic scale, from the positions of the atoms alone, you wouldn’t be able to tell whether it was molten or solidified. However, despite the structural similarities to a liquid, solid glass is mechanically rigid. In fact, some glasses are actually far more stiff than crystalline solids – metallic glasses are highly prized for exactly this property – despite having a microscopic structure that looks like a liquid.
Somehow, these two systems (quartz and silica glass), with very different detailed structures, have very similar mechanical properties on large scales. Maybe this example isn't too convincing. After all, the basic building blocks in both of those materials are really the same. However, mechanical rigidity shows up all the time in materials with comparatively high densities. Water ice is rigid. The bumper on your car is rigid. The interior of a hard-boiled egg is rigid. Concrete is rigid. A block of wood is rigid. A vacuum-packed bag of ground espresso-roasted coffee is rigid. Somehow, mechanical rigidity is a common collective fate of many-particle systems. So where does it originate? What conditions are necessary to have rigidity?
Interestingly, this question remains one that is a subject of research. Despite my click-bait headline, it sure looks like there are materials that are mechanically rigid. However, it can be shown mathematically (!) that "equilibrium states of matter that break spontaneously translational invariance...flow if even an infinitesimal stress is applied". That is, take some crystal or glass, where the constituent particles are sitting in well-defined locations (thus "breaking translational invariance"), and apply even a tiny bit of shear, and the material will flow. It can be shown mathematically that the particles in the bulk of such a material can always rearrange a tiny amount in a way that should end up propagating out to displace the surface of the material, which really is what we mean by "flow". How do we reconcile this statement with what we see every day - for example, that touching your kitchen table really does not cause its surface to flow like a liquid?
Some of this is the kind of hair-splitting/no-true-Scotsman definitional stuff that shows up sometimes in theoretical physics. A true equilibrium state would last forever. To say that "equilibrium states of matter that break spontaneously translational invariance" are unstable under stress just means that the final, flowed rearrangement of atoms is energetically favored once stress is applied; it doesn't say anything about how long it takes the system to get there.
We see other examples of this kind of thing in condensed matter and statistical physics. It is possible to superheat liquid water above its boiling point. Under those conditions, the gas phase is thermodynamically favored, but to get from the homogeneous liquid to the gas requires creating a blob of gas, with an accompanying liquid/gas interface that is energetically expensive. The result is an "activation barrier".
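The standard way to quantify that barrier is classical nucleation theory: a vapor bubble of radius \(r\) costs surface energy \(4\pi r^{2}\gamma\) but gains bulk energy \(\sim \Delta p \cdot (4/3)\pi r^{3}\), so the critical radius is \(r^{*} = 2\gamma/\Delta p\) and the barrier height is \(\Delta G^{*} = 16\pi \gamma^{3}/(3\Delta p^{2})\). A quick sketch with illustrative numbers (not a careful treatment of real superheated water) shows why the metastable liquid can hang around essentially forever:

```python
import numpy as np

# Classical nucleation theory estimate for boiling mildly superheated water.
# All numbers are illustrative.
gamma = 0.059        # liquid-vapor surface tension near 100 C, N/m
delta_p = 1.0e5      # pressure excess of vapor in the bubble, Pa (modest superheat)
kB, T = 1.381e-23, 393.0   # ~120 C

r_star = 2 * gamma / delta_p                         # critical bubble radius
barrier = 16 * np.pi * gamma**3 / (3 * delta_p**2)   # activation barrier, J

print(f"critical radius ~ {r_star * 1e6:.1f} microns")
print(f"barrier / kT ~ {barrier / (kB * T):.1e}")
# A barrier tens of millions of kT high means homogeneous nucleation essentially
# never happens; real boiling starts at surfaces and pre-existing gas pockets.
```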
Turns out, that appears to be the right way to think about solids. Solids only appear rigid on any useful timescale because the timescale to create defects and reach the flowed state is very, very long. A recent discussion of this, with some really good references, is here, in a paper that appeared just this spring in the Proceedings of the National Academy of Sciences of the US. An earlier work (a PRL) trying to quantify how this all works is here, if you're interested.
One could say that this is a bit silly - obviously we know empirically that there are rigid materials, and any analysis saying they don't exist has to be off the mark somehow. However, in science, particularly physics, this kind of study, where observation and some fairly well-defined model seem to contradict each other, is precisely where we tend to gain a lot of insight. (This is something we have to be better at explaining to non-scientists....)