A blog about condensed matter and nanoscale physics. Why should high energy and astro folks have all the fun?
Thursday, December 31, 2009
Happy New Year
Happy New Year to my readers. Posts will pick up again in 2010. In the mean time, you might be amused by a couple of science-y gifts I got this holiday season. I've got a great science museum-type demo in mind inspired by this desk toy, and no lab should ever be without a sonic screwdriver. Finally, while not strictly science-related, this is very funny, containing such gems as Super Monkey Collider Loses Funding.
Friday, December 25, 2009
Arxiv articles I should read
Some recent arxiv articles that I really should find the time to read in depth:
arxiv:0809.3474 - Affleck, Quantum impurity problems in condensed matter physics
Ian Affleck has revised his (rather mathematical) Les Houches lecture notes about quantum impurity problems (typically a single impurity, such as an unpaired electron, in contact with some kind of quantum environment).
arxiv:0904.1933 - Cubrovic, Zaanen, and Schalm, String theory, quantum phase transitions, and the emergent Fermi liquid
This is a Science paper related to my earlier post about the connection between certain quantum gravity models and condensed matter theories.
arxiv:0912.4868 - Heiblum, Fractional charge determination via quantum shot noise measurements
Heiblum is a consummate experimentalist, and this article in honor of Yoseph Imry looks like a great review of this area, particularly recent insights into the subtleties that happen with temperature and bias.
Sunday, December 20, 2009
Noise IV
The last kind of electrical noise I wanted to discuss is called 1/f or "flicker" noise, and it's something of a special case. It's intrinsic in the sense that it originates with the material whose conductance or resistance is being measured, but it's usually treated as extrinsic, in the sense that its physical mechanism is not what's of interest and in the limit of an "ideal" sample it probably wouldn't be present. Consider a resistance measurement (that is, flowing current through some sample and looking at the resulting voltage drop). As the name implies, the power spectral density of voltage fluctuations, S_V, has a component that varies approximately inversely with the frequency. That is, the voltage fluctuates as a function of time, and the slow fluctuations have larger amplitudes than the fast fluctuations. Unlike shot noise, which results from the discrete nature of charge, 1/f noise exists because the actual resistance of the sample itself is varying as a function of time. That is, some fluctuation dV(t) comes from I dR(t), where I is the average DC current. On the bright side, that means there is an obvious test of whether the noise you're seeing is of this type: real 1/f noise power scales like the square of the current (in contrast to shot noise, which is linear in I, and Johnson-Nyquist noise, which is independent of I).
The particular 1/f form is generally thought to result from there being many "fluctuators" with a broad distribution of time scales. A "fluctuator" is some microscopic degree of freedom, usually considered to have two possible states, such that the electrical resistance is different in each state. The ubiquitous two-level systems that I've mentioned before can be fluctuators. Other candidates include localized defect states ("traps") that can either be empty or occupied by an electron. These latter are particularly important in semiconductor devices like transistors. In the limit of a single fluctuator, the resistance toggles back and forth stochastically between two states in what is often called "telegraph noise".
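If you want to see how a pile of two-state fluctuators conspires to produce a 1/f spectrum, here is a quick and dirty numerical sketch (the rates, trace length, and number of fluctuators are arbitrary choices, and this is meant to be illustrative rather than a model of any particular material): superpose many random telegraph signals whose switching rates are spread uniformly on a logarithmic scale, and the resulting power spectral density comes out close to 1/f over the range of rates you put in.

```python
# Quick-and-dirty sketch: many random telegraph signals, with switching rates spread
# uniformly in log(rate), summed together. The resulting power spectral density is
# close to 1/f over the range of rates included. All parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
fs = 1.0e4                 # sampling rate (Hz)
n_samples = 2**20          # length of the simulated time trace
n_fluct = 200              # number of independent two-state fluctuators
rates = 10.0 ** rng.uniform(-1, 3, n_fluct)   # switching rates from 0.1 Hz to 1 kHz

signal = np.zeros(n_samples)
for r in rates:
    # each sample flips the fluctuator with probability r/fs (valid for r << fs)
    flips = rng.random(n_samples) < r / fs
    state = np.cumsum(flips) % 2              # telegraph signal toggling between 0 and 1
    signal += state - state.mean()

# single one-sided periodogram; in real life you'd average segments (Welch's method)
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
psd = 2 * np.abs(np.fft.rfft(signal))**2 / (fs * n_samples)

# fit the log-log slope between 1 Hz and 100 Hz; pure 1/f corresponds to -1
mask = (freqs > 1) & (freqs < 100)
slope = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)[0]
print(f"log-log slope of the PSD: {slope:.2f} (ideal 1/f noise gives -1)")
```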
A thorough bibliography of 1/f noise is posted here by a thoughtful person.
I can't leave this subject without talking about one specific instance of 1/f noise that I think is very neat physics. In mesoscopic conductors, where electronic conduction is effectively a quantum interference experiment, changing the disorder seen by the electrons can lead to fluctuations in the conductance (within a quantum coherent volume) by an amount ~ e²/h. In this case, the resulting 1/f noise observed in such a conductor actually grows with decreasing temperature, which is the opposite of, e.g., Johnson-Nyquist noise. The reason is the following. In macroscopic conductors, ensemble averaging of the fluctuations over all the different conducting regions of a sample suppresses the noise; as T decreases, though, the typical quantum coherence length grows, and this kind of ensemble averaging is reduced, since the sample contains fewer coherent regions. My group has done some work on this in the past.
Thursday, December 17, 2009
Physics and Industry
I read this column on the back page of this month's APS News, and I think it hits a lot of the right notes, until this paragraph:
Many of the Nation’s physics departments and other departments staffed by physicists should encourage some of their faculty members to take a two or three year sabbatical leave and join the physics staffs of companies wishing to use their skills to strengthen or rebuild their industrial bases. With the expected cutbacks in Federal spending for everything, including scientific research, the physics academic staffs, that already spend far too much of their time writing proposals to compete for Government grants, should help the Nation by joining one of the many companies who really could use their skills to refine their products and introduce the innovations so characteristic of their physics training. In their new industrial positions, the successes of these industrially focused physicists would encourage further enrollments in physics and all related sciences. Meanwhile the Nation’s manufacturing base would be strengthened and rebuilt.
While this is nice in the abstract, I'm trying to imagine how this is any more likely to happen than me getting my own unicorn and a candy-cane tree. How can an academic physicist with a functioning research group possibly take off for two or three years to work in industry? What happens to their students? Their other funding? What university would actually encourage this, given that they have to have the salary line, lab space, and office space still there, and that they have teaching/service needs? In an era when companies are loath to hire permanent research staff and give them proper facilities and resources (allegedly because such things do not maximize (short-term) profits and therefore dilute shareholder value), why on earth would a company want a revolving door of temporary employees that need the same resources as permanent staff but are in continual need of training and business education?
It seems to me that a more realistic approach, if you really want to encourage an industrial R&D resurgence in the US, would focus on tax and policy incentives to convince companies to invest in this stuff. Discourage ultrashort-term strategies that maximize next quarter's profits rather than ensuring long term health of the company. Give federal loan guarantees to companies that want to establish research efforts. I'm 100% certain that if the industrial R&D jobs were there, we would fill them - the problem is that US companies overall have decided that investing in physics doesn't give them a quick stock price boost. If you want to encourage more interactions between university research faculty and industry, fine. Give tax breaks for industrial consulting or university research funding by industry. (Though biomedical research shows that extremely strong coupling between researchers and their profit-motivated funding sources is not necessarily a good thing.)
Tuesday, December 15, 2009
Noise III
While Johnson-Nyquist noise is an equilibrium phenomenon, shot noise is a nonequilibrium effect, only present when there is a net current being driven through a system. Shot noise is a consequence of the fact that charge comes in discrete chunks. Remember, current noise is the mean-square fluctuations about the average current. If charge was a continuous quantity, then there wouldn't be any fluctuations - the average flow rate would completely describe the situation. However, since charge is quantized, a complete description of charge flow would instead be an itemized list of the arrival times of each electron. With such a list, a theorist could calculate not just the average current, but the fluctuations, and all of the higher statistical moments. This is called "full counting statistics", and is actually achievable under certain very special circumstances.
Schottky, about 90 years ago, worked out the expected current noise power spectral density, S_I, for the case of independent electrons traversing a single region with no scattering (as in a vacuum tube diode, for example). If the electrons are truly independent (this electron doesn't know when the last electron came through, or when the next one is going through), and there is just some arrival rate for them, then the electron arrivals are described by Poisson statistics. In this case, Schottky showed that the power spectral density of the current fluctuations <(I - <I>)²> is S_I = 2 e <I> A²/Hz. That is, the current noise is proportional to the average current, with a proportionality constant that is twice the electronic charge.
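Here is a minimal Monte Carlo sketch of that statement (not a model of any real device; the current, sampling rate, and trace length below are arbitrary): generate Poisson-distributed electron arrivals, bin them into a current-vs-time trace, and compare the resulting white noise level to 2e<I>.

```python
# Minimal Monte Carlo sketch: Poisson-distributed electron arrivals binned into a
# current trace give a white current noise power spectral density equal to 2e<I>.
# The current, sampling rate, and trace length are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
e = 1.602e-19              # electronic charge (C)
I_target = 1e-9            # desired average current: 1 nA
fs = 1.0e6                 # sampling rate (Hz)
n_bins = 2**20             # about 1 s of simulated data

# number of electrons arriving in each time bin, Poisson distributed
counts = rng.poisson(I_target / e / fs, n_bins)
current = counts * e * fs                  # current in each bin (A)

# one-sided power spectral density of the fluctuations (A^2/Hz)
dI = current - current.mean()
psd = 2 * np.abs(np.fft.rfft(dI))**2 / (fs * n_bins)

print(f"measured noise floor: {psd[1:].mean():.3e} A^2/Hz")
print(f"2 e <I>             : {2 * e * current.mean():.3e} A^2/Hz")
```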
In the general case, when electrons are not necessarily independent of each other, it is more common to write the zero temperature shot noise as S_I = 2 e F <I>, where F is called the Fano factor. One can think of F as a correction factor, but sometimes it's better to think of F as describing the effective charge of the charge carriers. For example, suppose current were carried by pairs of electrons, but the pair arrivals are Poisson distributed. This situation can come up in some experiments involving superconductors. In that case, one would find that F = 2, or you can think of the effective charge carriers as being the pairs, which have charge 2e. These deviations away from the classical Schottky result are where all the fun and interesting physics lives. For example, shot noise measurements have been used to show that the effective charge of the quasiparticles in the fractional quantum Hall regime is fractional. Shot noise can also be dramatically modified in highly quantum coherent systems. See here for a great review of all of this, and here for a more technical one.
Nanostructures are particularly relevant for shot noise measurements. It turns out that shot noise is generally suppressed (F approaches zero) in macroscopic conductors. (It's not easy to see this based on what I've said so far. Here's a handwave: the serious derivation of shot noise follows an electron at a particular energy and looks to see whether it's transmitted or reflected from some scattering region. If the electron is instead inelastically scattered with some probability into some other energy state, that's a bit like making the electrons continuous.) To see shot noise clearly, you either need a system where conduction is completely controlled by a single scattering-free region (e.g., a vacuum tube; a thin depletion region in a semiconductor structure; a tunnel barrier), or you need a system small enough and cold enough that inelastic scattering is rare.
The bottom line: shot noise is a result of current flow and the discrete nature of charge, and deviations from the classical Schottky result tell you about correlations between electrons and the quantum transmission properties of your system. Up next: 1/f noise.
Monday, December 14, 2009
Interesting times.
It's always interesting to read about your institution in the national media. It's a pretty good article that captures both sides of the Rice-Baylor question.
Sunday, December 13, 2009
Noise II
One type of electronic noise that is inescapable is Johnson-Nyquist noise. Roughly a hundred years ago, physicists studying electricity noticed that sensitive measurements showed noise (apparently random time-varying fluctuations in the voltage or current). They found that the power spectral density of (for example) the voltage noise, S_V, was larger when the system in question was more resistive, and that higher temperatures seemed to make the problem even worse. Bert Johnson, then at Bell Labs, did a very careful study of this phenomenon in 1927, and showed that this noise appeared to result from statistical fluctuations in the electron "gas". This allowed him to do systematic measurements (with different resistances at fixed temperature, and a fixed resistance at varying temperatures) and determine Boltzmann's constant (though he ends up off by ~ 10% or so). Read the original paper if you want to get a good look at how a careful experimentalist worked eighty years ago.
Very shortly thereafter, Harry Nyquist came up with a very elegant explanation for the precise magnitude of the noise. Imagine a resistor, and think of the electrons in that resistor as a gas at some temperature, T. All the time the electrons are bopping around; at one instant there might be an excess of electrons at one end of the resistor, while later there might be a deficit. This all averages out, since the resistor is overall neutral, but in an open circuit configuration these fluctuations would lead to a fluctuating voltage across the resistor. Nyquist said, imagine a 1d electromagnetic cavity (transmission line), terminated at each end by such a resistor. If the whole system is in thermal equilibrium, we can figure out the energy content of the modes (of various frequencies) of the cavity - it's the black body radiation problem that we know how to solve. Now, any energy in the cavity must come from these fluctuations in the resistors. On the other hand, since the whole system is in steady state and no energy is building up anywhere, the energy in the cavity is also being absorbed by the resistors. This is an example of what we now call the fluctuation-dissipation theorem: the fluctuations (open-circuit voltage or short-circuit current) in the circuit are proportional to how dissipative the circuit is (the resistance). Nyquist ran the numbers and found the result we now take for granted. For open-circuit voltage fluctuations, S_V = 4 k_B T R V²/Hz, independent of frequency (ignoring quantum effects). For short-circuit current fluctuations, S_I = 4 k_B T / R A²/Hz.
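To put rough numbers on this (the resistance, temperature, and bandwidth below are just representative values): a 10 kOhm resistor at room temperature, measured over a 10 kHz bandwidth, shows on the order of a microvolt of rms Johnson noise.

```python
# Back-of-the-envelope numbers for Johnson-Nyquist noise: a 10 kOhm resistor at
# room temperature, measured over a 10 kHz bandwidth. Values are just illustrative.
import numpy as np

k_B = 1.381e-23            # Boltzmann's constant (J/K)
T = 300.0                  # temperature (K)
R = 10e3                   # resistance (Ohms)
bandwidth = 10e3           # measurement bandwidth (Hz)

S_V = 4 * k_B * T * R                  # open-circuit voltage noise, V^2/Hz (white)
v_rms = np.sqrt(S_V * bandwidth)       # rms voltage fluctuation in that bandwidth

print(f"S_V   = {S_V:.2e} V^2/Hz")
print(f"v_rms = {v_rms * 1e6:.2f} microvolts")
```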
Johnson-Nyquist noise is an unavoidable consequence of thermodynamic equilibrium. It's a reason many people cool their amplifiers or measurement electronics. It can also be useful. Noise thermometry (here, for example) has become an excellent way of measuring the electronic temperature in many experiments.
Friday, December 11, 2009
Noise I
For a while now, those condensed matter physicists who think about electronic transport measurements have been interested in noise as a means of learning more about the underlying physics of their systems. I thought it would be useful to give a sense of why noise is important. First, what do we mean by noise? As you might imagine from the colloquial meaning of the term, electronic noise manifests itself as fluctuations as a function of time in either the current through a system (current noise) or the voltage difference across a system (voltage noise). These fluctuations are distributed about some mean value of current or voltage, so the smart way to characterize them is by taking the average of the square of the deviation from the mean (e.g., <(I - <I>)²>, where the angle brackets denote averaging over time, and I is the current). You can imagine that these fluctuations are distributed over all sorts of time scales - some might be fast and some might be slow. The natural thing to do is work in the frequency domain (Fourier transforming the fluctuations), and then you can worry about the power spectral density of the fluctuations. For current noise, this is usually written S_I, which has units of A²/Hz. If you evaluate S_I at a particular frequency, then that tells you the size of the mean square current fluctuations within a 1 Hz bandwidth about that frequency. There is an analogous quantity S_V [V²/Hz] for voltage noise. If the power spectral density is constant over a broad range of frequencies (up to some eventual high frequency cutoff), the noise is said to be "white". If, instead, there is a systematic trend with a larger power spectral density at low frequencies, the noise is sometimes called "pink".
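For concreteness, here is a short sketch of how one typically goes from a measured time trace to a power spectral density in practice. The "data" below are just synthetic white noise standing in for a real measurement, and scipy's Welch routine is one standard way of doing the segment averaging.

```python
# Sketch of turning a measured time trace into a power spectral density estimate.
# The "data" here are synthetic white noise standing in for a real current measurement.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
fs = 1.0e5                                             # sampling rate (Hz)
current = 1e-6 + 1e-9 * rng.standard_normal(2**18)     # 1 uA average + fluctuations (A)

# Welch's method: split the trace into overlapping segments, average their periodograms
freqs, S_I = welch(current - current.mean(), fs=fs, nperseg=4096)

# S_I[k] is the mean-square current fluctuation per 1 Hz of bandwidth near freqs[k]
idx = np.argmin(np.abs(freqs - 1e3))
print(f"S_I near 1 kHz: {S_I[idx]:.2e} A^2/Hz")
```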
In any measurement, there might be several kinds of noise that one must worry about. For example, your measuring equipment might show that the apparent S_I or S_V has several sharp peaks at particular frequencies. This is narrow band noise, and might be extrinsic, resulting from unintentional pickup. The classic examples include 60 Hz (50 Hz in many places outside the US) and its multiples, due to power lines, ~ 30 kHz from fluorescent lights, 540-1700 kHz from AM radio, 85-108 MHz from FM radio, etc. Extrinsic noise is, in physicist parlance, uninteresting, though it may be a major practical annoyance. There are sometimes intrinsic sources of narrow band noise, however, that can be very interesting indeed, since they indicate something going on inside the sample/system in question that has a very particular time scale.
There are three specific types of noise that are often of physical interest, particularly in nanostructures: thermal (Johnson-Nyquist) noise, shot ("partition") noise, and 1/f ("flicker") noise. I'll write a bit about each of these soon.
Wednesday, December 09, 2009
Fun new CD
They Might Be Giants has a new CD that my readers with kids might enjoy: Science is Real. Fun stuff. This song has long been a favorite of mine.
Tuesday, December 08, 2009
Cryogenic dark matter detection
Whether this rumor turns out to be accurate or not, the technology used in the CDMS collaboration's dark matter search is quite interesting. Working down the hall from these folks in graduate school definitely gave me an appreciation for the challenges they face, as well as teaching me some neat condensed matter physics and experimental knowledge.
The basic challenge in dark matter detection is that weakly interacting particles are, well, very weakly interacting. We have all kinds of circumstantial evidence (rotation curves of galaxies; gravitational lensing measurements of mass distributions; particular angular anisotropies in the cosmic microwave background) that there is a lot of mass out there in the universe that is not ordinary baryonic matter (that is, made from protons and neutrons). The dark matter hypothesis is that there are additional (neutral) particles out there that couple only very weakly to normal matter, certainly through gravity, and presumably through other particle physics interactions with very small cross-sections. A reasonable approach to looking for these particles would involve watching for them to recoil off the nuclei of normal matter somehow. These recoils would dump energy into the normal matter, but you'd need to distinguish between these events and all sorts of others. For example, if any atoms in your detector undergo radioactive decay, that would also dump energy into the detector material's lattice. Similarly, if a cosmic ray came in and banged around, that would deposit energy, too. Those two possibilities also deposit charge into the detector, though, so the ability to identify and discount recoil events associated with charged particles would be essential. Neutrons hitting the detector material would be much more annoying.
The CDMS detectors consist of ~ cm-thick slabs of Si (ok) and Ge (better, because Ge is heavier and therefore has more nuclear material), each with an electrical ground plane (very thin low-Z metal film) on one side and an array of meandering tungsten micro-scale wires on the other side. The tungsten meanders are "superconducting transition edge bolometers". The specially deposited tungsten films have a superconducting transition somewhere near 75 mK. By properly biasing them electrically (using "electrothermal feedback"), they sit right on the edge of their transition. If any extra thermal energy gets dumped into the meander, a section of it is driven "normal". This leads to a detectable voltage pulse. At the same time, because that section now has higher resistance, current flow through there decreases, allowing the section to cool back down and go superconducting again. By having very thin W lines, their heat capacity is very small, and this feedback process (recovery time) is fast. A nuclear recoil produces a bunch of phonons which propagate in the crystal with slightly varying sound speeds depending on direction. By having an array of such meanders and correlating their responses, it's possible to back out roughly where the recoil event took place. (They had an image on the cover of Physics Today back in the 90s some time showing beautiful ballistic phonon propagation in Si with this technique.) Moreover, there is a small DC voltage difference between the transition edge detectors and the ground plane. That means that any charge dumped into the detector will drift. By looking for current pulses, it is possible to determine which recoil events came along with charge deposition in the crystal. The CDMS folks have a bunch of these slabs attached via a cold finger to a great big dilution refrigerator (something like 4 mW cooling power at 100 mK, for those cryo experts out there) up in an old salt mine in Minnesota, and they've been measuring for several years now, trying to get good statistics.
To get a flavor for how challenging this stuff is, realize that they can't use ordinary Pb-Sn solder (which often comes pre-tinned on standard electronic components) anywhere near the detector. There's too high an abundance of a radioisotope of Pb that is produced by cosmic rays. They have to use special solder based on "galley lead", which gets its name because it comes from Roman galleys that have been sunk on the bottom of the Mediterranean for 2000 years (and thus not exposed to cosmic rays). I remember as a grad student hearing an anecdote about how they deduced that someone had screwed up and used a commercial pre-tinned LED because they could use the detector itself to see clear as day the location of a local source of events. I also remember watching the challenge of finding a wire-bonder that didn't blow up the meanders due to electrostatic discharge problems. There are competing techniques out there now, of course.
Well, it'll be interesting to see what comes out of this excitement. These are some really careful people. If they claim there's something there, they're probably right.
Tuesday, December 01, 2009
Scale and perspective
Well, my old friends at AIG now owe $25B less to the US government. For those keeping score at home, that's about 4 National Science Foundation annual budgets, or 0.8 NIH annual budgets. AIG still owes the US government an additional $62B.
Monday, November 30, 2009
Lab conditions
I disagree with this comic, though I can never get my students or our facilities people to back my idea of converting the entire lab into ultrahigh vacuum space. Sure, spacesuits would be required for lab work, but think of all the time we would save swapping samples and pumping out our evaporator.
Wednesday, November 25, 2009
Referees
In the world of scientific peer review, I think that there are three kinds of referees: those that help, those that hinder, and those that are, umm, ineffective. Referees that are ineffective do an adequate surface job, looking over papers to make sure that there are no glaring problems and that the manuscript is appropriate for the journal in question, but that's it. Referees that hinder are the annoying ones we all complain about. You know - they're the ones that send in a twelve word review for your groundbreaking submission to Science or Nature after sitting on it for 6 weeks; the review says little except "Meh." and may even indicate that they didn't really read the paper. They're the ones that say work is nice but not really original, with no evidence to back up that statement. They're the ones who sit on papers because they're working on something similar.
Referees that help are the best kind, of course. These are the people who read manuscripts carefully and write reports that end up dramatically improving the paper. They point out better ways to plot the data, or ask for clarification of a point that really does need clarification or improved presentation. They offer constructive criticism. These folks deserve our thanks. They're an important and poorly recognized component of the scientific process.
Sunday, November 22, 2009
Graphene, part II
One reason that graphene has comparatively remarkable conduction properties is its band structure, and in particular the idea that single-particle states carry a pseudospin. This sounds like jargon, and until I'd heard Philip Kim talk about this, I hadn't fully appreciated how this works. The idea is as follows. One way to think about the graphene lattice is that it consists of two triangular lattices offset from each other by one carbon-carbon bond length. If we had just one of those lattices, you could describe the single-particle electronic states as Bloch waves - these look like plane waves multiplied by functions that are spatially periodic with reference to that particular lattice. Since we have two such lattices, one way to describe each electronic state is as a linear combination of Bloch states from lattice A and lattice B. (The spatial periodicity associated with lattice A (B) is described by a set of reciprocal lattice vectors that are labeled K (K').)
Here is where things get tricky. The particular linear combinations that are the real single-particle eigenstates can be written using the same Pauli matrices that are used to describe the spin angular momentum of spin-1/2 particles. In fact, if you pick a single-particle eigenstate with a crystal momentum ħk, the correct combination of Pauli matrices to use would be the same as if you were describing a spin-1/2 particle oriented along the same direction as k. This property of the electronic states is called pseudospin. It does not correspond to a real spin in the sense of a real intrinsic angular momentum. It is, however, a compact way of keeping track of the role of the two sublattices in determining the properties of particular electronic states.
The consequences of this pseudospin description are very interesting. For example, this is related to why back-scattering is disfavored in clean graphene. In pseudospin language, a scattering event that flips the momentum of a particle from +k to -k would have to flip the pseudospin, too, and that's not easy. In non-pseudospin language, that kind of scattering would have to change the phase relationship between the A and B sublattice Bloch state components of the single-particle state. From that way of phrasing it, it's more clear (at least to me) why this is not easy - it requires rather deep changes to the whole extended wavefunction that distinguish between the different sublattices, and in a clean sample at T = 0, that shouldn't happen.
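If you like seeing this with actual matrices, here is a small numerical sketch using the standard low-energy (Dirac) two-band description near one of the special points, H(k) = ħ v_F (σ_x k_x + σ_y k_y) acting on the (A, B) sublattice amplitudes; the numbers are arbitrary and this is just the textbook toy version, not anything specific to a particular sample. It shows that the conduction-band eigenvector is the spinor of a spin-1/2 aligned with k, and that the +k and -k states in the same band are orthogonal, which is the no-backscattering statement above.

```python
# Sketch using the standard two-band Dirac Hamiltonian near a special point:
# H(k) = hbar*v_F*(sigma_x k_x + sigma_y k_y), acting on the (A, B) sublattice
# amplitudes. Units and the choice of k are arbitrary (hbar*v_F set to 1).
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def conduction_state(kx, ky):
    """Positive-energy (conduction band) eigenvector of H(k) = sigma . k."""
    H = kx * sigma_x + ky * sigma_y
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, np.argmax(vals)]

theta = 0.7                                   # direction of k (radians), arbitrary
k = np.array([np.cos(theta), np.sin(theta)])
psi_fwd = conduction_state(*k)                # state at +k
psi_back = conduction_state(*(-k))            # state at -k, same band

# the pseudospin expectation value <sigma> points along k for the +k state
pseudospin = [np.real(psi_fwd.conj() @ s @ psi_fwd) for s in (sigma_x, sigma_y)]
print("k direction:          ", np.round(k, 3))
print("<pseudospin> (x, y)   :", np.round(pseudospin, 3))

# and the +k and -k conduction-band states are orthogonal: backscattering requires
# flipping the pseudospin, i.e., something that tells the two sublattices apart
print("|<psi(-k)|psi(+k)>|   :", round(abs(psi_back.conj() @ psi_fwd), 12))
```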
A good overview of this stuff can be found here (pdf) in this article from Physics Today, as well as this review article. Finally, Michael Fuhrer at the University of Maryland has a nice powerpoint slide show (here) that discusses how to think about the pseudospin. He does a much more thorough and informative job than I do here.
Wednesday, November 18, 2009
Not even wrong.
No, this is not a reference to Peter Woit's blog. Rather, it's my reaction to reading this and the other pages at that domain. Wow. Some audiophiles must really be gullible.
Monday, November 16, 2009
Graphene, part I
Graphene is one of the hottest materials out there right now in condensed matter physics, and I'm trying to figure out what tactic to take in making some blog postings about it. One good place to start is the remarkably fast rise in the popularity of graphene. Why did it catch on so quickly? As far as I can tell, there are several reasons.
- Graphene has a comparatively simple electronic structure. It's a single sheet of hexagonally arranged carbon atoms. The well-defined geometry makes it extremely amenable to simple calculational techniques, and the basic single-particle band structure (where we ignore the fact that electrons repel each other) was calculated decades ago.
- That electronic structure is actually pretty interesting, for three reasons. Remember that a spatially periodic arrangement of atoms "picks out" special values of the electron (crystal) momentum. In some sense, electrons with just the right (effective) wavelength (corresponding to particular momenta) diffract off the lattice. You can think of the hexagonal graphene lattice as a superposition of two identical sublattices off-set by one carbon-carbon bond length. So, the first interesting feature is that there are two sets of momenta ("sets of points in reciprocal space") that are special - picked out by the lattice, inequivalent (since the two sublattices really are distinct) but otherwise identical (since it's semantics to say which sublattice is primary and which is secondary). This is called "valley degeneracy", and while it crops up in other materials, the lattice symmetry of graphene ends up giving it added significance. Second, when you count electrons and try filling up the allowed electronic states starting at the lowest energy, you find that there are exactly two highest energy filled spatial states, one at each of the two lowest-momentum inequivalent momentum points. All lower energy states are filled; all higher energy states are empty. That means that graphene is exactly at the border between being a metal (many many states forming the "Fermi surface" between filled and empty states) and a semiconductor (filled states and empty states separated by a "gap" of energies for which there are no allowed electronic states). Third and most importantly, the energy of the allowed states near those Fermi points varies linearly with (crystal) momentum, much like the case of an ultrarelativistic classical particle, rather than quadratically as usual. So, graphene is in some ways a playground for thinking about two-dimensional relativistic Fermi gases. (A small numerical sketch of this band structure and its linear dispersion appears just after this list.)
- The material is comparatively easy to get and make. That means it's accessible, while other high quality two-dimensional electron systems (e.g., at a GaAs/AlGaAs interface) require sophisticated crystal growth techniques.
- There is a whole literature of 2d electron physics in Si and GaAs/AlGaAs, which means there is a laundry list of techniques and experiments just waiting to be applied, in a system that theorists can actually calculate.
- Moreover, graphene band structure and materials issues are close to those of nanotubes, meaning that there's another whole community of people ready to apply what they've learned.
- Graphene may actually be useful for technologies!
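Here is the small numerical sketch promised above (third bullet): the standard nearest-neighbor tight-binding band of graphene, E(k) = ± t |1 + exp(i k·a1) + exp(i k·a2)|. The hopping energy and bond length below are typical literature numbers, used purely for illustration; the bands touch at the zone corner K, and the slope of E versus |k - K| approaches the expected (3/2) t a_CC.

```python
# Sketch of the nearest-neighbor tight-binding band of graphene,
# E(k) = +/- t * |1 + exp(i k.a1) + exp(i k.a2)|, showing the band touching and the
# linear dispersion at a zone corner K. t and the bond length are typical literature
# values, used here purely for illustration.
import numpy as np

t = 2.8                     # hopping energy (eV)
a_cc = 1.42                 # carbon-carbon bond length (Angstrom)
a0 = np.sqrt(3) * a_cc      # lattice constant of the underlying triangular lattice
a1 = a0 * np.array([1.0, 0.0])
a2 = a0 * np.array([0.5, np.sqrt(3) / 2])

def energy(k):
    """Conduction (positive) branch of the two tight-binding bands at wavevector k."""
    f = 1 + np.exp(1j * (k @ a1)) + np.exp(1j * (k @ a2))
    return t * np.abs(f)

# one of the two inequivalent zone corners for this choice of lattice vectors
K = np.array([2 * np.pi / (3 * a0), -2 * np.pi / (np.sqrt(3) * a0)])
print(f"E(K) = {energy(K):.1e} eV (conduction and valence bands touch here)")

# moving away from K, the energy grows linearly with |dk|; slope hbar*v_F = (3/2)*t*a_cc
direction = np.array([1.0, 0.0])
for dk in (1e-3, 1e-2, 1e-1):                      # in 1/Angstrom
    slope = energy(K + dk * direction) / dk
    print(f"|dk| = {dk:.0e} 1/Angstrom: E/|dk| = {slope:.3f} eV*Angstrom")
print(f"expected slope (3/2)*t*a_cc = {1.5 * t * a_cc:.3f} eV*Angstrom")
```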
Tuesday, November 10, 2009
Philip Kim visits Rice; I visit MSU.
Philip Kim visited Rice last week as one of our nanoscience-themed Chapman Lecturers, and it was great fun to talk science with him. He gave two talks, the first a public lecture about graphene and the second a physics colloquium at a more technical level about how electrons in graphene act, in many ways, like ultrarelativistic particles. It was in this second talk that he gave the first truly clear explanation I've ever heard of the microscopic origin of the "pseudospin" description of carriers in graphene and what it means physically. It got me thinking hard about the physics, that's for sure.
In the mean time, I spent yesterday visiting the Department of Physics and Astronomy at Michigan State. They have a very good, enthusiastic condensed matter group there, with three hires in the last couple of years. It was very educational for me, particularly learning about some of the experimental techniques that are being developed and used there. Anyone who can measure resistances of 10^-8 ohms to parts in 10^5 gets respect! Thanks to everyone who made the visit so nice.
Thursday, November 05, 2009
Inspirational speech
I can't recall if I've posted this before. If you're feeling down (e.g., because just about every story in the news today is horrifying to some degree), this might cheer you up.
Wednesday, November 04, 2009
³He
The lighter helium isotope, ³He, is not something that most people have ever heard of. ³He is one neutron shy of the typical helium atom, and is present at a level of around 13 atoms per 10 million atoms of regular helium. Every now and then there is some discussion out there in the sci-fi/futurist part of the world that we should mine the moon for ³He as a potential fuel for fusion reactors. However, it turns out that ³He has uses that are much more down to earth.
For example, in its pure form it can be used as the working fluid in an evaporative refrigerator. Just as you cool off your tea by blowing across the top and allowing the most energetic water molecules to be carried away, it is possible to cool liquid helium by pumping away the gas above it. In the case of regular ⁴He, the lowest temperature that you can reach this way ends up being about 1.1 K. (Remember, helium is special in that at low pressures in bulk it remains a liquid all the way down as far as you care to go.) This limit happens because the vapor pressure of ⁴He drops exponentially at very low temperatures - it doesn't matter how big a vacuum pump you have; you simply can't pull any more gas molecules away. In contrast, ³He is lighter, as well as being a fermion (and thus obeying different quantum statistics than its heavier sibling). This difference in properties means that it can get down to more like 0.26 K before its vapor pressure is so low that further pumping is useless. (You don't throw away the pumped ³He. You recycle it.) This is the principle behind the ³He refrigerator.
You can do even better than that. If you cool a mixture of ³He and ⁴He down well below 1 K, it will spontaneously separate into a ³He-rich phase (the concentrated phase, nearly pure), and a dilute phase of 6% ³He dissolved in 94% ⁴He. At these temperatures the ⁴He is a superfluid, meaning that in many ways it acts like vacuum as far as the ³He atoms are concerned. If you pump away the (nearly pure ³He) gas above the dilute phase, more ³He atoms are pulled out of the concentrated phase and into the dilute phase to maintain the 6% solubility. This lets you evaporatively cool the concentrated phase much further, all the way down to milliKelvin temperatures. (The trick is to run this in closed-cycle, so that the ³He atoms eventually end up back in the concentrated phase.) This is the principle behind the dilution refrigerator, or "dil fridge".
Unfortunately, right now there is a major shortage of ³He. Its price has shot up by something like a factor of 20 in the last year, and it's hard to get any at all. This is a huge problem for a large number of (mostly) condensed matter physicists, as reported in the October issue of Physics Today (reprinted here (pdf)). The reasons are complicated, but the proximate causes are an increase in demand (it's great for neutron detectors, which are handy if you're looking for nuclear weapons) and a decrease in supply (it comes from decay of tritium, mostly from triggers for nuclear warheads). There are ways to fix this issue, but it will take time and cost money. In the meantime, my sympathies go out to experimentalists who have spent their startups on fridges that they can't get running.
Thursday, October 29, 2009
The unreasonable effectiveness of a toy model
As I've mentioned before, often theoretical physicists like to use "toy models" - mathematical representations of physical systems that are knowingly extremely simple, but are thought to contain the essential physics ingredients of interest. One example of this that I've always found particularly impressive also happens to be closely related to my graduate work. Undergraduate physicists that take a solid state class or a statistical physics class are usually taught about the Debye theory of heat capacity. The Debye model counts up the allowed vibrational modes in a solid, and assumes that each one acts like an independent (quantum) harmonic oscillator. It ends up predicting that the heat capacity of crystalline (insulating) solids should scale like T³ at low temperatures, independent of the details of the material, and this does seem to be a very good description of those systems. Likewise, undergrads learn about Bloch waves and the single-particle picture of electrons in crystalline solids, which ends up predicting the existence of energy bands. What most undergrads are not taught, however, is how to think about the vast majority of other solids, which are not perfect single crystals. Glass, for example.
You might imagine that all such messy, disordered materials would be very different - after all, there's no obvious reason why glass (e.g., amorphous SiO2) should have anything in common with a disordered polymer (e.g., photoresist). They're very different systems. Yet, amazingly, many, many disordered insulators do share common low temperature properties, including heat capacities that scale roughly like T^1.1, thermal conductivities that scale roughly like T^1.8, and particular temperature dependences of the speed of sound and the dielectric function. To give you a flavor for how weird this is, think about a piece of crystalline quartz. If you cool it down you'll find a heat capacity and a thermal conductivity that both obey the Debye expectations, varying like T^3. If you take that quartz, warm it up, melt it, and then cool it rapidly so that it forms a glass, and then remeasure the low temperature properties, you'll find the glassy power laws (!), and the heat capacity at 10 mK could be 500 times what it was when the material was a crystal (!!), and you haven't even broken any chemical bonds (!!!).
Back in the early 1970s, Anderson, Halperin, and Varma postulated a toy model to try and tackle this mysterious universality of disordered materials. They assumed that, regardless of the details of the disorder, there must be lots of local, low-energy excitations in the material to give the increased heat capacity. Further, since they didn't know the details, they assumed that these excitations could be approximated as two-level systems (TLSs), with an energy difference between the two levels that could range from zero up to some high energy cutoff with equal probability. Such a distribution of splittings naturally gives you a heat capacity that goes like T^1. Moreover, if you assume that these TLSs have some dipole-like coupling to phonons, you find a thermal conductivity that scales like T^2. A few additional assumptions give you a pretty accurate description of the sound speed and dielectric function as well. This is pretty damned amazing, and it seems to be a remarkably good description of a huge class of materials, ranging from real glasses to polycrystalline materials to polymers.
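If you want to see where the linear-in-T heat capacity comes from, here's a quick numerical check (my own sketch, not from the AHV paper): take the textbook Schottky heat capacity of a single two-level system and average it over a flat distribution of splittings. The effective power law comes out very close to 1.

```python
import numpy as np

def schottky_C(E, T):
    """Heat capacity (units of kB) of one two-level system with splitting E."""
    x = E / T
    return x**2 * np.exp(-x) / (1.0 + np.exp(-x))**2

E_max = 100.0                              # high-energy cutoff (arbitrary units)
E = np.linspace(1e-4, E_max, 200000)       # flat distribution of splittings
dE = E[1] - E[0]

Ts = np.array([0.01, 0.02, 0.05, 0.1, 0.2])
C = np.array([np.sum(schottky_C(E, T)) * dE / E_max for T in Ts])

slope = np.polyfit(np.log(Ts), np.log(C), 1)[0]
print("effective power law exponent:", round(slope, 3))   # comes out ~1.0
```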
The big mystery is, why is this toy model so good?! Tony Leggett and Clare Yu worked on this back in the late 1980s, suggesting that perhaps it didn't matter what complicated microscopic degrees of freedom you started with. Perhaps somehow when interactions between those degrees of freedom are accounted for, the final spectrum of (collective) excitations that results looks like the universal AHV result. I did experiments as a grad student that seemed consistent with these ideas. Most recently, I saw this paper on the arxiv, in which Moshe Schechter and P. C. E. Stamp summarize the situation and seem to have made some very nice progress on these ideas, complete with some predictions that ought to be testable. This kind of emergence of universality is pretty cool.
By the way, in case you were wondering, TLSs are also a major concern to the folks trying to do quantum computing, since they can lead to noise and decoherence, but that's a topic for another time....
Thursday, October 22, 2009
String theory (!) and "bad metals"
I saw a remarkable talk today by Hong Liu from MIT, about quantum gravity and what it has to say about high temperature superconductivity. Yes, you read that correctly. It was (at least for a nonexpert) a reasonably accessible look at a genuinely useful physics result to come from string theory. I doubt I can do it justice, so I'll just give the bare-bones idea. Within string theory, Maldacena (and others following) showed that there is a duality (that is, a precise mathematical correspondence) between some [quantum theories of gravity in some volume of d+1 dimensions] and some [quantum field theories w/o gravity on the d-dimensional boundary of that volume]. This sounds esoteric - what could it be good for? Well, we know what we think the classical limit of quantum gravity should be: Einstein's general relativity, and we know a decent number of solutions to the Einstein equations. The duality means that it is possible to take what could be a very painful interacting many-body quantum mechanics problem (say, the quantum field theory approach to dealing with a large number of interacting electrons), and instead of solving it directly, we could convert it into a (mathematically equivalent) general relativity problem that might be much simpler with a known solution. People have already used this approach to make predictions about the strongly-interacting quark-gluon plasma produced at RHIC, for example.
I'd known about this basic idea, but I always assumed that it would be of very limited utility in general. After all, there are a whole lot of possible hard many-body problems in solid state physics, and it seemed like we'd have to be very lucky for the duals of those problems to turn out to be easy to find or solve. Well, perhaps I was wrong. Prof. Liu showed an example (or at least the results), in which a particular general relativity solution (an extremal charged black hole) turns out to give deep insights into a long-standing issue in the strongly-correlated electron community. Some conducting materials are said to be "bad metals". While they conduct electricity moderately well, and their conductivity improves as temperature goes down (one definition of metal), the way that the conductivity improves is weird. Copper, a good metal, has an electrical resistance that scales like T^2 at low temperatures. This is well understood, and is a consequence of the fact that the low-energy excitations of the electrons in copper act basically like noninteracting electrons. A bad metal, in contrast, has a resistance that scales like T, which implies that the low energy excitations in the bad metal are very complex, rather than electron-like. Well, looking at the dual to the extremal black hole problem actually seems to explain the properties of this funny metallic state. A version of Prof. Liu's talk is online at the KITP. Wild stuff! It's amazing to me that we're so fortunate that this particular correspondence exists.
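In case the "scales like T^2 vs. T" language seems abstract, here's a trivial sketch with made-up resistance data (not from the talk) showing how one reads the exponent off the logarithmic slope.

```python
import numpy as np

# Made-up data, just to illustrate reading off a resistivity power law.
T = np.linspace(10, 100, 50)        # kelvin
R_good = 1e-3 * T**2                 # Fermi-liquid-like "good metal"
R_bad  = 5e-2 * T                    # T-linear "bad metal"

for name, R in [("good metal", R_good), ("bad metal", R_bad)]:
    exponent = np.polyfit(np.log(T), np.log(R), 1)[0]   # d(ln R)/d(ln T)
    print(f"{name}: exponent ~ {exponent:.2f}")
```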
Tuesday, October 20, 2009
Climate change talk
This afternoon we were fortunate enough to have our annual Rorschach Lecture, delivered by Ralph Cicerone, president of the US National Academy of Sciences. The subject was climate change and its interaction with energy policy, and unsurprisingly to anyone who isn't willfully ignorant, this was a scary talk. The atmospheric CO2 data, the satellite-based measurements of accelerating Greenland and Antarctic ice loss, the amazing pace at which China is building coal-fired power plants (roughly 1 GW of electric generating capacity from coal coming on line every 10 days), are all very sobering. The planet doesn't care, of course, but it sure looks like the human species had better get its act together, and the only way that's going to happen is if we come up with an energy approach that is cheap compared to coal (that includes the possibility of making coal more expensive, of course, but how do you persuade China and India not to burn their cheap, abundant coal?).
Friday, October 16, 2009
Ahh, Air China
Posting from International Check-in at Beijing International Airport....
I was actually supposed to get home last night, but Air China had other plans. At least I have quite the story out of it. I'd originally booked a 2 hour 45 min layover in Beijing, figuring that would be plenty of time. However, our Hangzhou-Beijing flight was delayed 2 hours. Then, the pilot made two go-arounds at Beijing, very bumpy (cue the airsick bags and retching noises from fellow passengers), each time getting w/in about 30 feet of the ground, before giving up (due to high winds, I guess), and we diverted to Tianjin. In Tianjin they kept us on the plane on the tarmac out at the end of their runway for close to 4 hours. At least the AC worked there. They ran out of water, and then orange juice. Finally, they refueled and flew the plane back to Beijing, arriving only 8 hours late. At least I wasn't alone (two other Americans on the flight in the same situation as me), and Air China did, after some convincing, spring for a hotel for the night.
Clearly the simplest possible explanation for this is that I'm destined to make some universe-shattering discovery in the future, the echoes of which are rippling backward in time to try to prevent my return to the US.
Monday, October 12, 2009
Conference observations so far
This is a nice gathering of people, and the organizers have done a very good job. More discussion would be nice - the program is very dense. A few (not very serious) observations:
- I used to think that I was the only condensed matter physicist not working on graphene. Now I realize I'm the only condensed matter physicist not working on graphene, iron pnictide superconductors, or topological insulators.
- Chinese ring tones are different than US or European ringtones.
- One speaker inadvertently stumbled on a great, subtle psychological trick: he used a font for most of his talk that is identical to the font (some Helvetica variant) used by the Nature publishing group for their titles and subtitles. That font makes everything seem important :-). He blew this aura of profundity at the end, though, by switching to Comic Sans.
- The Chinese groups that have been charging ahead on the iron pnictides must have enormous resources in terms of people and equipment - the rate at which they are cranking out material and data is remarkable. US materials growers seem very undersupported by comparison.
- Laser-based angle-resolved photoemission, in its appropriate regime, is damned impressive.
Friday, October 09, 2009
In China this week
I'm off tomorrow for a week-long trip to China, to go to this workshop. I've never been to China before, so this should be an interesting experience! I may try to blog a little, but I don't know how internet access will work during the conference. Hopefully the trip will go more smoothly than the travel arrangements beforehand. If I ever hear Expedia's "on hold" music again, I may snap.
Update: The trip in was long but problem-free. Blogger access only works through VPN, thanks to the Great Firewall....
Tuesday, October 06, 2009
Fiber and CCDs
As you've all no doubt read by now, the 2009 Nobel in Physics was awarded to Charles K. Kao, for the development of truly low loss fiber optics (a technology that you're all using right now, unless the internet backbone in your country consists of smoke signals or semaphore flags), and Willard Boyle + George Smith for the invention of the CCD (charge-coupled device, which is the basis for all digital cameras, and has revolutionized spectroscopy).
The CCD portion makes a tremendous amount of sense. CCDs work by using local gates on a doped semiconductor wafer to capture charge generated by the absorption of light. The charge is then shifted to an amplifier and the resulting voltage pulses are converted into a digital signal that can be interpreted by a computer. The description given in the supporting document (pdf) on the Nobel website is very good. CCDs have revolutionized astronomy and spectroscopy as well as photography, and the physics that must be understood and controlled in order to get these things to work well is quite rich (not just the charge generation process, but the solid state physics of screening, transport, and carrier trapping).
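Here's a cartoon of the readout idea in a few lines of code (a caricature, not any real device's clocking scheme): the charge packets sit in a row of wells, and each clock cycle shifts every packet one well toward the sense node, which converts it to a voltage pulse.

```python
import numpy as np

# Cartoon of CCD readout: photo-generated electrons sit in a row of potential
# wells; each clock cycle shifts every charge packet one well toward the sense
# node, where it becomes a voltage pulse. The conversion gain is a made-up number.
rng = np.random.default_rng(0)
wells = rng.poisson(lam=[50, 200, 1000, 80, 300]).astype(float)  # electrons per pixel

gain_uV_per_e = 2.0                                   # assumed charge-to-voltage gain
pulses = []
for _ in range(len(wells)):
    pulses.append(wells[-1] * gain_uV_per_e)          # last packet hits the amplifier
    wells = np.concatenate(([0.0], wells[:-1]))       # clock: shift everything over

print("voltage pulses (microvolts):", [round(p, 1) for p in pulses])
```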
The fiber optic portion is more tricky, since many people have worked on the development of fiber optic communications. Still, Kao had the insight that the real limitation on light propagation in fiber came from particular types of impurities, understood the physics of those impurities, guided a program toward clean material, and had the vision to see where this could all lead.
Certainly there will be grumbling from some that these are <sneer>engineering</sneer> accomplishments rather than essential physics, as if having a practical impact with your science that leads to technology and helps society is somehow dirty, second-rate, or a sign of intellectual inferiority. That is a terrible attitude, and I'm not just saying that because my bachelor's degree is in engineering. Trust me: some engineers have just as much raw intellectual horsepower as high energy theoretical physicists. Finding intellectual fulfillment in engineering is not some corruption of pure science - it's just how some very smart people prefer to spend their time. Oh, by the way, the actual will of Alfred Nobel refers to accomplishments that "shall have conferred the greatest benefit on mankind", and specifically mentions "the person who shall have made the most important discovery or invention [my emphasis] within the field of physics".
Finally, this provides yet another data point on just how transformative Bell Labs (and other remarkable industrial R&D labs, including IBM, GE, and others) really was in the physical sciences. The withering of long-term industrial research will be felt for a long, long time to come.
Monday, October 05, 2009
Single atoms in semiconductors
One last post before the obligatory Nobel post tomorrow.
Recently, there has been progress in examining the electronic transport properties of individual dopant atoms in semiconductors. There are several motivations for this. First and probably foremost, with increasing miniaturization we are rapidly approaching the limit when the active channel in semiconductor devices will contain, statistically, only a small number of dopants; it makes sense to figure out how these systems work and whether they have any intrinsically useful properties. Second, these systems are the ultimate small-size limit of quantum dots, even smaller than single-molecule transistors. Third, since the host materials are extremely well-studied, and quantum chemistry calculations can handle the relevant volumes of material, there is the possibility of realistic, detailed theoretical treatments. This paper is a great example of treating an individual phosphorus donor in Si as a quantum dot. This other paper looks at a single arsenic donor, and can see Kondo physics involving the unpaired electron on the donor site interacting with the (valley degenerate) Si conduction electrons. Very cool stuff!
Tuesday, September 29, 2009
The return of the embarrassing news story.
As mentioned previously, the news story about NSF upper level staff surfing for porn while on the job is back. This would be funny if it weren't so pathetic and sad. Obviously this is inappropriate behavior, and NSF clearly needs to get their IT staff up to snuff, since it's certainly possible in a corporate environment to detect and stop this kind of activity. Still, it seems unfair to single out NSF like this. I'd be surprised if this didn't go on in all large, computer-heavy organizations at some rate.
First principles vs. toy models
One of the hot topics at the workshop I attended was the proper role of "first principles" calculations in trying to understand electronic conduction at the atomic and molecular scale. In this business, there tend to be two approaches. The first, which I call for lack of a better term the "toy model" paradigm, constructs models that are highly idealized and minimalistic, and you hope that they contain the essential physics needed to describe real systems. An example of such a model would be the single-level Anderson-Holstein model of transport through a molecule. Instead of worrying about all of the detailed electronic levels of a molecule and the many-electron physics there, you would concentrate on a single electronic level that can either be empty, singly occupied, or doubly occupied. Instead of worrying about the detailed band structure of the electrodes, you would treat them as ideal electronic reservoirs, and there would be some couplings that allow electrons to hop between the level and the reservoirs. Instead of considering all of the possible molecular vibrations, you would assume a single characteristic vibrational mode that "lives" on the molecule, and there would be some additional energy cost for having that vibration excited while there is an electron occupying the level. While this sounds complicated, it is still a comparatively idealized situation that can be described by a handful of characteristic energies, and it contains rich physics.
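To make "a handful of characteristic energies" concrete, here's a sketch of an even more stripped-down toy model than Anderson-Holstein (no vibration, no charging energy at all): coherent transport through a single non-interacting level, set entirely by the level position and its couplings to the two leads. All the numbers are illustrative.

```python
import numpy as np
from scipy.special import expit   # numerically safe logistic function

# Sketch of the simplest resonant-level toy model: one non-interacting level at
# eps0, broadened by couplings GammaL and GammaR to ideal leads. Parameters in eV
# are illustrative only.
G0 = 7.748e-5          # conductance quantum 2e^2/h, in siemens
kB = 8.617e-5          # Boltzmann constant, eV/K

def current(V, eps0=0.05, GammaL=0.005, GammaR=0.005, T=4.2):
    """Landauer current (amps) through a Breit-Wigner level at bias V (volts)."""
    E = np.linspace(-0.5, 0.5, 20001)                           # energies in eV
    Gamma = GammaL + GammaR
    trans = GammaL * GammaR / ((E - eps0)**2 + (Gamma / 2)**2)  # transmission
    fL = expit(-(E - V / 2) / (kB * T))                         # Fermi functions
    fR = expit(-(E + V / 2) / (kB * T))
    return G0 * np.sum(trans * (fL - fR)) * (E[1] - E[0])

for V in [0.02, 0.05, 0.08, 0.12]:
    print(f"V = {V:5.2f} V   I = {current(V) * 1e9:8.2f} nA")
# The current switches on once the bias window (+/- V/2) reaches the level at
# eps0 = 50 meV, i.e. near V = 0.1 V: a step in I(V), a peak in dI/dV.
```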
On the other hand, one can consider trying to model a specific molecule in detail, worrying about the precise electronic and vibrational levels appropriate for exactly that molecule bonded in a particular configuration to a specific kind of metal electrode surface. While this sounds in some ways like it's what you "really" ought to do, this "first principles" approach is fraught with challenges. For example, just solving for the electronic levels of the molecule and their relative alignment with the electronic levels in the electrodes is extremely difficult in general. While there are impressive techniques that can work well in certain situations (e.g., density functional theory), very often the circumstances where those methods work best (quasi-equilibrium, far away from resonances, in situations where electron correlation effects are minimal) are not the most interesting ones.
It's interesting to watch the gradual convergence of these approaches. As computing power grows and increasingly sophisticated treatments are developed, it looks like first-principles calculations are getting better. One direction that seems popular now, as our condensed matter seminar speaker yesterday pointed out, is using such calculations as guidelines for correctly estimating the parameters that should be fed into the essential physics toy models. Interesting times are on the horizon.
Friday, September 25, 2009
AAAS and advertising
I've received three pieces of fundraising advertising from AAAS in the last two days via US Mail. This makes me wonder about a few things. First, in this day and age, why can't they get a mailing database set up that can tell that Douglas Natelson and Dr. Douglas Natelson at the same address are actually the same person? Second, do they really think that I pay a lot of attention to bulk-mailed fundraising appeals? Third, how much money are they spending, how much energy is consumed, and how much pollution is generated in sending out these tree-killing mailings, when they claim to be environmentally conscious and already have my email address as a subscriber to Science? Fourth, this many appeals in one week smacks of desperation - is there something we should know?
Tuesday, September 22, 2009
Curve fitting
Very often in experimental physics, we're interested in comparing some data to a physical model that may involve a number of unknown parameters, and we want to find the set of parameters that gives the best fit. Typically "best fit" means minimizing a "cost" function, often the sum of the squares of the deviations between the model and the data. The challenge is that many models can be very complicated, with nonlinear dependences on the parameters. This often means that finding the optimal parameters can be very difficult - the cost function in parameter-space can have lots of shallow, local minima, for example. The cost function may also be extremely sensitive to some parameters (the "stiff" ones) and comparatively insensitive to others (the "sloppy" ones). In arxiv:0909.3884, James Sethna and Cornell colleagues take a look at this dilemma using the tools of differential geometry, and they propose an improvement to standard techniques based on geodesics on the relevant hypersurface in parameter space. This looks really cool (if mathematical!), and I wish they'd included an example of an actual minimization problem that they'd done with this (instead of leaving it for an "in preparation" reference). Any prospect for real improvements in nonlinear fitting is exciting.
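To be clear about what the basic problem looks like in practice (this example is mine, not theirs): fitting even a humble sum of two exponentials, a classic "sloppy" model, with garden-variety least squares can land in different local minima depending on the starting guess, which is why one often restarts the fit from many initial points and keeps the lowest cost.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit synthetic two-exponential data by restarting standard least squares from
# several random initial guesses and keeping the lowest-cost result.
rng = np.random.default_rng(1)
t = np.linspace(0, 5, 200)
true = (1.0, 1.3, 0.5, 4.0)                      # A1, k1, A2, k2
y = (true[0] * np.exp(-true[1] * t) + true[2] * np.exp(-true[3] * t)
     + 0.01 * rng.normal(size=t.size))

def residuals(p):
    A1, k1, A2, k2 = p
    return A1 * np.exp(-k1 * t) + A2 * np.exp(-k2 * t) - y

best = None
for _ in range(20):
    p0 = rng.uniform(0.1, 5.0, size=4)           # random starting guess
    fit = least_squares(residuals, p0, bounds=(0, 10))
    if best is None or fit.cost < best.cost:
        best = fit

print("best-fit parameters:", np.round(best.x, 2))
print("cost:", best.cost)
```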
Friday, September 18, 2009
Ahh, KLM.
Stuck in Schiphol, forced to fly back to Houston via BRE-AMS-DET-IAH, since mechanical difficulties cancelled my early BRE-AMS flight (thus causing me to miss my AMS-IAH direct flight). The other AMS-IAH direct flight on their schedule is really just a psychological torture device, since it's really a charter that's 100% business class and un-bookable except as a cash purchase (which would set me back $4K on top of everything I've already paid).
Could be worse. There was another guy on the original BRE-AMS flight that got involuntarily rebooked through Paris. After hanging out at the Bremen airport for four hours, he got to have his BRE-Paris flight also cancelled due to mechanical difficulties.
At least the workshop was extremely good.
Monday, September 14, 2009
Draconian ISP.
The ISP (netopsie) for my hotel here in Bremen, Germany has apparently decided to block access to all "blogspot.com" domains. If I try to view my blog, I get redirected to a page that says "Banned Site. You are seeing this error because what you attempted to access appears to contain, or is labeled as containing, material that has been deemed inappropriate." Ironically, I can post new entries since that is done from a blogger.com page. I can't view the blog, however, or see comments. Idiots. Makes me wonder what they find objectionable on blogs in particular, or whether they are complete puritans and block lots of stuff.
Tuesday, September 08, 2009
This week in cond-mat
Three quick blurbs from the arxiv this week. I'm going to a workshop in Germany next week and have a bunch to do in the meantime, so blogging will likely be light.
arxiv:0909.0628 - Bocquet and Charlaix, Nanofluidics, from bulk to interfaces
This paper is an outstanding overview of fluids confined to the nanoscale. I will definitely be referring to this the next time I teach my graduate course that touches on this topic. Two of the central questions that come up when thinking about fluids at the nanoscale are, when do large-scale assumptions about hydrodynamics (e.g., that fluid right at the walls of a container is at rest relative to the walls, even when the fluid away from the walls is flowing - the so-called "no slip" boundary condition) break down, and when does the continuum picture of the fluid (i.e., that fluid may be modeled as a homogeneous medium with some density, rather than a collection of strongly coupled particles) fall apart? This article looks at these issues in detail, with many useful references.
arxiv:0909.0951 - Saikin et al., On the chemical bonding effects in the Raman response: Benzenethiol adsorbed on silver clusters
This one is of interest to me because of its relevance to some of the research done in my group. Raman scattering is inelastic light scattering, where light can lose (or gain) energy to a molecule by exciting (or de-exciting) molecular vibrations. It's been known for more than 30 years that the Raman scattering process can be greatly (many orders of magnitude) enhanced on nanostructured metal surfaces. This happens for two reasons. First, nanostructured metals support local plasmon modes, so that the metal acts like a little optical antenna, helping the molecule to "receive" (and "transmit") light. This is called electromagnetic enhancement. Second, there can be additional enhancing effects due to resonances involving charge transfer between the molecule and the nearby metal. This latter effect is called chemical enhancement, and this paper takes a detailed look at how this can arise, considering specific configurations of molecules on Ag clusters. It is very challenging to do calculations like this and get realistic results!
arxiv:0909.1205 - Martineau et al, High crystalline quality single crystal CVD diamond
I picked this one because (a) the fact that it is possible to grow high quality single crystal diamond by chemical vapor deposition is just plain cool, as well as of great technological potential; and (b) the x-ray topographs in this paper showing crystallographic defects in the crystals are very pretty.
Thursday, September 03, 2009
If you're reading this, you're probably pretty net-savvy.
Perhaps this feature has always been available, but I just noticed the other night that Google Analytics can tell me stats about what kind of web browsers people use to access this page. Far and away the number one browser used was Firefox (57%), followed by Safari (16%), Internet Explorer (15%), and Chrome (8%). Interestingly, the breakdown for those accessing my group webpage was quite different, with IE having more like 30% of the total. Very educational. No one using lynx, though.
Monday, August 31, 2009
Two nanoscience tidbits
Since nearly everyone else in the science blogging world has touched on this (see here, here, here, here, and here, to name a few), I might as well do so, also. Leo Gross and coworkers at IBM Zurich have used an atomic force microscope to do something incredibly impressive: They have been able to image the bonding orbitals in individual pentacene molecules with better than atomic resolution, using the very short-range forces that contribute to the "hard core repulsion" between atoms. Atoms tend to be attracted to each other on nanometer scales, even in the absence of strong chemical interactions, due to the van der Waals interaction, which comes from the fluctuating motion of their electron clouds. At very short distances, though, atoms effectively repel each other extremely strongly, both from the Coulomb interaction (electrons don't like each other overly much) and the effects of the Pauli Exclusion Principle. Gross and colleagues accomplished this feat by working in ultrahigh vacuum (around 10^-15 of atmospheric pressure) and at 5 K, and by deliberately attaching a single CO molecule to their conducting atomic force microscope (AFM) tip. It's a heck of a technical achievement for AFM. Atomic resolution has been demonstrated before, but this kind of sensitivity is remarkable. (FWIW, I once heard one of the major coauthors, Gerhard Meyer, speak at a meeting about the same group's ultrahigh resolution STM work. He seemed very low key about their obviously impressive achievements - amazingly so. I hope he got excited about this!)
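For a rough feel for the forces involved, here's a generic Lennard-Jones sketch (generic parameters, nothing to do with the actual CO-tip/pentacene system): the 1/r^6 van der Waals attraction at larger separations, and the ferociously steep repulsion once the electron clouds start to overlap, which is the short-range part that gives the submolecular contrast.

```python
import numpy as np

# Generic Lennard-Jones pair potential, just to show the crossover from van der
# Waals attraction to hard-core repulsion. Parameters are illustrative, not those
# of a CO-terminated tip above pentacene.
eps = 0.010      # well depth, eV
sigma = 0.30     # length scale, nm

r = np.linspace(0.28, 0.60, 9)                         # tip-sample distance, nm
U = 4 * eps * ((sigma / r)**12 - (sigma / r)**6)       # potential energy, eV
F = -np.gradient(U, r)                                 # force; 1 eV/nm is about 0.16 nN

for ri, Fi in zip(r, F):
    print(f"r = {ri:0.3f} nm   F = {Fi * 0.16:+7.3f} nN")
# Negative (attractive) at larger separations, steeply positive (repulsive)
# once r drops below about 1.12*sigma.
```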
Also, a group at Berkeley has made a laser based on a CdS nanowire, and like the result mentioned last week, this gadget uses plasmons (this time in a Ag film) to act as an effective cavity. Clearly using the extreme confinement of some plasmon modes to do photonics is going to be a growth industry.
Sunday, August 30, 2009
Industrial R&D
I've felt for a long time that the current business climate, which punishes rather than rewards long-term research investments by companies, is misguided. When most stock is owned and traded by institutional investors and large funds who don't have any interest in holding particular companies for the long term, and when executive compensation massively overvalues year-over-year growth (because we all know that 40% annual growth in cell phone sales is sustainable forever, right? There's no such thing as market saturation, is there?), you end up where long-term investment is viewed by company boards as a misuse of resources. This article in Business Week makes some interesting arguments on ways to try and fix this. Unfortunately I think most of these ideas are not very compelling or likely. Norm Augustine had an interesting suggestion: scale the capital gains tax rate inversely with the amount of time one owns a stock. If someone holds a stock less than a year, tax the capital gains at 90%. If they own the stock 10 years or more, tax the capital gains at nearly 0%. Interpolate appropriately. The idea here is to set up a system that incentivizes long-term investment, which in turn is more likely to support industrial research. Hard to see how such an overhaul would ever get passed in Congress, though. I imagine the financial industry would crush it like a bug, since anything that slows down trading is viewed as interference in the free market, or, more cynically, interference in their enormous transaction fee profits.
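Just to spell out the arithmetic of that sliding scale (one possible reading of "interpolate appropriately"; the endpoints are Augustine's, the linear ramp is my assumption):

```python
def capital_gains_rate(years_held):
    """Augustine-style sliding capital gains tax, illustrative numbers only:
    90% if held under a year, 0% after ten years, linear in between."""
    if years_held < 1:
        return 0.90
    if years_held >= 10:
        return 0.0
    return 0.90 * (10 - years_held) / 9.0

for yrs in [0.5, 1, 3, 5, 10]:
    print(f"held {yrs:4.1f} years -> tax rate {capital_gains_rate(yrs):4.0%}")
```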
Wednesday, August 26, 2009
How we fund grad students
As new grad students flood onto campuses across the US, I just got around to reading this piece in Science from a few weeks ago about Roald Hoffmann's idea for changing the way we support grad students in the sciences and engineering. Most S&E grad students in the US are supported by a mix of teaching assistantships (TAs), research assistantships (RAs), and fellowships. A typical S&E grad student at an American university shows up and is supported during their first year by a mix of university funds and pay for teaching. They then often make the transition to being supported as an RA by research funds obtained by their advisor through research grants. (Some remain as TAs - this is more common at large, public institutions with large undergraduate teaching needs.) Some relatively small fraction of S&E grad students are supported instead by fellowships, awarded competitively by agencies like NSF, DOE, DOD, NIH, etc. or by private foundations such as the Hertz Foundation.
Prof. Hoffmann suggests that we should move to a system where all grad student support is fellowship-based. The idea is that this will (a) fund only the best students; (b) allow students much greater independence since an advisor will no longer be able to say "You have to do boring experiment #23 because that's what the grant that's paying your salary says we're going to do"; (c) result in better mentoring b/c faculty will no longer view students as "hands". Now, there's basically no way to see how such a drastic change in the system would ever happen, but it's worth looking at the idea.
As someone lucky enough to have a fellowship in grad school, I understand the appeal from the student side. Independence is great - it means that you and your advisor are freed from the stress of worrying that your grant won't get renewed when you're in year 3 of your program. It means that you are a free agent.
However, I think Hoffmann's idea would be a disaster, for two main research-related reasons (not to mention the challenge of how you'd handle TA duties at large places that suddenly had many fewer grad students). First, there is little doubt that this would skew an already tilted system even further in favor of the top, say, 20 institutions in the country. Right now it's possible for good researchers at second tier universities to write grants, hire students, and do research. Imagine instead if the only source of student support were competitive external fellowships. It's all well and good to talk about overproduction of PhDs, and say that drastically reducing the number of grad students would be good for employment and salaries. There is a point to that. However, you would effectively end research as an enterprise at many second and third-tier schools, and there are a fair number of really good programs that would go away. Second, since federally funded fellowships would presumably only go to US citizens, this idea would drastically reduce international PhD students in S&E. That, too, would be a mess. Some of our best students are international students, and whether or not they stay in the US after their degrees, training these people is a valuable service that the current US system provides.
It is worth considering other funding schemes, though. I know that in the UK students are supported through their PhD, rather than on a schedule set by external grant deadlines. Perhaps some of my UK readers could comment on the pluses and minuses of this approach.
Monday, August 24, 2009
plasmons instead of cavities
Sorry for the delay in posts. The beginning of the new academic year is a hectic time.
This paper is a very exciting new result. Unfortunately there does not appear to be a publicly accessible version available. Ordinarily, lasing (that is, light amplification by the stimulated emission of radiation) requires a few things. One needs a "gain medium", some kind of optically active system that has (at least one) radiative transition. In this paper, the medium is a dielectric oxide containing dye molecules known to fluoresce at a wavelength of 520 nm. This medium needs to be pumped somehow, so that there are more optically active systems in the excited state than in the ground state. This is called "population inversion". (It is possible to get lasing without inversion, but that's a very special case....) Finally, one generally needs a cavity - an optical resonator of high enough quality that an emitted photon stays around long enough to stimulate the emission of many more photons. The cavity has to be somewhat leaky, so that the laser light can get out. However, if the cavity is too leaky, the optical gain from stimulated emission in the pumped medium can't outpace the cavity losses. The usual approach is to have a rather high quality cavity, made using either dielectric mirrors, total internal reflection, or some other conventional reflectors.
In this paper, however, the authors take a different tack. They use the near-field from the plasmon resonance of the gold core (not coincidentally, at around 520 nm wavelength) of Au-core-dielectric-shell nanoparticles. Plasmon resonances are often quite lossy, and this is no exception - the Q of the plasmon resonance is around 14. However, the enhanced near field is so large, and the effective mode volume (confinement) is so small, that gain still outpaces loss. When the dye is optically pumped, it is possible to make these nanoparticles lase. This paper is likely to spawn a great deal of further work! It's cool, and there are many clear directions to pursue now that this has been demonstrated.
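To put a number on how leaky a Q of 14 really is (my own back-of-the-envelope, not from the paper): the photon loss rate of a resonator is kappa = omega/Q, so the stored field survives only about Q/(2 pi) optical cycles.

```python
import numpy as np

# Photon lifetime of a lossy "cavity" at 520 nm: kappa = omega / Q.
c = 3.0e8                 # m/s
lam = 520e-9              # m
omega = 2 * np.pi * c / lam

for label, Q in [("plasmon resonance", 14), ("typical dielectric microcavity", 1e4)]:
    kappa = omega / Q                     # photon loss rate, 1/s
    print(f"{label:32s} Q = {Q:7.0f}   photon lifetime ~ {1 / kappa * 1e15:8.1f} fs")
# With a photon lifetime of only a few femtoseconds, the pumped dye has to
# replace photons at an enormous rate. That's why the huge near-field
# enhancement and tiny mode volume matter so much here.
```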
Thursday, August 13, 2009
This week in cond-mat
Where did summer go? Several interesting things on the arxiv recently. Here are two from this past week that caught my eye.
arxiv:0806.3547 - Katz et al., Uncollapsing of a quantum state in a superconducting phase qubit
This paper first appeared on the arxiv last year, and it made it onto this week's mailings because the authors uploaded the final, published version (PRL 101, 200401 (2008)). This experiment is important as a technical development in the quantum computing community, since the ability to restore some measure of purity to a quantum state after that state gets entangled with some environmental degrees of freedom could be very useful. It is also a great example of why simplistic thought experiments about wavefunction collapse are misleading. A better way to think about this experiment is in (an imperfect) analogy to spin echo in nuclear magnetic or electron spin resonance. In a spin echo experiment, an ensemble of spins is set precessing, and the evidence of their coherent precession gets smeared out as a function of time as the spins "dephase" (get out of sync because of perturbing interactions with other degrees of freedom). However, in these echo experiments, a properly defined external perturbation (a pulse of microwaves) can flip all of these spins around, so that the ones originally going ahead of the pack are put in the back, and the slow ones are put in the front. The spins rephase, or become coherent in their motion again. The authors do something rather analogous here using superconducting devices. Nice!
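Here's a bare-bones numerical version of the spin-echo analogy (a classical precession picture with made-up numbers, not the actual superconducting-qubit protocol of the paper): spins with a spread of precession frequencies fan out, a pi pulse at time tau flips the accumulated phases, and the ensemble coherence revives at 2*tau.

```python
import numpy as np

# Classical spin-echo cartoon: dephasing from a spread of precession
# frequencies, then rephasing after a phase-flipping pi pulse at t = tau.
rng = np.random.default_rng(2)
freqs = 2 * np.pi * (1.0 + 0.05 * rng.normal(size=2000))   # rad/s, 5% spread
tau = 10.0                                                  # pi pulse applied here

def signal(t):
    phase = freqs * t if t <= tau else freqs * t - 2 * freqs * tau   # phase flip at tau
    return np.abs(np.mean(np.exp(1j * phase)))              # ensemble coherence

for t in [0.0, 5.0, 10.0, 15.0, 20.0, 25.0]:
    print(f"t = {t:5.1f}   |<exp(i*phi)>| = {signal(t):0.3f}")
# The coherence decays as the spins fan out, then recovers to ~1 at t = 2*tau.
```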
arxiv:0908.1126 - N. P. Armitage, Electrodynamics of correlated electron systems
I'm not promoting this just because Peter sometimes comments on this blog. This is a great set of lecture notes from a 2008 summer school at Boulder. These notes provide a very good, pedagogical overview of how electromagnetic radiation interacts with the electronic systems of real materials, and how one can use measurements ranging from the THz (mm-wave) to the ultraviolet to infer details of the electronic properties. These sorts of reviews are a wonderful feature of the arxiv.
Wednesday, August 05, 2009
LHC and the hazards of Big Science
This article in the NY Times about the LHC's current problems was interesting. To be fair, the LHC is an incredibly complex undertaking. Making high quality superconducting joints between magnets is a complex business, involving spot-welding annoying materials like niobium-titanium alloys. Testing is a real pain, since room temperature measurements can't always identify bad joints. Still, they clearly didn't design an optimal testing and commissioning regimen. I'm sure they'll get these problems licked, and great science will eventually come out of the machine - it's just a question of how long that'll take. I do wonder, though, if stories like this are, in part, a consequence of their own publicity machine, which has been hammering the general public relentlessly for years about how the LHC is going to unlock the secrets of the universe.
This situation is a prime hazard of Big Science. One thing I definitely like about condensed matter and AMO physics, for example, is that you are often (though not always) in control of your own destiny. Progress is generally not dependent on 1000 other people and 500 vendors and suppliers, nor do you have to hope that some launch schedule isn't screwed up by a hailstorm. The general public needs to know that really good science can be done on a much smaller scale. While the LHC outreach effort is meant to inspire young people into pursuing physics, situations like these delays and the accompanying reporting probably frighten away more people from the field than they attract. If a layperson ends up with the impression that all physics is hugely expensive, and even then doesn't work right, that's not a good thing.
Saturday, August 01, 2009
Chemistry vs. Physics blogging
Interesting. The pseudonymous Kyle Finchsigmate at The Chem Blog just gave some stats about his blogging. He gets something like 6000 unique visitors a day, while I get about 150. Admittedly, we have rather different styles (pseudonymity makes it easy to write with more, umm, gusto, and to slam lousy papers openly, both of which probably give his blog broader appeal), and there are a lot more chemists out there than condensed matter physicists. Still, the factor of 40 is a bit intimidating.
Thursday, July 30, 2009
More musing about phase transitions
Everyone has seen phase transitions - water freezing and water boiling, for example. These are both examples of "first-order" phase transitions, meaning that there is some kind of "latent heat" associated with the transition. That is, it takes a certain amount of energy to convert 1 g of solid ice into 1 g of liquid water while the temperature remains constant. The heat energy is "latent" because as it goes into the material, it's not raising the temperature - instead it's changing the entropy, by making many more microscopic states available to the atoms than were available before. In our ice-water example, at 0 C there are a certain number of microscopic states available to the water molecules in solid ice, including states where the molecules are slightly displaced from their equilibrium positions in the ice crystal and rattling around. In liquid water at the same temperature, there are many more possible microscopic states available, since the water molecules can, e.g., rotate all over the place, which they could not do in the solid state. (This kind of transition is "first order" because the entropy, which can be thought of as the first derivative of some thermodynamic potential, is discontinuous at the transition.) Because this kind of phase transition requires an input or output of energy to convert material between phases, there really aren't big fluctuations near the transition - you don't see pieces of ice bopping in and out of existence spontaneously inside a glass of icewater.
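To put the "first derivative" statement in symbols (standard thermodynamics, using the familiar textbook value for ice as a worked example): the entropy is the first derivative of the Gibbs free energy, and the latent heat is tied to the jump in that entropy,

S = -\left( \frac{\partial G}{\partial T} \right)_p , \qquad L = T\,\Delta S .

For melting ice at T = 273 K, the latent heat L \approx 334 J/g corresponds to an entropy jump \Delta S = L/T \approx 1.2 J/(g K). That finite jump in the first derivative of G is exactly what "first order" means.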
There are other kinds of phase transitions. A major class of much interest to physicists is that of "second-order" transitions. If one goes to high enough pressure and temperature, the liquid-gas transition becomes second order, right at the critical point where the distinction between liquid and gas vanishes. A second order transition is continuous - that is, while there is a change in the collective properties of the system (e.g., in the ferro- to paramagnetic transition, you can think of the electron spins as many little compass needles; in the ferromagnetic phase the needles all point the same direction, while in the paramagnetic phase they don't), the number of microscopic states available doesn't change across the transition. However, the rate at which microstates become available with changes in energy is different on the two sides of the transition. In second order transitions, you can get big fluctuations in the order of the system near the transition. Understanding these fluctuations ("critical phenomena") was a major achievement of late 20th century theoretical physics.
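In the same bookkeeping, a second-order transition has no jump in S (and hence no latent heat); instead, the kink in S shows up as a discontinuity one derivative higher, for example in the specific heat,

C_p = T \left( \frac{\partial S}{\partial T} \right)_p = -T \left( \frac{\partial^2 G}{\partial T^2} \right)_p ,

which jumps (or, at some transitions, diverges) while G and S themselves remain continuous through the transition.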
Here's an analogy to help with the distinction: as you ride a bicycle along a road, the horizontal distance you travel is analogous to increasing the energy available to one of our systems, and the height of the road corresponds to the number of microscopic states available to the system. If you pedal along and come to a vertical cliff, and the road continues on above your head somewhere, that's a bit like the 1st order transition. With a little bit of energy available, you can't easily go back and forth up and down the cliff face. On the other hand, if you are pedaling along and come to a change in the slope of the road, that's a bit like the 2nd order case. Now with a little bit of energy available, you can imagine rolling back and forth over that kink in the road. This analogy is far from perfect, but maybe it'll provide a little help in thinking about these distinctions. One challenge in trying to discuss this stuff with the lay public is that most people only have everyday experience with first-order transitions, and it's hard to explain the subtle distinction between 1st and 2nd order.
Wednesday, July 22, 2009
The Anacapa Society
Hat tip to Arjendu for pointing this out. The Anacapa Society is a national society that promotes and encourages research in computational and theoretical physics at primarily undergrad institutions. They've had a good relationship with the KITP at UCSB, and have just signed an agreement that gives them a real home at Amherst College. (I've had a soft spot for Amherst since back in the day, when I was struggling to decide between the tier-1 research route and the undergrad education trajectory.) The nice thing about promoting this kind of research is that, particularly on the computational side, well-prepared undergrads at smaller institutions can make real contributions to science without necessarily needing the expensive infrastructure required for some hardcore experimental areas.
Cute optics demo
This youtube video is something that I'll have to remember for a future demo. It shows that cellophane tape makes (one-sided) frosted glass appear transparent. Quite striking! The reason this works is pretty straightforward from the physics perspective. Frosted glass looks whitish because its surface has been covered (by sandblasting or something analogous) with little irregularities that have a typical size scale comparable to the wavelengths of visible light. Because of the difference in index of refraction between glass and air, these little irregularities diffusely scatter light, and they do a pretty equitable job across the visible spectrum. (This is why clouds are white, too, by the way.) By coating the glass intimately with a polymer layer (with an index of refraction closer to that of the glass than to that of the air), one is effectively smoothing out the irregularities to a large degree. As far as I know, this is essentially the same physics behind why wet fabrics often appear darker than dry fabrics. Some of the apparent lightness of the dry material is due to diffuse scattering by ~ wavelength-sized stray threads and fibers. A wetting liquid acts as an index-matching medium, effectively smoothing out those inhomogeneities and reducing that diffuse scattering.
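For a rough sense of scale, here is a back-of-the-envelope sketch. The diffuse scattering isn't literally normal-incidence reflection, but the Fresnel reflectance R = ((n1 - n2)/(n1 + n2))^2 is a reasonable proxy for how strongly each little surface facet scatters, and the index values below are assumed typical numbers (glass near 1.5, cellophane film plus adhesive near 1.47), not measurements of the actual materials in the video.

# Back-of-the-envelope: how much does the tape reduce the index mismatch at the
# frosted surface? Indices are assumed "typical" values, not measurements:
# glass ~ 1.5, air = 1.0, cellophane tape (film + adhesive) ~ 1.47.
def reflectance(n1, n2):
    # Normal-incidence Fresnel reflectance for an interface between indices n1 and n2.
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_glass, n_tape = 1.00, 1.50, 1.47
print("glass/air : R = %.4f" % reflectance(n_glass, n_air))   # about 0.04
print("glass/tape: R = %.6f" % reflectance(n_glass, n_tape))  # about 1e-4
# Each facet of the roughened surface scatters a few hundred times less light once the
# tape fills in the air gaps, which is why the frosted surface suddenly looks clear.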