A blog about condensed matter and nanoscale physics. Why should high energy and astro folks have all the fun?
Thursday, December 31, 2009
Happy New Year
Happy New Year to my readers. Posts will pick up again in 2010. In the meantime, you might be amused by a couple of science-y gifts I got this holiday season. I've got a great science museum-type demo in mind inspired by this desk toy, and no lab should ever be without a sonic screwdriver. Finally, while not strictly science-related, this is very funny, containing such gems as Super Monkey Collider Loses Funding.
Friday, December 25, 2009
Arxiv articles I should read
Some recent arxiv articles that I really should find the time to read in depth:
arxiv:0809.3474 - Affleck, Quantum impurity problems in condensed matter physics
Ian Affleck has revised his (rather mathematical) Les Houches lecture notes about quantum impurity problems (typically a single impurity, such as an unpaired electron, in contact with some kind of quantum environment).
arxiv:0904.1933 - Cubrovic, Zaanen, and Schalm, String theory, quantum phase transitions, and the emergent Fermi liquid
This is a Science paper related to my earlier post about the connection between certain quantum gravity models and condensed matter theories.
arxiv:0912.4868 - Heiblum, Fractional charge determination via quantum shot noise measurements
Heiblum is a consummate experimentalist, and this article in honor of Yoseph Imry looks like a great review of this area, particularly recent insights into the subtleties that happen with temperature and bias.
Sunday, December 20, 2009
Noise IV
The last kind of electrical noise I wanted to discuss is called 1/f or "flicker" noise, and it's something of a special case. It's intrinsic in the sense that it originates with the material whose conductance or resistance is being measured, but it's usually treated as extrinsic, in the sense that its physical mechanism is not what's of interest and in the limit of an "ideal" sample it probably wouldn't be present. Consider a resistance measurement (that is, flowing current through some sample and looking at the resulting voltage drop). As the name implies, the power spectral density of voltage fluctuations, SV, has a component that varies approximately inversely with the frequency. That is, the voltage fluctuates as a function of time, and the slow fluctuations have larger amplitudes than the fast fluctuations. Unlike shot noise, which results from the discrete nature of charge, 1/f noise exists because the actual resistance of the sample itself is varying as a function of time. That is, some fluctuation dV(t) comes from I dR(t), where I is the average DC current. On the bright side, that means there is an obvious test of whether the noise you're seeing is of this type: real 1/f noise power scales like the square of the current (in contrast to shot noise, which is linear in I, and Johnson-Nyquist noise, which is independent of I).
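That scaling test can be sketched in a few lines of Python (an illustrative fit with made-up numbers, not real data): fit the slope of log SV versus log I, and a slope near 2 points to resistance fluctuations, near 1 to shot noise, near 0 to Johnson-Nyquist noise.

```python
import math

def noise_exponent(currents, psd_values):
    """Least-squares slope of log(S_V) vs log(I)."""
    xs = [math.log(i) for i in currents]
    ys = [math.log(s) for s in psd_values]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))

# Hypothetical data: resistance fluctuations, S_V = S_R * I^2,
# with a made-up S_R = 1e-6 Ohm^2/Hz
I = [1e-6, 2e-6, 5e-6, 1e-5]
S_V = [1e-6 * i ** 2 for i in I]
print(noise_exponent(I, S_V))  # 2.0 -> consistent with 1/f-type resistance noise
```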
The particular 1/f form is generally thought to result from there being many "fluctuators" with a broad distribution of time scales. A "fluctuator" is some microscopic degree of freedom, usually considered to have two possible states, such that the electrical resistance is different in each state. The ubiquitous two-level systems that I've mentioned before can be fluctuators. Other candidates include localized defect states ("traps") that can either be empty or occupied by an electron. These latter are particularly important in semiconductor devices like transistors. In the limit of a single fluctuator, the resistance toggles back and forth stochastically between two states in what is often called "telegraph noise".
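A minimal numerical sketch of how this superposition works (illustrative parameters only): each two-state fluctuator with correlation time tau contributes a Lorentzian power spectrum proportional to tau / (1 + (2 pi f tau)^2), and summing Lorentzians whose correlation times are spread uniformly in log tau yields a spectrum that falls like 1/f across the covered band.

```python
import math

def summed_lorentzians(f, taus):
    # Each fluctuator contributes a Lorentzian centered at zero frequency,
    # with corner frequency 1/(2 pi tau)
    return sum(t / (1 + (2 * math.pi * f * t) ** 2) for t in taus)

# Correlation times spread uniformly in log(tau) over six decades, 1 us to 1 s
taus = [10 ** (-6 + 6 * k / 200) for k in range(201)]

# Well inside the band, the summed spectrum falls off like 1/f:
S1 = summed_lorentzians(10.0, taus)
S2 = summed_lorentzians(100.0, taus)
print(S1 / S2)  # close to 10, i.e. S ~ 1/f
```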
A thorough bibliography of 1/f noise is posted here by a thoughtful person.
I can't leave this subject without talking about one specific instance of 1/f noise that I think is very neat physics. In mesoscopic conductors, where electronic conduction is effectively a quantum interference experiment, changing the disorder seen by the electrons can lead to fluctuations in the conductance (within a quantum coherent volume) by an amount ~ e²/h. In this case, the resulting 1/f noise observed in such a conductor actually grows with decreasing temperature, which is the opposite of, e.g., Johnson-Nyquist noise. The reason is the following. In macroscopic conductors, ensemble averaging of the fluctuations over all the different conducting regions of a sample suppresses the noise; as T decreases, though, the typical quantum coherence length grows, and this kind of ensemble averaging is reduced, since the sample contains fewer coherent regions. My group has done some work on this in the past.
Thursday, December 17, 2009
Physics and Industry
I read this column on the back page of this month's APS News, and I think it hits a lot of the right notes, until this paragraph:
Many of the Nation’s physics departments and other departments staffed by physicists should encourage some of their faculty members to take a two or three year sabbatical leave and join the physics staffs of companies wishing to use their skills to strengthen or rebuild their industrial bases. With the expected cutbacks in Federal spending for everything, including scientific research, the physics academic staffs, that already spend far too much of their time writing proposals to compete for Government grants, should help the Nation by joining one of the many companies who really could use their skills to refine their products and introduce the innovations so characteristic of their physics training. In their new industrial positions, the successes of these industrially focused physicists would encourage further enrollments in physics and all related sciences. Meanwhile the Nation’s manufacturing base would be strengthened and rebuilt.
While this is nice in the abstract, I'm trying to imagine how this is any more likely to happen than me getting my own unicorn and a candy-cane tree. How can an academic physicist with a functioning research group possibly take off for two or three years to work in industry? What happens to their students? Their other funding? What university would actually encourage this, given that they have to have the salary line, lab space, and office space still there, and that they have teaching/service needs? In an era when companies are loath to hire permanent research staff and give them proper facilities and resources (allegedly because such things do not maximize (short-term) profits and therefore dilute shareholder value), why on earth would a company want a revolving door of temporary employees that need the same resources as permanent staff but are in continual need of training and business education?
It seems to me that a more realistic approach, if you really want to encourage an industrial R&D resurgence in the US, would focus on tax and policy incentives to convince companies to invest in this stuff. Discourage ultrashort-term strategies that maximize next quarter's profits rather than ensuring long term health of the company. Give federal loan guarantees to companies that want to establish research efforts. I'm 100% certain that if the industrial R&D jobs were there, we would fill them - the problem is that US companies overall have decided that investing in physics doesn't give them a quick stock price boost. If you want to encourage more interactions between university research faculty and industry, fine. Give tax breaks for industrial consulting or university research funding by industry. (Though biomedical research shows that extremely strong coupling between researchers and their profit-motivated funding sources is not necessarily a good thing.)
Tuesday, December 15, 2009
Noise III
While Johnson-Nyquist noise is an equilibrium phenomenon, shot noise is a nonequilibrium effect, only present when there is a net current being driven through a system. Shot noise is a consequence of the fact that charge comes in discrete chunks. Remember, current noise is the mean-square fluctuations about the average current. If charge was a continuous quantity, then there wouldn't be any fluctuations - the average flow rate would completely describe the situation. However, since charge is quantized, a complete description of charge flow would instead be an itemized list of the arrival times of each electron. With such a list, a theorist could calculate not just the average current, but the fluctuations, and all of the higher statistical moments. This is called "full counting statistics", and is actually achievable under certain very special circumstances.
Schottky, about 90 years ago, worked out the expected current noise power spectral density, SI, for the case of independent electrons traversing a single region with no scattering (as in a vacuum tube diode, for example). If the electrons are truly independent (this electron doesn't know when the last electron came through, or when the next one is going through), and there is just some arrival rate for them, then the electron arrivals are described by Poisson statistics. In this case, Schottky showed that SI = 2 e < I > A²/Hz. That is, the current noise is proportional to the average current, with a proportionality constant that is twice the electronic charge.
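Here's a toy simulation of the Poisson picture (pure Python, with an arbitrary 1 nA average current and 1 microsecond counting bins): for independent arrivals, the variance of the electron count in a bin equals its mean, which is the statistical content behind the Schottky formula.

```python
import math, random
random.seed(42)

e = 1.602176634e-19   # electron charge, C

def poisson_sample(mu):
    # Knuth's method: count uniform draws until their product drops below e^-mu
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

I_avg = 1e-9                   # target average current, 1 nA
tau = 1e-6                     # counting-bin width, s
mean_n = (I_avg / e) * tau     # expected electrons per bin (~6.2)

counts = [poisson_sample(mean_n) for _ in range(20000)]
mean = sum(counts) / len(counts)
var = sum((n - mean) ** 2 for n in counts) / len(counts)

# Poisson arrivals: Var(N) = <N>. With I = e N / tau, this is equivalent
# to the Schottky result S_I = 2 e <I> (a Fano factor of 1).
print(var / mean)   # close to 1 for independent electrons
```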
In the general case, when electrons are not necessarily independent of each other, it is more common to write the zero temperature shot noise as SI = F 2 e < I >, where F is called the Fano factor. One can think of F as a correction factor, but sometimes it's better to think of F as describing the effective charge of the charge carriers. For example, suppose current was carried by pairs of electrons, but the pair arrivals are Poisson distributed. This situation can come up in some experiments involving superconductors. In that case, one would find that F = 2, or you can think of the effective charge carriers being the pairs, which have charge 2e. These deviations away from the classical Schottky result are where all the fun and interesting physics lives. For example, shot noise measurements have been used to show that the effective charge of the quasiparticles in the fractional quantum Hall regime is fractional. Shot noise can also be dramatically modified in highly quantum coherent systems. See here for a great review of all of this, and here for a more technical one.
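The pair example can be checked the same way (again with made-up numbers): if Poisson-distributed arrivals each carry charge 2e, converting the binned current fluctuations into a noise power and dividing by 2 e < I > gives a Fano factor near 2.

```python
import math, random
random.seed(0)

e = 1.602176634e-19   # electron charge, C

def poisson_sample(mu):
    # Knuth's method for Poisson-distributed counts
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

tau = 1e-6     # counting-bin width, s
mu = 5.0       # mean number of *pair* arrivals per bin (illustrative)
q = 2 * e      # each arrival carries charge 2e

currents = [q * poisson_sample(mu) / tau for _ in range(20000)]
I_avg = sum(currents) / len(currents)
var_I = sum((i - I_avg) ** 2 for i in currents) / len(currents)

# One-sided noise power from the binned variance (bandwidth 1/(2 tau)):
# S_I = 2 * tau * Var(I). Compare with the single-electron value 2 e <I>.
F = 2 * tau * var_I / (2 * e * I_avg)
print(F)   # close to 2: the pairs act like carriers of charge 2e
```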
Nanostructures are particularly relevant for shot noise measurements. It turns out that shot noise is generally suppressed (F approaches zero) in macroscopic conductors. (It's not easy to see this based on what I've said so far. Here's a handwave: the serious derivation of shot noise follows an electron at a particular energy and looks to see whether it's transmitted or reflected from some scattering region. If the electron is instead inelastically scattered with some probability into some other energy state, that's a bit like making the electrons continuous.) To see shot noise clearly, you either need a system where conduction is completely controlled by a single scattering-free region (e.g., a vacuum tube; a thin depletion region in a semiconductor structure; a tunnel barrier), or you need a system small enough and cold enough that inelastic scattering is rare.
The bottom line: shot noise is a result of current flow and the discrete nature of charge, and deviations from the classical Schottky result tell you about correlations between electrons and the quantum transmission properties of your system. Up next: 1/f noise.
Monday, December 14, 2009
Interesting times.
It's always interesting to read about your institution in the national media. It's a pretty good article that captures both sides of the Rice-Baylor question.
Sunday, December 13, 2009
Noise II
One type of electronic noise that is inescapable is Johnson-Nyquist noise. Roughly a hundred years ago, physicists studying electricity noticed that sensitive measurements showed noise (apparently random time-varying fluctuations in the voltage or current). They found that the power spectral density of (for example) the voltage noise, SV, was larger when the system in question was more resistive, and that higher temperatures seemed to make the problem even worse. Bert Johnson, then at Bell Labs, did a very careful study of this phenomenon in 1927, and showed that this noise appeared to result from statistical fluctuations in the electron "gas". This allowed him to do systematic measurements (with different resistances at fixed temperature, and a fixed resistance at varying temperatures) and determine Boltzmann's constant (though he ended up off by ~10%). Read the original paper if you want to get a good look at how a careful experimentalist worked eighty years ago.
Very shortly thereafter, Harry Nyquist came up with a very elegant explanation for the precise magnitude of the noise. Imagine a resistor, and think of the electrons in that resistor as a gas at some temperature, T. All the time the electrons are bopping around; at one instant there might be an excess of electrons at one end of the resistor, while later there might be a deficit. This all averages out, since the resistor is overall neutral, but in an open circuit configuration these fluctuations would lead to a fluctuating voltage across the resistor. Nyquist said, imagine a 1d electromagnetic cavity (transmission line), terminated at each end by such a resistor. If the whole system is in thermal equilibrium, we can figure out the energy content of the modes (of various frequencies) of the cavity - it's the black body radiation problem that we know how to solve. Now, any energy in the cavity must come from these fluctuations in the resistors. On the other hand, since the whole system is in steady state and no energy is building up anywhere, the energy in the cavity is also being absorbed by the resistors. This is an example of what we now call the fluctuation-dissipation theorem: the fluctuations (open-circuit voltage or short-circuit current) in the circuit are proportional to how dissipative the circuit is (the resistance). Nyquist ran the numbers and found the result we now take for granted. For open-circuit voltage fluctuations, SV = 4 kB T R V²/Hz, independent of frequency (ignoring quantum effects). For short-circuit current fluctuations, SI = 4 kB T / R A²/Hz.
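To put numbers on this, here is a quick calculation of the RMS voltage noise implied by SV = 4 kB T R for a (made-up but typical) measurement: a 1 megaohm resistor at room temperature over a 10 kHz bandwidth.

```python
import math

def johnson_noise_vrms(R, T, bandwidth):
    """RMS open-circuit voltage noise of a resistor R (ohms) at temperature
    T (kelvin), integrated over a measurement bandwidth (Hz):
    sqrt(4 kB T R * bandwidth)."""
    kB = 1.380649e-23  # Boltzmann's constant, J/K
    return math.sqrt(4 * kB * T * R * bandwidth)

# A 1 Mohm resistor at 300 K, measured over a 10 kHz bandwidth:
v = johnson_noise_vrms(1e6, 300, 1e4)
print(v)  # ~1.29e-5 V, i.e. about 13 microvolts rms
```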
Johnson-Nyquist noise is an unavoidable consequence of thermodynamic equilibrium. It's a reason many people cool their amplifiers or measurement electronics. It can also be useful. Noise thermometry (here, for example) has become an excellent way of measuring the electronic temperature in many experiments.
Friday, December 11, 2009
Noise I
For a while now the fraction of condensed matter physicists that think about electronic transport measurements have been interested in noise as a means of learning more about the underlying physics in systems. I thought it would be useful to give a sense of why noise is important. First, what do we mean by noise? As you might imagine from the colloquial meaning of the term, electronic noise manifests itself as fluctuations as a function of time in either the current through a system (current noise) or the voltage difference across a system (voltage noise). These fluctuations are distributed about some mean value of current or voltage, so the smart way to characterize them is by taking the average of the square of the deviation from the mean (e.g., <(I - <I>)²>, where the angle brackets denote averaging over time, and I is the current). You can imagine that these fluctuations are distributed over all sorts of time scales - some might be fast and some might be slow. The natural thing to do is work in the frequency domain (Fourier transforming the fluctuations), and then you can worry about the power spectral density of the fluctuations. For current noise, this is usually written SI, which has units of A²/Hz. If you evaluate SI at a particular frequency, then that tells you the size of the mean square current fluctuations within a 1 Hz bandwidth about that frequency. There is an analogous quantity SV [V²/Hz] for voltage noise. If the power spectral density is constant over a broad range of frequencies (up to some eventual high frequency cutoff), the noise is said to be "white". If, instead, there is a systematic trend with a larger power spectral density at low frequencies, the noise is sometimes called "pink".
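The definitions above can be made concrete with a short estimator (a pedagogical sketch using a direct discrete Fourier transform, not an efficient FFT): subtract the mean, transform, and normalize so that integrating the one-sided spectrum over frequency recovers the mean-square fluctuation.

```python
import cmath, math, random
random.seed(1)

def one_sided_psd(x, dt):
    """One-sided power spectral density of samples x taken every dt seconds.
    Returns (freqs, S), with S in (units of x)^2 per Hz."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]  # fluctuations about the mean
    freqs, S = [], []
    for k in range(1, n // 2):  # skip DC; ignore the single Nyquist bin
        X = sum(v * cmath.exp(-2j * math.pi * k * m / n)
                for m, v in enumerate(x))
        freqs.append(k / (n * dt))
        S.append(2 * dt / n * abs(X) ** 2)  # one-sided normalization
    return freqs, S

# White "current noise" test signal: 512 Gaussian samples, 1 kHz sampling
x = [random.gauss(0.0, 1.0) for _ in range(512)]
freqs, S = one_sided_psd(x, dt=1e-3)

# Parseval-style check: the integral of S over frequency gives back
# (essentially all of) the variance of the signal
df = freqs[1] - freqs[0]
print(sum(S) * df)  # approximately Var(x), i.e. close to 1
```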
In any measurement, there might be several kinds of noise that one must worry about. For example, your measuring equipment might show that the apparent SI or SV has several sharp peaks at particular frequencies. This is narrow band noise, and might be extrinsic, resulting from unintentional pickup. The classic examples include 60 Hz (50 Hz in many places outside the US) and its multiples, due to power lines, ~ 30 kHz from fluorescent lights, 540-1700 kHz from AM radio, 85-108 MHz from FM radio, etc. Extrinsic noise is, in physicist parlance, uninteresting, though it may be a major practical annoyance. There are sometimes intrinsic sources of narrow band noise, however, that can be very interesting indeed, since they indicate something going on inside the sample/system in question that has a very particular time scale.
There are three specific types of noise that are often of physical interest, particularly in nanostructures: thermal (Johnson-Nyquist) noise, shot ("partition") noise, and 1/f ("flicker") noise. I'll write a bit about each of these soon.
Wednesday, December 09, 2009
Fun new CD
They Might Be Giants has a new CD that my readers with kids might enjoy: Science is Real. Fun stuff. This song has long been a favorite of mine.
Tuesday, December 08, 2009
Cryogenic dark matter detection
Whether this rumor turns out to be accurate or not, the technology used in the CDMS collaboration's dark matter search is quite interesting. Working down the hall from these folks in graduate school definitely gave me an appreciation for the challenges they face, as well as teaching me some neat condensed matter physics and experimental knowledge.
The basic challenge in dark matter detection is that weakly interacting particles are, well, very weakly interacting. We have all kinds of circumstantial evidence (rotation curves of galaxies; gravitational lensing measurements of mass distributions; particular angular anisotropies in the cosmic microwave background) that there is a lot of mass out there in the universe that is not ordinary baryonic matter (that is, made from protons and neutrons). The dark matter hypothesis is that there are additional (neutral) particles out there that couple only very weakly to normal matter, certainly through gravity, and presumably through other particle physics interactions with very small cross-sections. A reasonable approach to looking for these particles would involve watching for them to recoil off the nuclei of normal matter somehow. These recoils would dump energy into the normal matter, but you'd need to distinguish between these events and all sorts of others. For example, if any atoms in your detector undergo radioactive decay, that would also dump energy into the detector material's lattice. Similarly, if a cosmic ray came in and banged around, that would deposit energy, too. Those two possibilities also deposit charge into the detector, though, so the ability to identify and discount recoil events associated with charged particles would be essential. Neutrons hitting the detector material would be much more annoying.
The CDMS detectors consist of ~ cm-thick slabs of Si (ok) and Ge (better, because Ge is heavier and therefore has more nuclear material), each with an electrical ground plane (very thin low-Z metal film) on one side and an array of meandering tungsten micro-scale wires on the other side. The tungsten meanders are "superconducting transition edge bolometers". The specially deposited tungsten films have a superconducting transition somewhere near 75 mK. By properly biasing them electrically (using "electrothermal feedback"), they sit right on the edge of their transition. If any extra thermal energy gets dumped into the meander, a section of it is driven "normal". This leads to a detectable voltage pulse. At the same time, because that section now has higher resistance, current flow through there decreases, allowing the section to cool back down and go superconducting again. By having very thin W lines, their heat capacity is very small, and this feedback process (recovery time) is fast. A nuclear recoil produces a bunch of phonons which propagate in the crystal with slightly varying sound speeds depending on direction. By having an array of such meanders and correlating their responses, it's possible to back out roughly where the recoil event took place. (They had an image on the cover of Physics Today back in the 90s some time showing beautiful ballistic phonon propagation in Si with this technique.) Moreover, there is a small DC voltage difference between the transition edge detectors and the ground plane. That means that any charge dumped into the detector will drift. By looking for current pulses, it is possible to determine which recoil events came along with charge deposition in the crystal. 
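To make the electrothermal feedback concrete, here is a toy Euler-stepped model of a voltage-biased transition-edge sensor (every number below is an illustrative placeholder, not a CDMS parameter): Joule heating V²/R(T) balances the heat flow to the bath, and because R rises steeply with T through the transition, an energy deposit transiently lowers the Joule power, so the device relaxes back to its operating point.

```python
import math

# Illustrative placeholder parameters (not real device values)
Tc, dTc = 0.075, 0.001   # transition temperature and width (K)
Rn = 1.0                 # normal-state resistance (ohm)
Tbath = 0.050            # bath temperature (K)
C = 1e-12                # heat capacity of the film (J/K)
G = 1e-10                # thermal conductance to the bath (W/K)

def R(T):
    # Smooth model of the superconducting transition
    return Rn / (1 + math.exp(-(T - Tc) / (dTc / 4)))

# Choose the bias voltage so Joule heating balances bath loss mid-transition
V = math.sqrt(G * (Tc - Tbath) * R(Tc))

def evolve(T, steps, dt=1e-6):
    for _ in range(steps):
        P_joule = V ** 2 / R(T)                  # self-heating from the bias
        T += dt / C * (P_joule - G * (T - Tbath))
        yield T

T = Tc
*_, T = evolve(T, 5000)   # settle to the operating point
T_op = T

T += 2e-16 / C            # an energy deposit heats the film up the transition
*_, T = evolve(T, 5000)   # R rises, Joule power drops, the film cools back

print(abs(T - T_op) < 1e-5)  # True: feedback restores the operating point
```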
The CDMS folks have a bunch of these slabs attached via a cold finger to a great big dilution refrigerator (something like 4 mW cooling power at 100 mK, for those cryo experts out there) up in an old salt mine in Minnesota, and they've been measuring for several years now, trying to get good statistics.
To get a flavor for how challenging this stuff is, realize that they can't use ordinary Pb-Sn solder (which often comes pre-tinned on standard electronic components) anywhere near the detector. There's too high an abundance of a radioisotope of Pb that is produced by cosmic rays. They have to use special solder based on "galley lead", which gets its name because it comes from Roman galleys that have been sunk on the bottom of the Mediterranean for 2000 years (and thus not exposed to cosmic rays). I remember as a grad student hearing an anecdote about how they deduced that someone had screwed up and used a commercial pre-tinned LED because they could use the detector itself to see clear as day the location of a local source of events. I also remember watching the challenge of finding a wire-bonder that didn't blow up the meanders due to electrostatic discharge problems. There are competing techniques out there now, of course.
Well, it'll be interesting to see what comes out of this excitement. These are some really careful people. If they claim there's something there, they're probably right.
The basic challenge in dark matter detection is that weakly interacting particles are, well, very weakly interacting. We have all kinds of circumstantial evidence (rotation curves of galaxies; gravitational lensing measurements of mass distributions; particular angular anisotropies in the cosmic microwave background) that there is a lot of mass out there in the universe that is not ordinary baryonic matter (that is, made from protons and neutrons). The dark matter hypothesis is that there are additional (neutral) particles out there that couple only very weakly to normal matter, certainly through gravity, and presumably through other particle physics interactions with very small cross-sections. A reasonable approach to looking for these particles would involve watching for them to recoil off the nuclei of normal matter somehow. These recoils would dump energy into the normal matter, but you'd need to distinguish between these events and all sorts of others. For example, if any atoms in your detector undergo radioactive decay, that would also dump energy into the detector material's lattice. Similarly, if a cosmic ray came in and banged around, that would deposit energy, too. Those two possibilities also deposit charge into the detector, though, so the ability to identify and discount recoil events associated with charged particles would be essential. Neutrons hitting the detector material would be much more annoying.
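The charge discrimination described above can be sketched numerically: electron recoils (from radioactive decays and charged cosmic-ray secondaries) ionize the crystal efficiently, while nuclear recoils produce mostly phonons and comparatively little charge, so the ratio of the charge signal to the phonon signal separates the two populations. Here's a toy version with a made-up yield cut, not any real CDMS calibration:

```python
def classify_event(phonon_keV, charge_keVee, yield_cut=0.5):
    """Toy ionization-yield discriminator (illustrative threshold only).

    Electron recoils ionize efficiently, so charge/phonon is near 1.
    Nuclear recoils (neutron or WIMP candidates) ionize poorly, so
    the ratio is much smaller.
    """
    if phonon_keV <= 0:
        raise ValueError("phonon signal must be positive")
    ionization_yield = charge_keVee / phonon_keV
    return "nuclear recoil" if ionization_yield < yield_cut else "electron recoil"

# A gamma depositing 10 keV of phonon energy also yields a large
# charge signal; a nuclear recoil of the same phonon energy does not.
print(classify_event(10.0, 9.5))   # background-like event
print(classify_event(10.0, 2.5))   # candidate-like event
```

Real analyses are far more involved (energy-dependent yield bands, surface-event rejection), but this is the basic idea behind vetoing charged-particle backgrounds.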
The CDMS detectors consist of roughly centimeter-thick slabs of Si (ok) and Ge (better, because Ge is heavier and therefore has more nuclear material), each with an electrical ground plane (a very thin low-Z metal film) on one side and an array of meandering micro-scale tungsten wires on the other. The tungsten meanders are "superconducting transition edge bolometers". The specially deposited tungsten films have a superconducting transition somewhere near 75 mK, and by properly biasing them electrically (using "electrothermal feedback"), they sit right on the edge of that transition. If any extra thermal energy gets dumped into a meander, a section of it is driven "normal", leading to a detectable voltage pulse. At the same time, because that section now has higher resistance, the current flowing through it decreases, allowing the section to cool back down and go superconducting again. Because the W lines are very thin, their heat capacity is tiny, and this recovery is fast.
A nuclear recoil produces a bunch of phonons that propagate through the crystal with slightly different sound speeds depending on direction. By having an array of such meanders and correlating their responses, it's possible to back out roughly where the recoil event took place. (Physics Today had a cover image sometime in the '90s showing beautiful ballistic phonon propagation in Si imaged with this technique.) Moreover, there is a small DC voltage difference between the transition edge detectors and the ground plane, so any charge liberated in the detector will drift. By looking for current pulses, it is possible to determine which recoil events came along with charge deposition in the crystal.
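The electrothermal feedback loop above can be sketched with a toy simulation. Assuming a smooth (logistic) R(T) transition and entirely illustrative device parameters, not actual CDMS values: a voltage-biased film warmed by an energy deposit sees its Joule heating V²/R drop as R rises, which pulls it back to the transition edge.

```python
import math

def r_of_t(T, Tc=0.075, width=0.001, Rn=1.0):
    """Smooth superconducting transition: R near 0 below Tc, Rn above."""
    return Rn / (1.0 + math.exp(-(T - Tc) / width))

def simulate_pulse(E_dep=1.6e-15, dt=1e-7, n_steps=5000):
    """Euler-step a voltage-biased TES through one ~10 keV energy deposit.

    Illustrative parameters only: C ~ pJ/K heat capacity, G ~ nW/K
    thermal link to a 40 mK bath, transition at 75 mK.
    """
    C, G, T_bath, Tc = 1e-12, 1e-9, 0.040, 0.075
    # Bias voltage chosen so Joule heating balances bath cooling with
    # the film sitting mid-transition (R = Rn/2 at T = Tc).
    V = math.sqrt(0.5 * G * (Tc - T_bath))
    T = Tc                         # start on the transition edge
    T += E_dep / C                 # recoil dumps its energy as heat
    currents = []
    for _ in range(n_steps):
        R = r_of_t(T)
        P_joule = V * V / R        # heating falls as R rises: feedback
        T += dt * (P_joule - G * (T - T_bath)) / C
        currents.append(V / R)
    return T, currents
```

Running this, the deposit kicks the film ~1.6 mK above the transition, the bias current dips while the film is resistive, and the film settles back onto the transition edge within a few hundred microseconds; the small heat capacity is what makes that recovery fast.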
The CDMS folks have a bunch of these slabs attached via a cold finger to a great big dilution refrigerator (something like 4 mW cooling power at 100 mK, for those cryo experts out there) down in an old iron mine in Minnesota, and they've been measuring for several years now, trying to get good statistics.
To get a flavor for how challenging this stuff is, realize that they can't use ordinary Pb-Sn solder (which often comes pre-tinned on standard electronic components) anywhere near the detector; there's too high an abundance of a radioisotope of Pb that is produced by cosmic rays. They have to use special solder based on "galley lead", so called because it comes from Roman galleys that have sat on the bottom of the Mediterranean for 2000 years (and thus have been shielded from cosmic rays). I remember as a grad student hearing an anecdote about how they deduced that someone had screwed up and used a commercial pre-tinned LED, because the detector itself could see, clear as day, the location of a local source of events. I also remember watching the challenge of finding a wire bonder that didn't blow up the meanders through electrostatic discharge. There are competing techniques out there now, of course.
Well, it'll be interesting to see what comes out of this excitement. These are some really careful people. If they claim there's something there, they're probably right.
Tuesday, December 01, 2009
Scale and perspective
Well, my old friends at AIG now owe $25B less to the US government. For those keeping score at home, that's about 4 National Science Foundation annual budgets, or 0.8 NIH annual budgets. AIG still owes the US government an additional $62B.