A blog about condensed matter and nanoscale physics. Why should high energy and astro folks have all the fun?
Friday, July 18, 2008
I understand that the folks at CERN feel like it's important for people to be aware of the LHC and get excited about it - at this point, it looks like it's going to be the only game in town in a few years for the frontier of high energy physics. Still, the steady stream of publicity (much of it arguing that they're going to unlock the secrets of the universe, prove string theory, find evidence of extra dimensions, etc.) is getting to be a bit much. Today comes this article discussing the cooldown of the magnets for the collider and the detector. Technologically impressive to be sure, but the whole "colder than deep space" angle is pretty lame - people have been able to reach these temperatures for nearly 100 years, and superconducting magnets are used in thousands of MRI machines the world over. We get it - it's a big machine. If this is the level of publicity hounding that's going on before they even have a single piece of data, the coverage of the actual physics runs is going to be really oppressive.
Wednesday, July 16, 2008
Scientists, the media, and desperation
I could've predicted this. Given current energy concerns, it's not at all surprising that the various media are ready to give airtime and column space to wacky stories like this one. The temptation must be irresistible: the public is desperate; the story itself is great TV - the lone inventor, persevering in the face of opposition from those stodgy old scientists; they can even put in quotes from the would-be inventor and the scientists and claim to be covering "both sides". You know the drill: "This conventional scientist says that if he drops this pencil it will fall to the ground. Others disagree! The controversy, up next after this commercial message." News flash: sometimes it doesn't make any sense to cover "both sides".
I think the part that frustrates me the most is the misperception by part of the public and some of the media that scientists want these alleged breakthroughs to fail. Nothing could be further from the truth! If someone discovered cheap, inexhaustible energy because of a remarkable revolutionary breakthrough, we'd love it - it'd be the most exciting time in science since the quantum revolution. The problem is, though, that keeping an open mind doesn't mean lowering your scientific standards because you'd like to believe the result. I, for one, am not holding my breath about hydrino power.
Sunday, July 06, 2008
slow blogging + pnictide fun
I'll be traveling (work + vacation), so blogging will be slow until July 15 or so.
Before I go, I wanted to point out that the plot continues to thicken regarding the pairing symmetry of the new iron pnictide superconductors. For example, this paper reports scanning SQUID microscopy on a sample of one of the compounds, with no apparent evidence for sign flips in the order parameter that you might expect if the material was, e.g., d-wave like the cuprates. In contrast, this paper argues that scanning tunneling spectroscopy data resemble d-wave expectations. This paper reports photoemission studies showing that the compounds have quite a complicated band structure and suggests that different parts of the Fermi surface may have different phenomenology. That sounds reminiscent of this theory paper, but I haven't read them in detail.
Tuesday, July 01, 2008
What makes an experiment "good"
Recently I've had some conversations with a couple of people, including someone involved in journalism, about what makes a physics experiment good. I've been trying to think of a good way to explain my views on this; I think it's important, particularly since the lay public (and many journalists) don't have the background to judge realistically for themselves the difference between good and bad scientific results.
There are different kinds of experiments, of course, each with its own special requirements. I'll limit myself to condensed matter/AMO sorts of work, rather than high energy or nuclear. Astro is a whole separate issue, where one is often an observer rather than an experimenter, per se. In the world of precision measurement, it's absolutely critical to understand all sources of error, since the whole point of such experiments is to establish new limits of precision (like the g factor of the electron, which serves as an exquisite test of quantum electrodynamics) or bounds on quantities (like the electric dipole moment of the electron, which is darned close to zero as far as anyone can tell, and if it was nonzero there would be some major implications). Less stringent but still important is the broad class of experiments where some property is measured and compared quantitatively with theoretical expectations, either to demonstrate a realization of a prediction or, conversely, to show that a theoretical explanation now exists that is consistent with some phenomenon. A third kind of experiment is more phenomenological - demonstrating some new effect and placing bounds on it, showing the trends (how the phenomenon depends on controllable parameters), and advancing a hypothesis of explanation. This last type of situation is moderately common in nanoscale science.
One major hallmark of a good experiment is reproducibility. In the nano world this can be challenging, since there are times when measured properties can depend critically on parameters over which we have no direct control (e.g., the precise configuration of atoms at some surface). Still, in macroscopic systems at the least, one should reasonably expect that the same experiment with identical sample preparation run multiple times should give the same quantitative results. If it doesn't, that means (a) you don't actually have control of all the parameters that are important, and (b) it will be very difficult to figure out what's going on. If someone is reporting a surprising finding, how often is it seen? How readily is it reproduced, especially by independent researchers? This is an essential component of good work.
Likewise, clarity of design is nice. How are different parameters in the experiment inferred? Is the procedure to find those values robust? Are there built-in crosschecks that one can do to ensure that the measurements and related calculations make sense? Can the important independent variables be tweaked without affecting each other? Are the measurements really providing information that is useful?
Good analysis is also critical. Are there hidden assumptions? Are quantities normalized in sensible ways? Do trends make sense? Are the data plotted in ways that are fair? In essence, are apples being compared to apples? Are the conclusions consistent with the data, or truly implied by the data?
I know that some of this sounds vague. Anyone more eloquent than me want to try to articulate this more clearly?
Monday, June 23, 2008
New comic
Thanks to Tom for having a link on his page to this. Too true! This is also brilliant. Good to have a laugh, despite the passing of George Carlin. One of my favorite Carlin quotes: [W]e have flamethrowers. And what this indicates to me, it means that at some point, some person said to himself, "Gee, I sure would like to set those people on fire over there. But I'm way too far away to get the job done. If only I had something that would throw flame on them."
No "Singularity" for you.
I wasn't going to even mention the idea of a Singularity, but then the IEEE made a point of dedicating an issue of their magazine to the concept. For those who don't know, the term "Singularity" originates with sci-fi author Vernor Vinge, who has written some compelling novels. Proponents of the concept believe that we live in an era of exponentially accelerating technological change, and that at some point (the Singularity) there will be a complete break in the nature of our species and societies, ushering in what some call a transhumanist future. The technologies typically associated with this idea are (1) Drexlerian molecular nanotechnology, so that we can eliminate scarcity by building anything we want anytime we want via (self-reproducing) nanomachines; (2) immortality via nanotechnological or biochemical control over biological processes that lead to senescence; and (3) strong AI, often including the concept of people uploading their minds to constructed hardware. The thing that continues to surprise me about this idea is that so many people seem to take it so seriously.
Hey, I'm all for optimism, and I'm generally bullish on the future of the species despite current scariness and some scientific arguments, but asserting that we will have a transhumanist utopia in twenty or thirty years is a wee bit of a reach, to put it mildly.
Friday, June 20, 2008
New physics building - suggestions? ideas? horror stories?
My university is in the design phase on a new physics building. This is exciting - first, it's a rare opportunity to design new lab space literally from the ground up. Second, new space will make possible some targeted expansion in the experimental directions in our department as well as in the experimental physicsy part of our electrical and computer engineering department.
Anyone out there have suggestions on building design, particularly with regards to laboratory facilities, utilities, HVAC, electrical service, vibrations, etc.? We're already looking at several recently constructed buildings elsewhere to learn lessons about best practices. If you have thoughts on physics buildings that you think are particularly well done (e.g., the electrical wiring system for the labs at the new nano building at Purdue looks extremely clever and well done), or, conversely, specific examples of design ideas that are lousy in practice or implementation, please post in the comments or email me.
Thursday, June 12, 2008
Great scientific workshop
Posting from the scenic Newark International Airport.... I just finished attending the 2008 international workshop, ESPMI-08 - Electronic Structure and Processes at Molecular-based Interfaces, at Princeton University, hosted by Antoine Kahn and David Cahen. For me, this was practically the perfect scientific meeting - about 80 attendees, a mix of theorists and experimentalists, and all the talks were very good and pitched at the right level. I'll write more about this later, but for now, two highlights that show that some things are truly universal.
First, we were having a group discussion about organic photovoltaics and the relevant issues, and it was refreshing to see that everyone, even people who have been thinking about these problems for twenty years, starts out thinking about semiconductor interfaces by drawing the un-coupled materials and then thinking about what happens when they are brought into contact. I know I think this way, but it's reassuring to see that no one can just draw complicated band alignment diagrams freehand.
Second, during this morning's session there was a 1-second brownout/power glitch - the air conditioning shut down and restarted; the computer at the front of the lecture hall rebooted. The part that struck me as amusing was how the Princeton faculty immediately motioned to their grad students/postdocs to run off, or ran off themselves, to check on the lab equipment (particularly the UHV systems). I can totally see myself doing that.
Friday, June 06, 2008
Simple numbers
So, if crude oil futures cost at least $126 per 42-gallon barrel these days, doesn't that imply that the raw starting material for gasoline already costs (once you factor in the time delay between futures contracts and refining) $3/gallon? This suggests to me that the "correct" price for gasoline in the US should be closer to $5-6/gallon, once refining catches up with the futures. (That doesn't even touch on issues about how much of the crude oil pricing is due to speculation vs. actual supply & demand, or how much of this is due to the effective weak dollar policies of the US central bank.)
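The back-of-envelope arithmetic above is easy to sanity-check. A minimal sketch, where the crude price and barrel size come from the post but the added $2-3/gallon for refining, distribution, and taxes is an illustrative assumption:

```python
# Sanity check of the crude-to-gasoline arithmetic in the post.
barrel_usd = 126.0        # crude futures price per barrel (from the post)
gallons_per_barrel = 42   # standard oil barrel

crude_per_gallon = barrel_usd / gallons_per_barrel
print(f"raw crude input: ${crude_per_gallon:.2f}/gallon")  # $3.00/gallon

# Adding a rough, hypothetical $2-3/gallon for refining, distribution,
# and taxes recovers the $5-6/gallon estimate quoted above.
estimate_low = crude_per_gallon + 2.0
estimate_high = crude_per_gallon + 3.0
print(f"implied pump price: ${estimate_low:.0f}-${estimate_high:.0f}/gallon")
```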
Wednesday, June 04, 2008
Plagiarism at the professional level
Remember my discussion of plagiarism? Remember how a couple of readers didn't seem to think that this was necessarily that big a deal, particularly if it was "just" background stuff and not actual data? Well, I'd be curious to know what they think of this case. I hope that someone follows through and notifies the editors at the respective journals. Makes you curious about their other publications, doesn't it?
This week in the arxiv: superconductivity update
Summer writing and travel are eating my blogging time a bit, and I've also agreed to write the occasional nano-related blurb for the ACS. While my posting rate has taken a hit, science has continued to march forward, with a lot of exciting new preprints concerning (relatively) high temperature superconductivity. Here's a sampling....
arxiv:0805.4463 - Matsumoto et al., Superconductivity in undoped T' cuprates with Tc over 30 K
This paper is a perfect example of why materials growers are (unfortunately often unsung) heroes in this field. The authors have come up with a new method for growing cuprate compounds of the form T'-RE2CuO4, where RE is a rare earth from the series (Pr, Nd, Sm, Eu, Gd). Historically these compounds were found to be antiferromagnetic insulators - no superconductivity. In this new work the authors argue that these old results were due to interstitial oxygen leading to pair-breaking. Instead, with the new growth + annealing technique, these compounds are found to exhibit superconductivity with transition temperatures as high as 30 K. These subtleties are why one should always be very careful when looking at suggested compositions in new compounds....
arxiv:0805.4630 - Rotter et al., Superconductivity at 38 K in the iron arsenide (Ba1-xKx)Fe2As2
This is the first paper I've seen (though I may have missed one) that reports superconductivity in a compound related to the new iron arsenide systems but with two iron arsenide layers per unit cell rather than one. Back in the heyday of the cuprates, the same sort of thing happened - people went from compounds with single copper oxide planes to those with multiple planes per unit cell, and transition temperatures went up. Once again we see how rich the materials landscape can be. Update: as anon. in the comments pointed out, this isn't actually the 2-layer version of the compound. Rather, it's analogous to the so-called "infinite layer" version. My mistake.
arxiv:0806.0063 - Wang et al., Very high critical field and superior Jc-field performance in NdO0.82F0.18FeAs with Tc of 51 K
Other exciting features of the new iron arsenide superconductors are their extremely high critical fields and critical currents. If the transition temperatures could be raised a bit (say past 77 K) and the compounds could be made in wire form (certainly not easy in the cuprates; unlikely to be simple in these either since like the cuprates they are brittle), this could be a huge deal for high field magnets and other applications of superconductivity.
arxiv:0805.4616 - Chen et al., The BCS-like gap in superconductor SmFeAsO0.85F0.15
arxiv:0806.0249 - Matano et al., Spin-singlet superconductivity with multiple gaps in PrO0.89F0.11FeAs
These two papers examine two related compounds with different techniques, trying to figure out how the charge carriers in these iron arsenides pair up to form the Cooper pairs that make up the superconducting condensate state. In the former, measurements of Andreev reflection (a process where an electron in a normal metal approaches a superconductor, two electrons actually cross into the superconductor, and a hole is "retroreflected" back into the normal metal, leading to a pronounced feature in the conductance of the metal/superconductor interface) strongly suggest that the samarium compound acts like an ordinary BCS superconductor. That is, each Cooper pair has zero angular momentum (s-wave pairing); this implies that the superconducting gap is uniform in momentum space, with no nodes. In contrast, the cuprates exhibit d-wave pairing, with a superconducting gap that has a four-lobe structure in momentum space and that goes to zero along four particular crystallographic directions.
The second paper uses NMR measurements of the Pr compound to argue instead that there are multiple gaps, and further that the pairing symmetry is p-wave (which has been seen in superfluid 3He and in strontium ruthenate). At first glance, these two results seem to disagree, though (a) they are talking about different materials, and (b) the Andreev measurements are particularly sensitive to the surface, while the NMR measurements are nontrivial to interpret, at least for nonexperts. Well, this is the fun part - stay tuned, and we'll see how this shakes out.
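The distinction between these pairing symmetries is easiest to see as the gap magnitude versus angle around the Fermi surface. A toy illustration (not taken from either paper; the gap scale delta0 is an arbitrary made-up number):

```python
import math

delta0 = 1.0  # gap scale, arbitrary units; purely illustrative

def gap_s_wave(phi):
    """Isotropic s-wave gap: uniform in momentum space, no nodes."""
    return delta0

def gap_d_wave(phi):
    """d_{x^2-y^2} gap: four lobes, vanishing along phi = 45, 135, ... degrees."""
    return delta0 * abs(math.cos(2 * phi))

for deg in (0, 45, 90):
    phi = math.radians(deg)
    print(f"phi = {deg:3d} deg:  s-wave gap = {gap_s_wave(phi):.3f},  "
          f"d-wave gap = {gap_d_wave(phi):.3f}")
```

The d-wave gap goes to zero at 45 degrees while the s-wave gap stays finite everywhere; those nodes (or their absence) are exactly what the Andreev and NMR measurements are trying to detect.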
Monday, May 26, 2008
Cold fusion - same old same old.
Once again (and it seems like this happens every couple of years) someone is claiming "success" in a cold fusion experiment. Basically this fellow has made a cell containing some composite of ZrO2 and nanoscale Pd crystals. The claim is that when this cell is filled to moderate pressures (a few bar) with deuterium gas over a couple of days, the cell gets hot (compared to its surroundings) and stays hot for a while (tens of hours), and that 4He is detected afterward. Furthermore, the claim is that control experiments with ordinary hydrogen do not produce the long-term heating or helium, and that control experiments without the Pd/ZrO2 produce no heating at all. People who know next to nothing about nuclear physics argue that the lack of neutrons (from the D+D goes to 3He + n reaction pathway) or gamma rays is fine, since simple p and n counting lets you have D + D goes to 4He, despite the fact that the 3He reaction is vastly more favored in ordinary fusion. There continues to be no credible mechanism for getting the D nuclei close enough to each other to get fusion. Now, it's entirely possible that there is weird chemistry going on here, but how come in twenty years of people trying to do this stuff there has yet to be a clean, well-designed experiment done by physicists that is reproducible and actually shows anything interesting? It's grating on many levels that this, an anecdotal discussion of nonconclusive experiments, gets touted online through slashdot, gizmodo, digg, engadget, etc. Extraordinary claims require extraordinary evidence.
Sunday, May 25, 2008
This week in the arxiv
Two papers from the past week that caught my eye....
arxiv:0805.3309 - Bunch et al., Impermeable atomic membranes from graphene sheets
This is a nice piece of work from Cornell combining the techniques from three research groups to look at the permeability of single-layer graphene sheets. The authors prepare freely suspended graphene trampolines and apply controlled pressure differences across them. They use scanned probe methods to measure the membrane shape, which ends up being well described by elasticity theory assuming that the elastic modulus for the graphene sheet is about 10^12 Pa (that's big but not unexpected). By watching that shape as a function of time, they can tell how long it takes the pressure inside the chamber (sealed off by the graphene) to equilibrate with the outside environment. Elegant.
arxiv:0805.2414 - Finck et al., Area dependence of interlayer tunneling in strongly correlated bilayer 2d systems at nu(total)=1.
I've written before about two-dimensional electronic systems (2des), and how they are very useful for looking at all sorts of rich physics such as the fractional quantum Hall effect. This experiment looks at a variation on this theme. For a while now it's been possible to make two high quality 2des separated by a thin barrier - thin enough that the charges in one layer can feel the charges in the other layer via the Coulomb interaction. Since like charges repel, if the two layers have the same density of electrons, a favored low energy state would have every electron in the upper layer accompanied by a hole (the absence of an electron) in the lower layer. If the barrier is sufficiently thin, tunneling can take place between the two layers. One fascinating observation has been that this interlayer tunneling, under certain circumstances, can look very much like the kind of Josephson tunneling that one gets between superconductors. One nagging question out there has been whether the very sharp tunneling seen is a bulk effect (and taking place over the whole area where the two layers are tuned to each other) or something else (e.g., an edge effect, like many quantum Hall phenomena). This experiment shows that the tunneling really is proportional to the area, and thus is a bulk effect. This is a tough experiment, requiring great samples, demanding fabrication, and very sensitive measurements at low temperatures.
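The bulk-vs-edge question above comes down to how the tunneling signal scales with sample geometry: a bulk effect grows like the area A, while an edge effect grows like the perimeter, roughly sqrt(A) for similar-shaped samples. A hedged sketch of that scaling test, with entirely invented numbers standing in for real data:

```python
import numpy as np

# Hypothetical illustration of the area-scaling test: fake "measured"
# conductances for samples of different areas (arbitrary units).
areas = np.array([1.0, 2.0, 4.0, 8.0])
conductance = 0.5 * areas  # invented data constructed to be bulk-like

# Fit G = c * A^p on a log-log scale. An exponent p ~ 1 indicates a
# bulk (area) effect; p ~ 0.5 would indicate an edge (perimeter) effect.
p, log_c = np.polyfit(np.log(areas), np.log(conductance), 1)
print(f"scaling exponent p = {p:.2f}")
```

For the bulk-like fake data above the fit returns p = 1, which is the kind of result the actual experiment reports (with far more care and far harder measurements).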
Monday, May 19, 2008
Public service announcement re: cheating
I want to alert faculty colleagues to a website they need to be aware of if they teach, particularly undergraduates. I won't link to them since I don't want to drive up their revenue, but it's called cramster.com, and while they bill themselves as a "24/7 study community", what they do is provide links to scanned solution manuals for many, many textbooks. What this means is, if you teach a course from a reasonably popular book, you need to be aware that students can and often do buy the homework solutions online. As far as physics goes, they have a rather eclectic assortment: lots of intro books, and a few major upper-level ones (Griffiths; Goldstein; Jackson). If you make up a final exam using problems from the textbook, you're opening yourself up to this problem. If your problem sets contribute a lot to the final grade in a course and you use verbatim problems from the book, again, you are almost certainly going to see this on some level. The more you know....
Thursday, May 15, 2008
Now that would speed up sample fabrication.
There's no question that one of these would be useful to have in the lab. Check out the whole catalog of their products - fun for all ages.
Tuesday, May 13, 2008
This week in the arxiv
A few interesting papers this week: two about graphene and one about a weird fluid mechanics effect.
arxiv:0805.1830 - Bolotin et al., Temperature dependent transport in suspended graphene
It's become clear over the last year that a lot of what was limiting the measured electrical transport properties of graphene sheets had to do with interactions between the graphene and the underlying substrate (usually SiO2). Now multiple groups have started preparing suspended graphene membranes (supported around the edges by oxide) overhanging underlying gate electrodes. By ramping up the current through the suspended membrane, the graphene sheet can be resistively heated in vacuum up to a temperature sufficient to desorb residual contaminants, and electronic properties can be measured without substrate effects. In this paper the Columbia group demonstrates that extremely high mobilities are then possible (well over 100,000 cm^2/Vs), and by examining the temperature and gate dependence of the conduction they can understand the scattering mechanisms at work as well as residual disorder in the system. Very clean looking data.
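For context, a standard way to estimate mobility in this kind of gated device (a generic textbook recipe, not necessarily the authors' exact procedure) is the field-effect mobility mu = (1/C_g) d(sigma)/dVg, where C_g is the gate capacitance per unit area. All the device numbers below are illustrative:

```python
import numpy as np

def field_effect_mobility(Vg, sigma, C_g):
    """Field-effect mobility mu = (1/C_g) * d(sigma)/dVg,
    from a linear fit of sheet conductivity vs. gate voltage.
    Vg in volts, sigma in siemens per square, C_g in F/m^2.
    Returns mobility in m^2/(V s)."""
    slope, _ = np.polyfit(Vg, sigma, 1)
    return slope / C_g

# Illustrative numbers only: a suspended sheet ~150 nm above the
# gate has a parallel-plate capacitance with a vacuum gap.
eps0 = 8.854e-12                          # F/m
C_g = eps0 / 150e-9                       # F/m^2, hypothetical geometry
Vg = np.linspace(1.0, 5.0, 20)            # volts
sigma = 2e-4 * Vg                         # hypothetical linear response (S/sq)
mu = field_effect_mobility(Vg, sigma, C_g)
mu_cm2 = mu * 1e4                         # convert m^2/(V s) -> cm^2/(V s)
```

One practical point this makes clear: the gate capacitance of a suspended device differs from that of a sheet sitting on oxide, so getting C_g right matters for the quoted mobility.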
arxiv:0805.1884 - Booth et al., Macroscopic graphene membranes and their extraordinary stiffness
The Manchester group has also been very busy. In this paper they show a cute technique to produce large (say 0.1 mm in diameter) graphene sheets in a form that's easy to suspend and handle. Basically instead of abrading or cleaving graphite into graphene on top of oxidized Si, they do so on top of Si coated with a layer of e-beam resist. An additional layer of a different sensitivity resist is put on top and patterned, followed by metal deposition. The metal layer forms a frame that goes around the previously identified graphene sheet, and the metal is then used as a seed layer to deposit a more robust Cu layer via electrochemistry. Finally, the original resist layer is dissolved, freeing the graphene+Cu frame for manipulation. They then further study the mechanical properties of these suspended layers, finding that single sheets of graphene are indeed very stiff - much more so than you might think, since they're 1 atom thick. The technique is elegant, and there is one particularly impressive TEM image. Nice SuperSTEM that they have over there in Cheshire.
arxiv:0805.0490 - Amjadi et al., A liquid film motor
Hat tip to arxivblog for pointing this out to me. These folks at Sharif University in Iran have found that DC electric fields can make soap films flow in very interesting and controllable ways. They suggest a few possible mechanisms for this kind of electrohydrodynamic motion, but conclude that none of them are entirely satisfactory. The paper has a minor rendering problem with Fig. 4, but you should definitely watch the movies on their webpage. Very dramatic! Soft CM physics can be inspiring - here's a visually impressive phenomenon that might actually be useful in fluidic applications, and the whole experiment is simple, elegant, and inexpensive. No exotic apparatus required.
Saturday, May 10, 2008
The fun parts
In contrast to the previous post, there have been some fun parts of the job lately. Today was commencement, which is always amusing - I get to play dress-up and look like a real academic. If only point 4 in this list were true, then commencement would be much more exciting.
In the lab we've had some genuinely weird data come along, and that can be fun, too. In one kind of structure we're observing a phenomenon that is completely reproducible but for which we have essentially no sensible explanation. We've been messing around with this for a month, and every time we come up with a plan, thinking we know what's going on, nature turns around and proves us wrong. Whatever is going on, it seems interesting. When we figure it out enough to write it up, I'll discuss it further here.
Lastly, after a trip to the movies last week I had the shocking realization that Rice is now partnering with Stark Industries. Sweet. I need to get one of those flying suits.
Tuesday, April 29, 2008
Copying text without attribution is plagiarism.
Amazingly, there are graduate-level students out there who do not understand this simple, basic fact. When you're writing a scholarly or scientific document, you never copy other people's words - certainly not complete verbatim sentences - without clear attribution and an indication that you're quoting someone else. You just don't. Ever. Doing so is plagiarism, and as any kind of professional you should know that it's wrong. Yet some students don't seem to get this point, even when they've been told about it explicitly and repeatedly, have actually signed documents attesting that they understand it, and know that the professor can use this amazing tool called Google to figure this sort of thing out.
Just. Don't. Do. It.
Friday, April 25, 2008
Come on, AAAS
I'm a member of the AAAS, in part because I support their various efforts, and in part because I like my subscription to Science. However, at least three times a year, I get junk mail at my house or at my departmental address, asking me if I'd like to join AAAS for the low new-member rate of $99/yr. How can these geniuses not realize that I'm already a member? I have an unusual last name, and they already have both my work and home addresses on file. Can't they tell that Prof. Douglas Natelson and Mr. Douglas Natelson with identical addresses are the same person? They must waste hundreds of dollars in postage and thousands of pieces of paper doing this, since I'm sure I'm not the only one getting these useless mailings. Good grief, folks, just do a sensible search on your mailing database for duplicates.
Thursday, April 24, 2008
AMO physics coolness
I saw two things in Science this week that I found quite interesting. First was a mention in Editor's Choice of this paper from my old stomping grounds at Stanford. The arxiv version is here. The idea is another great example of using essentially table-top physics (if you have a large, stainless steel vacuum chamber and lasers on your table) to test the limits of the Standard Model of particle physics, usually the domain of the high energy folks. Here's the story: there are many weird alternatives to the Standard Model in which things like charge quantization (the idea that charge comes in chunks of exactly -e for electrons, and +e for protons, for example) and charge neutrality are approximate rather than exact, due to the breaking of some far-out symmetries at very high energy scales. This paper points out that this idea can be tested very precisely (to 1 part in 10^28) using interferometry of Bose-condensed atoms. In an optical interferometer, light (consider only one particular color) is split into two beams that take different paths, and then recombined. As light travels on each path, you can figure out how much phase the light waves accumulate by dividing the pathlength by the wavelength (and multiplying by 2 pi if you want your phase to be in radians). The intensity when the beams are recombined is proportional to the cosine of the phase difference between the paths. This can be an incredibly precise way of measuring relative path lengths, and is essential to lots of modern technology. In the proposed experiment, the Bose-condensed atoms act like matter waves, and the idea is to do the same thing. However, in quantum mechanics the phase difference that builds up is related not just to the path length, but also picks up a contribution due to the (integrated) difference in (potential) energy (times time, divided by hbar) between the two paths.
This is the way AMO and neutron interferometry measurements of gravity work: send waves along paths at different heights and recombine them, and the phase difference will include a contribution proportional to (m g h) where m is the mass of the particles, g is the gravitational acceleration, and h is the height difference. In the proposed experiment the atom waves are sent through regions of different electrostatic potential (voltage). If the atoms aren't exactly neutral, the voltage will couple to their charge and lead to a phase difference that would otherwise be absent. It's very elegant, and may be a way to test advanced high energy ideas without TeV particle accelerators.
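To get a feel for the sensitivity of such measurements, here's a back-of-the-envelope evaluation of that gravitational phase, m g h T / hbar. The height difference and interrogation time below are my own illustrative choices, not from either paper:

```python
# Back-of-the-envelope matter-wave phase from gravity:
# delta_phi = m * g * h * T / hbar, where the two arms differ
# in height by h for an interrogation time T.
hbar = 1.054571817e-34   # J s
m_rb = 1.443e-25         # kg, mass of a 87Rb atom (approximate)
g = 9.81                 # m/s^2
h = 1e-3                 # 1 mm height difference (illustrative)
T = 1e-3                 # 1 ms interrogation time (illustrative)

delta_phi = m_rb * g * h * T / hbar   # radians
# Even for these modest parameters delta_phi comes out of order
# 10^4 radians, which is why atom interferometers are such
# sensitive probes of tiny energy differences between the arms.
```

The same arithmetic with a voltage difference replacing m g h shows why even a minuscule residual atomic charge would produce a detectable phase shift.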
The second bit that I read was this article about the race to use cold fermionic atoms trapped in optical lattices as a means of implementing condensed matter models of interesting systems (e.g., the Hubbard model of high-Tc superconductors). The theoretical models are computationally nightmarish to solve exactly, in large part because of the Fermi-Dirac statistics problem that the correct many-body wavefunctions must pick up a minus sign if the positions of any two electrons are swapped. The plan is to implement what are basically analog computers - cold atom systems that can be poked, prodded, and tuned - to map out the solutions. Using tunable model systems to explore strong correlations in quantum matter also happens to be the focus of Rice's Keck Program in Quantum Materials. (One note for regular commenter Sylow: now do you believe me that there is a DARPA program on this?)
Sunday, April 20, 2008
Career comments
Well, it's that time of the year again. Lots of blogging (here , here, here, here) about advice to tenure-track faculty (and other interested parties) about the tenure process. I've decided to dust off a post I originally made last May, with a few revisions and additions, to contribute to the discussion.
In terms of the job pipeline, the biggest cut in population happens when trying to get a faculty position, not at the tenure stage. In reasonable departments, no one is happy when a tenure promotion case fails. Good departments (and schools and universities) try very hard to filter at the hiring level and give their faculty the resources they need to succeed. I can only think of two or three places (in physics anyway) that historically have had a "sink or swim" attitude (that is, hiring a junior person in an area today means that seven years from now the university wants the best senior person in the world in that area - being in-house is no advantage), and I'm not sure that's even true anymore.
Generally advice is not in short supply, though good advice can be. Many institutions are setting up official mentoring efforts to ensure that junior candidates have people to talk to about these issues. A colleague of mine found several nice documents online about this issue of advice-giving and receiving. This one (pdf), from the ADVANCE program at the University of Michigan, is particularly good. I am hardly in a position to give too much sage advice about tenure, and what follows below is largely common sense. Obviously the situation is different in various disciplines and at different universities, but here's some basic points that I think should be considered. I'm sure I'll leave things out - feel free to chide me in the comments.
Understand the process. Find out how the tenure process works at your institution. This should be written down in a faculty handbook. Talk to your department chair, your faculty mentor (if your department has such a thing) or senior colleagues. Understand the timeline. Get a sense of the weight that your institution places on the different components of the job (see below). Does the departmental vote carry a lot of weight (as it usually does at Rice, for example), or are the deans or the university promotions and tenure (P&T) committee commonly overriding departmental decisions?
The process probably goes something like this: the candidate is hired for a 4-year tenure-track appointment, with some kind of annual reviews and a more major renewal review in year 3 or 4. (This gives the university a chance to end the process early if there's a major problem with an assistant prof, and forces departments to give some concrete feedback to the assistant prof about how they stand.) In the summer before year 6 (at most places) the candidate is asked to put together a dossier (complete CV, reprints of papers, a summary of funding, a statement about university service, a statement about teaching, a summary of research accomplishments, etc.) and suggest names for external evaluators. The department comes up with additional names for external evaluation, and sends the full dossier to some mix of the external people. Eventually these external letters come back, and the department reads them, puts the whole package together, and there's a vote of the tenured faculty (in October or November) about whether to recommend the assistant prof for tenure. The departmental recommendation then goes to the cognizant dean, and from there to the university P&T committee (which generally would have people from all sorts of disciplines on there, from bio to French lit). Sometimes P&T committees or deans can request more external letters, and they get copies of teaching evaluations, etc., and may meet directly with department chairs. Eventually the P&T committee makes its decisions (in late spring) and the candidate finds out. That decision is finally signed off by the president of the university and the board of trustees.
The research component. To get tenure, you actually need to be getting science done. There's no sure-fire recipe for success here, but let me make a few suggestions:
- Have a mix of projects that range from easier to high-risk/high-reward. Having only one major project can be very risky, particularly if it takes five years to get any results. One key element of getting tenure is that people in your community need to know who you are, what you've done, and what you've been doing that's really yours - new stuff from your professorial position, not rehash of your thesis or postdoc work.
- Make sure that your colleagues know what you're doing. Your colleagues are going to need to understand your work at least on some level, and particularly for hard projects, they will need to have some idea why it may take four years before a paper comes out.
- Have backup plans. High-risk things may not succeed (no kidding). Make sure, for your students' sake and yours, that you have thought out the projects well, so that even if you don't achieve the BIG goal, you are still learning useful things that are worth publishing.
- Have a high attempt frequency for funding. If there's literally only one agency in the world that funds your work, that's risky and unfortunate. Make sure that you know what your options are for funding sources. Call up program officers. Ask to get a chance to serve on review panels - you'll learn a huge amount about writing proposals that way! Know if there are state funding opportunities. Think ahead about private foundations (e.g., Research Corporation).
- Do some self-promotion but don't sell your soul. If your external evaluators don't know who you are, that's the kiss of death. Make sure you give talks at meetings. See what you can do about getting invited to give seminars at other schools. Yes, this is one issue where "well-connected" people really benefit, but if you go to meetings and get to know the people in your field, it's not that bad. Get involved in your own department's seminar series, and invite in people that you'd like to meet and talk to.
- Publish good stuff. This is always the tricky bit, and people joke about the "least publishable unit". Still, holding back everything for the one big Nature paper that may not happen is not necessarily the best strategy, for you or your students.
- Get stuff going relatively quickly. Think about the timescales associated with publications and citations. Even if you do the greatest piece of work in your field ever, if you don't get it out the door at least a year or two before your tenure review (that is, a year before letters get sent out to external reviewers), it's going to be very hard for that work to have had much of an impact by the time of the decision.
The mentoring component. This is related to both of the above. It definitely helps make the case that you are running a successful research and education enterprise if you can actually graduate students. This means making sure that they are making real progress, publishing papers (and/or patents), and ideally enabling them to land a good job (postdoc or industry) afterwards. This is not just altruistic; it's also enlightened self-interest - if you build a reputation for getting good people out in a reasonable timeframe and with real job prospects, it will help in graduate and postdoc recruiting in the long term. Managing a group isn't easy, and every student is different. If you feel like you're having trouble, definitely find colleagues to ask for advice! Every faculty research mentor has been there.
The service component. Do a decent job in departmental and university service. Don't let it eat all your time, but get involved in things that matter to you. It's also a good way to get to know your administrators and people in other departments. I'm not suggesting currying favor - just be a solid citizen. Becoming known as a pain-in-the-ass on this is not going to help you on any level.
Common sense. People argue about whether blogging can hurt your tenure chances. Blogging is only one example of a public forum, though. Use some common sense. Publicly badmouthing your institution, colleagues, administrators, etc. is not a good idea. (I'm not talking about hushing up legitimate grievances - I'm saying don't antagonize people gratuitously.) Remember, in a practical sense, the tenure decision is based not just on your scientific quality, but on whether you are the kind of colleague that people want to have for the next n years.
Don't panic. At some point, you just have to buckle down and do the work without inducing a psychodrama about the process. If you've been in a graduate program, you've undoubtedly known someone who, rather than actually solving their research problems, spent their time kvetching about how nothing was working. Don't do that to yourself. Remember, you're doing this because you enjoy it intellectually (at least, some of the time!).
Sunday, April 13, 2008
Talk this week
First, to the readers of this blog, thanks for the recent trend of posting informative links in the comments. I think that this really adds something to the discussion. One tip: in the comments you're allowed to use html tags, so if you want to post a link with a long URL, you may want to write the html that actually posts the link.
This week we had a fun physics colloquium given by Paul Canfield of Ames Lab and Iowa State University. He spoke about the discovery and characterization of new materials, with a particular emphasis on heavy fermion compounds, but with a significant discussion of MgB2 as well. His main purpose was to convey how physicists like him think and approach problems, and I think he succeeded. He also had a funny slide called "Periodic Table According to Most Physicists" that looked roughly like this:
H H' (almost like hydrogen)
H'' H''' C H'''' H'''''
Si
Metals Cu
Au
Elements that may not even be real
|<---Not on the final exam --->|
|<---Stuff for bombs -------->|
Amusing stuff.
Friday, April 11, 2008
Your tax dollars at work.
Like many of my colleagues, I review lots of grant proposals. Recently I was asked to review one for the Department of Energy, and when I said 'yes', they sent me the proposal. By Federal Express. On a CD. Now, you might wonder why, if they don't mind me ending up with this in an electronic format anyway, and if they want me to send in my review electronically, they wouldn't just handle this purely over the web, and save the money and environmental impact of shipping a CD from northern VA to Houston. Ahh well.
Friday, April 04, 2008
Talks this week
I saw some very good talks this week. First up was a physics colloquium by Stuart Parkin from IBM Almaden. In some very real sense, you're reading this because of Parkin - he and his team were the people who first took giant magnetoresistance (GMR) and developed it into a useful technology in the read heads of hard disk drives. The remarkable explosion in data storage capacity over the last decade and a half is largely due to this advance, possibly the best example of true nanotechnology (the film thicknesses involved in spin valves are a few nm) making it out of the lab and into manufacturing and consumer products. Their later work on tunneling magnetoresistance has also now been transferred into hard drive read heads. In fact, TMR heads with MgO tunnel barriers between ferromagnetic layers can have room temperature resistance changes of several hundred percent in the presence of few-Oersted fields like those from drive media. After reviewing all of this at just the right level, Parkin went on to talk a bit about his latest ideas and work on high performance "racetrack" memory. In this idea, a single transistor cell can be responsible for reading and writing tens of bits of memory (as opposed to one in current RAM designs). The bits are stored as domain walls in a ferromagnetic nanowire. The walls can be detected through their local change in the magnetization, and they can be moved by pulsing spin-polarized currents through the ferromagnetic wires. All in all, a great colloquium - one of my colleagues wished that we'd taped it so that we could show it to job candidates as an example of a real general audience colloquium.
There was also a workshop on campus this week about probabilistic and nanoscale computing that featured some nice talks. One of the best was by Tom Theis, head of physical sciences research at IBM, who reviewed their latest developments and the future of the field-effect transistor from his perspective. Anyone who has alternative ideas in mind about computing technologies really needs to do their homework by listening to someone like Theis, who has perspective about the science as well as the economic and manufacturing issues.
Monday, March 31, 2008
Yet more superconductivity fun.
Even more excitement on the novel FeAs-based superconductors. By my count there are seven more papers (0, 1, 2, 3, 4, 5, 6) on tonight's arxiv update, including two different groups demonstrating transition temperatures exceeding 50 K in the Nd version of the compound, and one showing 52 K in the Pr version. Gee, this makes my comments here look prescient. For my next trick, again guided by the periodic table, I suggest that we'll see more rare earth variations. For example, there's no reason not to try the comparatively stable actinides (thorium, protactinium, uranium), or the mostly-filled-f-shell lanthanides (thulium, ytterbium, lutetium) as opposed to the mostly-empty-f-shell ones (La, Ce, Pr). Given that pressure boosts Tc (see paper 1 above), one could try duplicating the effect of pressure by creating more internal stress within the lattice via substitutions of larger atoms between the FeAs layers. Of course, it's easy for me to say this stuff, since I don't actually have to make the compounds....
It's also worth noting that early photoemission data and heat capacity measurements on a couple of the compounds strongly suggest that the superconducting gap is zero (or darned close to it) on at least part of the Fermi surface. This is the case for the cuprates, and exactly not the case in conventional low temperature superconductors.
Sunday, March 30, 2008
Outreach can be fun.
Yesterday I did an "Ask a scientist" event at the Children's Museum of Houston, as part of their Nano Days events. It was fun. I started with the obligatory "sizes of things" slide, and then talked a little about scientists (who want to figure out how things work) and engineers (who want to take what we've learned and make new, useful things). To emphasize that nanoscale tech was all around them, I showed the guts of a Nintendo Wii, including the cell processor and the little accelerometer chip that the Wii uses to figure out what you're doing with the controller. I had made a demo accelerometer prop out of PVC pipe and springs that was a big hit. The most fun part was when I invited the kids up to help me take apart a Wii-mote, while I explained that sometimes breaking things down is the best way to figure out how things work. The high point: an eight-year-old whispering "Awwwwsome!" to his friend after playing with the guts of the Wii-mote. (Note: opening up any Nintendo gear requires this kind of screwdriver....)
A nice piece of physics with an elegant consequence
Kuzmenko et al. from Geneva have a PRL out this week titled "Universal Optical Conductance of Graphite". This is a pretty physics result, where the authors find theoretically and demonstrate experimentally that the optical conductivity of graphite is quantized - each graphene sheet has, at optical frequencies, a sheet conductance of (π/2)e²/h. This is a consequence of the particular band structure of graphene, and I think it's rather impressive that this is robust even when there are multiple graphene layers. You might imagine, a priori, that the interlayer coupling would kill this kind of universality.
Nair et al., in a preprint, arrive at essentially the same result, but present the data in a much more dramatic way that does a great job of emphasizing the consequences of the physics. Many people don't have a good intuition for what optical conductivity means. Nearly everyone, though, has a decent sense of what optical absorption means. These folks demonstrate that the quantized optical conductivity implies that the white light absorption of graphite is quantized (!) in units of the fine structure constant (!!), so that each additional graphene layer absorbs 2.3% of the light incident on it, even though each layer is just one atom thick. The figures in the paper, particularly the last one, do a great job of making this point.
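As a quick sanity check (my own numbers, not taken from either paper), the 2.3% figure is just πα, and for weakly coupled layers you can simply multiply transmissions:

```python
import math

# Fine-structure constant (CODATA value, dimensionless)
alpha = 1 / 137.035999

# Each graphene layer absorbs pi * alpha of normally incident light
absorption_per_layer = math.pi * alpha
print(f"pi * alpha = {absorption_per_layer:.4f}")  # ~0.0229, i.e. ~2.3%

# Transmission through N layers, treating each layer as an independent
# absorber (an illustrative assumption that ignores interlayer interference)
for n in (1, 2, 5, 10):
    t = (1 - absorption_per_layer) ** n
    print(f"{n:2d} layer(s): transmission ~ {t:.3f}")
```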
The take-home message about presentation: having a compelling physics story to tell is good, and casting it in terms that a general audience can appreciate with some intuition is even better.
Tuesday, March 25, 2008
The superconductivity fun continues
Another bunch of papers on the arxiv (1, 2, 3, 4, 5). The last one is particularly interesting - the group reports that replacing lanthanum with samarium boosts Tc up to 43 K. This is the first non-cuprate with a transition temperature that high. Again, it's a long way from room temperature, but the fact that there's a system besides the cuprates that shows transition temperatures this high is exciting. This may give us more clues to the mechanism at work - what is the normal state like? Are these doped Mott insulators? Do they have a pseudogap? What is the pairing symmetry?
Friday, March 21, 2008
A new (apparently unconventional) family of superconductors
I heard about this at the March Meeting, and now it looks like things are picking up steam. There are a number of papers that have started appearing on the arxiv (1, 2, 3, 4, 5, 6, 7 update 8, 9, 10) about a new high temperature superconductor based on the parent compound LaOFeAs. This material has FeAs planes rather reminiscent of the CuO planes in the copper oxide superconductors. It's quite unusual to have an iron-based superconductor, since ferromagnetic correlations are usually associated with killing ordinary superconductivity. More exciting is the fact that this is not directly related to the cuprates, and when doped with electrons (by replacing some of the oxygen with fluorine) it has a clear superconducting transition at 28 K. There are indications already that this is an unconventional superconductor, and third-hand rumors suggest that higher Tc values are on the way. It'll be interesting to see where this leads!
Wednesday, March 19, 2008
Endowed lectureships
One nice thing about being in a good department that rarely gets discussed is the quality of visitors. We're able to get a good stream of top-notch speakers for colloquia and seminars, and that is very important for maintaining an intellectually rich atmosphere for both the faculty and the students. On top of the usual calendar, we also have a couple of named, endowed lectureships. For example, every year we have a public lecture (followed the next day by a physics colloquium) in honor of William V. Houston (pronounced "how-ston"). The Houston lecturers are usually Nobel Laureates and their visits are very fun. This week we had George Smoot of cosmic microwave background fame, and once again I was reminded how much more we know about cosmology now than when I entered college.
Friday, March 14, 2008
March APS Meeting wrapup
I returned yesterday evening from the March Meeting, and spent much of today helping out with our graduate recruiting weekend for both my department and the applied physics graduate program. Hence the delayed blogging.
My last day at the March Meeting was spent largely flitting from session to session. I saw a very nice pair of talks by David Cobden and one of his students from Washington, showing measurements of the metal-insulator transition in VO2 nano-beams. Vanadium dioxide is allegedly a Mott insulator in its low temperature state, meaning that the on-site repulsion of the d orbitals of the vanadium is so strong and the electronic population is just right so that the whole correlated system is frozen. A bit above room temperature (around 65 °C) VO2 becomes metallic, and there's been a lot of interest in understanding the transition, which is accompanied by a lattice distortion. In the new work, suspended beams of the oxide are observed in an optical microscope while the transition is examined. There is optical contrast between the two phases, so one can determine how much of the beam is in each phase in the coexistence region. Moreover, the elastic properties of the beam allow them to infer much information about the phase diagram for the transition, and offer some hints in conjunction with conductance measurements that the metal/insulator transition may be separate from the structural transition.
After this, I went off to the session on charge and orbital ordering to give my own talk about our magnetite results. Then I headed over to a session on molecular electronics. Finally, I ended up over near a focus session on nanotechnology, where there were a couple of nice talks on fabrication methods.
Overall, it was a good meeting - as good as these things usually are. Most of the talks that I saw were pretty decent, and I had some useful conversations with lots of colleagues. Only once or twice did it occur to me that sessions could be more pleasant if someone replaced the usual oven timer for pacing talks with either a giant gong or perhaps one of those big hooks used to pull people off stage in bad vaudeville skits.
Thursday, March 13, 2008
March APS Meeting III
Day 3 in New Orleans continued to be interesting, though I missed some talks so that I could have conversations with a few people, including my program officers from a couple of funding agencies. It's never a bad idea to make sure that the program officers know what you've been doing with their resources.
I started out the day by catching an invited talk by Doug Scalapino talking about his take on the binding "glue" in the high-Tc superconductors. Scalapino uses "glue" to refer to the retarded (time-delayed) interaction that leads to pairing of the electrons. In the low temperature superconductors, the glue in this sense is the retarded phonon interaction - in a sense, one electron leaves behind a lattice vibration that slightly deforms the ion charge distribution, leading a second electron of opposite momentum to feel a slight residual attraction to the first electron. The screened Coulomb interaction between the electrons is effectively instantaneous (and repulsive). In the high-Tc case, it's not clear what the glue is. Scalapino would argue that it's a spin fluctuation interaction; Phil Anderson would probably argue that there is no important glue in this sense of the term.
I then chaired my session, which was fun but tiring. One particularly cute experiment was from the Weig/Kotthaus group at Munich. They are trying to use nanomechanical resonators as charge shuttles. The idea is a bit like a bucket brigade. Have a metal island be suspended on a resonant beam between a source and a drain electrode. When set up ideally and driven at resonance, the island will swing back and forth between the source and drain like the clapper between the bells of an old alarm clock. When the island gets close to the source, an electron can tunnel onto the island. Ideally Coulomb blockade would ensure that it's one and only one electron. Then the island can swing over to the drain electrode, and drop off that electron. The experiment was elegant - they make many resonators at once and wire them all up in parallel. The clever bit is that they have each resonator tailored with a different mass, so that they can selectively drive just the one that they want. They drive mechanically, by shaking the whole chip back and forth, to avoid electrical crosstalk trouble. It should be very nice when they can get the structures even smaller and colder, to see strong Coulomb blockade effects.
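A back-of-the-envelope way to see the currents involved (my own illustrative numbers, not figures from the talk): if Coulomb blockade really enforces one electron per mechanical cycle, the shuttled current is just I = n·e·f:

```python
e = 1.602176634e-19  # electron charge, C

def shuttle_current(freq_hz, electrons_per_cycle=1):
    # One electron picked up at the source and dropped at the drain
    # per mechanical oscillation gives I = n * e * f
    return electrons_per_cycle * e * freq_hz

# Illustrative value: a few-MHz nanomechanical resonator (assumed)
# in the ideal one-electron-per-cycle Coulomb-blockade limit
f = 5e6  # Hz
print(f"I = {shuttle_current(f) * 1e12:.2f} pA")  # sub-picoamp scale
```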
Tuesday, March 11, 2008
March APS Meeting II
The March Meeting continues. Other topics that seem relatively hot (based on the number of abstracts) compared to previous years include thermoelectrics and ultracold gases and fluids. The latter are really at the border between condensed matter and atomic/molecular/optical physics, and it's interesting to see the merger of the two disciplines. While the ultracold gases provide an exquisitely clean, tunable environment for studying some physics problems, it's increasingly clear to me that they also have some significant restrictions; for example, while optical lattices enable simulations of some model potentials from solid state physics, there doesn't seem to be any nice way to model phonons or the variety of real-life crystal structures that provides so much rich phenomenology.
Anyway, I saw some very pretty talks today. Taking the prize for coolest graphics in a presentation were definitely two talks from an invited session on Kondo physics. The first was by Andreas Heinrich, giving an overview of IBM Almaden's use of scanning tunneling microscopy to examine magnetic anisotropy and Kondo physics at the single atom level. The second was by Hari Manoharan of Stanford, who presented three experiments, the most elegant of which involved using STM of magnetic atoms to demonstrate that sometimes it's possible to really extract phase information about superpositions of quantum states. Basically he showed that one could make a designer system (an elliptical corral that confines the Cu(111) surface states) and then use STM spectroscopy based on the Kondo properties of Co atoms on the Cu(111) surface to identify specific superpositions of the eigenstates of that corral.
Another interesting series of talks took place in a session that I organized, where Lindsay Moore of the Goldhaber-Gordon group at Stanford discussed some recent studies of the so-called "0.7 anomaly". In a 2d electron gas, it is possible to use gates to create a 1d constriction for a small number of electronic modes. This is called a quantum point contact (QPC). In zero magnetic field, as the point contact is pinched off the conductance of the QPC drops in quantized steps of 2e²/h until it falls to zero. The 0.7 anomaly is the appearance of an extra plateau in the conductance at around 0.7 × 2e²/h. People have been bandying about possible explanations for this feature for a while now, and finding new probes to apply is a popular tactic. The following contributed talk was by Alex Hamilton from UNSW, who had looked at the 0.7 anomaly in 2d hole systems. The holes have strong spin-orbit scattering effects that, through the study of the response to applied magnetic fields, allow one to demonstrate convincingly that the 0.7 anomaly has some mechanism related to spin. Nice.
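For reference, here is how the conductance quantum and the anomalous plateau work out numerically (a quick sketch of my own, using CODATA values for the constants):

```python
e = 1.602176634e-19  # electron charge, C
h = 6.62607015e-34   # Planck constant, J*s

# Spin-degenerate conductance quantum for a single 1d mode
G0 = 2 * e**2 / h
print(f"G0 = 2e^2/h = {G0:.4e} S  ->  1/G0 = {1 / G0 / 1e3:.2f} kOhm")

# Quantized plateaus as the QPC opens up, plus the anomalous 0.7 plateau
for n in [0.7, 1, 2, 3]:
    print(f"G = {n} x 2e^2/h = {n * G0 * 1e6:.2f} uS")
```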
Monday, March 10, 2008
March APS Meeting I
Yes, it's that time of the year again, when I get together with 6500-7000 of my closest colleagues and talk physics until our brains are full and it's time to leave. This year the March APS Meeting, the big US national meeting of (mostly) condensed matter physics folks, is in New Orleans. So far the biggest topic at the meeting by a wide margin seems to be graphene, just like last year. There are various divisions of the APS, including the Division of Condensed Matter Physics (DCMP), the Division of Materials Physics (DMP), the Division of Chemical Physics (DCP), and the Division of Polymer Physics (DPOLY). Each division sponsors Focus Topics designed to appeal to their membership and centered around hot ideas of the moment. One challenge in laying out the meeting is coordinating all of their Focus Topic sessions and invited sessions so that we don't end up with what seems to happen every year: head-to-head competition of researchers in a hot field speaking at the same time on similar subjects in different sessions at the meeting. Like this morning, when DMP had "Graphene Transport" at the same time as DCMP's "Electronic Properties of Graphene and Related Structures", or 3.25 hours later when DMP had "Graphene, Graphite, and Related Structures" at the same time as DCMP's "Graphene Transport II". Ahh, coordination.
I spent most of my time in sessions that I helped to organize, and I saw some interesting talks on STM work and single molecule electronic measurement techniques. One talk by Anping Li of Oak Ridge gave me a classic case of stainless steel envy. He has put together a variable temperature 4-probe (!) ultrahigh vacuum scanning tunneling microscope with built-in UHV scanning electron microscope and electron analyzer for scanning Auger microscopy, as well as an integrated UHV deposition chamber with built-in electron diffraction. Wow. Now that's a cool toy! That was followed by a very interesting set of measurements from a group at the University of Tokyo, looking at the electrical properties of truly 2d atomic layers of indium on Si. It's quite fascinating how the temperature dependence of such a film can be changed from metallic (better conduction at low temperatures) to insulating just by the introduction of a very few defects; this is a great demonstration of localization physics.
Sunday, March 09, 2008
Another physicist in Congress
Thanks to a special election to fill the IL-14 seat of former Speaker of the House Dennis Hastert, there is another physicist in Congress, Bill Foster. Good for him. We need more civic scientists.
Friday, March 07, 2008
This week in cond-mat
Super-brief pre-March Meeting blogging. Earlier this week there were five papers that particularly caught my eye on the arxiv. The first three are closely related....
arxiv:0803.0562 - Kiguchi et al., Highly conductive molecular junctions based on direct binding of benzene to Pt electrodes
This paper is from the always impressive van Ruitenbeek group. Here they demonstrate a chemical method of bridging the nanoscale gap between two movable Pt electrodes (in a geometry called a mechanically controllable break junction) by a benzene ring, with direct C-Pt bonds. The result is a junction that has a conductance approaching the conductance quantum, 2e²/h. This is impressive because achieving such strong electronic coupling in single molecule junctions has been challenging in the past without special transport mechanisms (like the Kondo effect). They prove that they have the desired structure by looking at vibrational signatures in the tunneling conductance at finite bias. By comparing regular ¹²C benzene devices and those made with ¹³C, they see a distinct isotopic shift in the vibrational modes - the molecule with heavy carbon has lower vibrational frequencies. They also use sub-gap structure in the tunneling (with the electrodes driven to superconduct by the proximity effect, as in this paper) to show that the conductance comes predominantly from a single, highly transmissive channel. Very nice.
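The size of that isotope shift is easy to estimate (my estimate, not a number from the paper): in a harmonic picture the carbon-dominated mode frequencies scale as 1/√m, so swapping ¹²C for ¹³C softens the modes by a few percent:

```python
import math

# Harmonic oscillator: omega = sqrt(k / m), with k unchanged by isotope
# substitution, so omega(13C) / omega(12C) = sqrt(m12 / m13)
m12, m13 = 12.0, 13.003  # atomic masses in u (13C is ~13.003 u)
shift = math.sqrt(m12 / m13)
print(f"omega(13C)/omega(12C) ~ {shift:.3f}")  # ~0.96, a ~4% softening
```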
arxiv:0803.0582 - Hybertsen et al., Amine-linked single-molecule circuits: Systematic trends across molecular families
This paper summarizes a large and very pretty body of experimental work done by Latha Venkataraman et al., with complementary theory calculations by Hybertsen and collaborators. Using the STM equivalent of a mechanical break junction, these folks have made comprehensive studies of single-molecule conductance by compiling histograms of tens of thousands of conductance measurements in various junction configurations. This is a nice review of the work, and is an invited paper that is part of a forthcoming special issue of Journal of Physics: Condensed Matter. Our group also has a contribution to that issue.
arxiv:0803.0710 - Prodan and Car, Tunneling conductance of amine-linked alkyl chains
This is a new theory paper that examines one subset of the devices mentioned in 0803.0582. The neat thing about this is that this work uses a novel approach to density functional theory to do the transport calculations.
Changing the topic,
arxiv:0803.0719 - Marini et al., Fluctuation-dissipation: response theory in statistical physics
This is a long, comprehensive review article about the deep connection between equilibrium fluctuations and nonequilibrium dissipation. The classic example of this is Johnson-Nyquist noise, the voltage noise in a resistor that results from thermal fluctuations of the electron distribution, and its relationship to the actual resistance that determines dissipation when current flows. I need to find the time to read through this in detail - it looks like a real resource.
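The Johnson-Nyquist example is worth a number or two. A minimal sketch (my own illustrative values, not from the review) of the familiar S_V = 4 k_B T R form of the fluctuation-dissipation theorem:

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(R_ohm, T_kelvin, bandwidth_hz):
    # Voltage noise spectral density of a resistor: S_V = 4 kB T R,
    # so the rms voltage in a bandwidth df is sqrt(4 kB T R df)
    return math.sqrt(4 * kB * T_kelvin * R_ohm * bandwidth_hz)

# A 1 kOhm resistor at room temperature, measured over a 1 kHz bandwidth
v = johnson_noise_vrms(1e3, 300, 1e3)
print(f"Vrms ~ {v * 1e9:.0f} nV")  # ~130 nV
```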
arxiv:0803.0568 - Wenzler and Mohanty, Measurement of Aharonov-Bohm oscillations in mesoscopic metallic rings in the presence of high-frequency electromagnetic fields
This is another experiment in an area that I continue to find interesting, the challenge of inferring information about the quantum coherence of electrons in solids. As the intro to this paper reminds us, there are ambiguities in how various quantum corrections to electronic conduction define the coherence length - the characteristic distance scale that an electron can travel before its quantum mechanical phase becomes ill-defined due to "decoherence mechanisms" (inelastic interactions that somehow change the state of the environment). This paper examines one such correction, the Aharonov-Bohm effect, when electromagnetic radiation is introduced at a frequency related to the coherence length.
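For scale (my own illustrative estimate, not numbers from the paper), the h/e Aharonov-Bohm oscillation period in magnetic field is just the flux quantum divided by the ring area:

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # electron charge, C

phi0 = h / e  # h/e flux quantum setting the AB oscillation period
r = 500e-9    # illustrative mesoscopic ring radius (assumed), m
area = math.pi * r**2

dB = phi0 / area  # field change needed to thread one flux quantum
print(f"h/e AB period for a {r * 1e9:.0f} nm radius ring: {dB * 1e3:.1f} mT")
```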
arxiv:0803.0562 - Kiguchi et al., Highly conductive molecular junctions based on direct binding of benzene to Pt electrodes
This paper is from the always impressive van Ruitenbeek group. Here they demonstrate a chemical method of bridging the nanoscale gap between two movable Pt electrodes (in a geometry called a mechanically controllable break junction) by a benzene ring, with direct C-Pt bonds. The result is a junction that has a conductance approaching the conductance quantum, 2e2/h. This is impressive because achieving such strong electronic coupling in single molecule junctions has been challenging in the past without special transport mechanisms (like the Kondo effect). They prove that they have the desired structure by looking at vibrational signatures in the tunneling conductance at finite bias. By comparing regular 12C benzene devices and those made with 13C, they see a distinct isotopic shift in the vibrational modes - the molecule with heavy carbon has lower vibrational frequencies. They also use sub-gap structure in the tunneling (with the electrodes driven to superconduct by the proximity effect, as in this paper) to show that the conductance comes predominantly from a single, highly transmissive channel. Very nice.
arxiv:0803.0582 - Hybertsen et al., Amine-linked single-molecule circuits: Systematic trends across molecular families
This paper summarizes a large and very pretty body of experimental work done by Latha Venkataraman et al., with complementary theory calculations by Hybertsen and collaborators. Using
the STM equivalent of a mechanical break junction, these folks have made comprehensive studies of single-molecule conductance by compiling histograms of tens of thousands of conductance measurements in various junction configurations. This is a nice review of the work, and is an invited paper that is part of a forthcoming special issue of Journal of Physics: Condensed Matter. Our group also has a contribution to that issue.
arxiv:0803.0710 - Prodan and Car, Tunneling conductance of amine-linked alkyl chains
This is a new theory paper that examines one subset of the devices mentioned in 0803.0582. The neat thing about this is that this work uses a novel approach to density functional theory to do the transport calculations.
Changing the topic,
arxiv:0803.0719 - Marini et al., Fluctuation-dissipation: response theory in statistical physics
This is a long, comprehensive review article about the deep connection between equilibrium fluctuations and nonequilibrium dissipation. The classic example of this is Johnson-Nyquist noise, the voltage noise in a resistor that results from thermal fluctuations of the electron distribution, and its relationship to the actual resistance that determines dissipation when current flows. I need to find the time to read through this in detail - it looks like a real resource.
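To make the Johnson-Nyquist example concrete: the voltage noise spectral density of a resistor in equilibrium is S_V = 4 k_B T R, so the rms noise over a measurement bandwidth follows directly. A minimal sketch with illustrative numbers of my own choosing:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant (J/K)

def johnson_noise_vrms(R_ohm, T_K, bandwidth_Hz):
    """RMS Johnson-Nyquist voltage noise, S_V = 4 k_B T R,
    integrated over the given measurement bandwidth."""
    return math.sqrt(4 * k_B * T_K * R_ohm * bandwidth_Hz)

# A 1 kOhm resistor at room temperature, measured in a 1 Hz bandwidth:
print(johnson_noise_vrms(1e3, 300.0, 1.0))  # ~4 nV
```

The point of the fluctuation-dissipation theorem is that the same R appears in both the noise (equilibrium fluctuations) and Ohmic dissipation - that's the deep connection the review is about.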
Wednesday, March 05, 2008
Reviewing- why, how, and how often?
I review lots of papers and proposals for various journals and funding agencies. While time allocation is a continual challenge, and while there is no good framework for rewarding this kind of professional service, I think it's important to do my fair share for several reasons. First, I'd like someone else out there to do me the same courtesy - well written, thorough, timely referee reports almost always improve the quality of scientific papers. Sometimes it's just a matter of the referee having fresh eyes and a different perspective; a referee can point out that something which seems obvious to you may not be clear to others. Second, reviewing is a way to keep abreast of what's going on out there in the community. Third, reading articles and proposals and having to write reviews is intellectually stimulating - it gets me to think about new things and areas that aren't necessarily my primary interest.
When writing a report, I try to produce something that's actually useful to the authors (as well as the editors in the case of journal articles). I briefly summarize the main points of the paper or proposal to indicate that I've actually read it and understand the key ideas. For a paper, I emphasize my overall opinion of the work. Then I point out anything that I found unclear or any parts of the argument that don't seem supported by the data or calculations, with an eye toward what would improve the manuscript. I rarely reject papers out of hand, since I rarely get manuscripts to review that I think are hopeless (though some are submitted to inappropriate journals). On the other hand, it's pretty rare that I think something is absolutely flawless (though if my comments are minor I don't ask to see the paper again). I truly don't understand why some people submit two-sentence referee reports that are dismissive - this doesn't help anyone. I also don't understand why a small number of people can be venomous in reviews. Ok, so you didn't like the work for some reason - why get nasty? Just explain rationally why you don't think the paper is right. The point of refereeing is not to fire off insults under the shelter of anonymity - that's what blogs and internet forums are for.
Proposals can be more work. The big questions are usually (1) Does the PI clearly articulate the science or engineering question that is under investigation? (2) Is the plan well considered and likely to lead to good science? (3) How much of this is new and how much is completely incremental? (4) Did the PI(s) include everything that they were required to (e.g., description of prior work and, for NSF, discussion of outreach and education)? Grade inflation in proposal refereeing makes this process more painful as well. I am well aware that labeling a proposal as merely "good" is the kiss of death.
I have a tough time saying "no" to refereeing requests, and I need to get better at it. Prompt refereeing is important, and it's better to decline to review than to sit on a manuscript for two months. Still, good refereeing is definitely needed. I'm sure many readers have had the experience of a manuscript sitting with editors for a long time because it's tough to find qualified reviewers with the right expertise who also have enough time to review promptly. It is a shame that there isn't some intelligent mechanism for rewarding refereeing. It shows up as a line or so on your CV, even though it's arguably more valuable than serving on some university committees. Ahh well. Off to write some reviews.
Friday, February 29, 2008
More political advertising
Remember, you should be scared. Very scared.
Stop trying to frighten me. To be trite, that's just what the terrorists want.
Update: Here's someone who agrees with me. Feel the irony.
Tuesday, February 26, 2008
This week in cond-mat
A brief look at three papers from the past week that I thought looked particularly interesting.
arxiv:0802.3236 - Bleszynski-Jayich et al., Imaging a 1-electron InAs quantum dot in an InAs/InP nanowire
For a number of years now the Westervelt group at Harvard has been at the forefront of using scanned probe microscopy to examine the electronic states in semiconductor nanostructures. The basic idea is simple: use a conducting AFM tip as a local gate, and measure the transport through the nanodevice as a function of the tip position. If the gate is located somewhere irrelevant to the current paths through the device, you see no effect. By mapping the device response to the gate, you can map out many interesting features in the electronic states that contribute to transport. This is another example of applying this basic technique, this time to one of the InAs-based structures that Lars Samuelson has been developing extensively in recent years. Very nice. The data are rather psychedelic.
arxiv:0802.2350 - Geraci et al., Improved constraints on non-Newtonian forces at 10 microns
Experiments like these are "small-scale physics" at its best. The high energy theory community has been talking for a while about whether "large" extra dimensions (beyond the usual 3+1 of ordinary space-time) can show themselves through deviations in Newtonian gravity at the sub-mm scale. Measurements of G, the gravitational constant, at these distances are extremely challenging. Remember, electromagnetic forces can swamp gravity by 40 orders of magnitude, and there are all kinds of complications that can arise in such measurements. I always enjoy these experiments, where extreme skill and cleverness are used to go after big foundational questions without gigadollar particle accelerators.
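The "40 orders of magnitude" claim is worth checking for yourself - comparing the Coulomb and gravitational forces between two electrons, for instance (the ratio is distance-independent since both fall off as 1/r^2):

```python
# Back-of-envelope: Coulomb vs gravitational force between two electrons.
k_e = 8.9875517873681764e9  # Coulomb constant (N m^2 / C^2)
G   = 6.67430e-11           # gravitational constant (N m^2 / kg^2)
e   = 1.602176634e-19       # electron charge (C)
m_e = 9.1093837015e-31      # electron mass (kg)

ratio = (k_e * e**2) / (G * m_e**2)
print(f"{ratio:.1e}")  # ~4.2e42: more than 40 orders of magnitude
```

Hence the heroic electrostatic shielding and patch-potential control these experiments require.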
arxiv:0802.3462 - Min et al., Room-temperature superfluidity in graphene bilayers?
There's an old saying that the answer to any rhetorical question in the title of a paper is always "no". Here, however, Allan MacDonald and company suggest the opposite. It would appear that the special properties of graphene's unusual band structure may lead to superfluidity of bilayer excitons (a hole in one layer bound electrostatically to an electron in the neighboring layer to form an effective composite boson that is overall charge-neutral) at room temperature. There's been evidence for a while of low-T excitonic superfluidity in 2d electron/hole bilayers. This would be very neat, and it's always nice to see theorists making provocative predictions. (It would not lead to room temperature superconductivity, though! Since the excitons are neutral, their superfluid state doesn't carry a net current.)
Sunday, February 17, 2008
This week in the arxiv
One particularly nice paper from this past week:
arxiv:0802.0930 - Dolev et al., Towards identification of a non-Abelian state: observation of a quarter of electron charge and ν=5/2 quantum Hall state
I've written in the past a couple of times about how the low energy electronic excitations of some condensed matter systems can be particle-like (that is, they have a well-defined set of quantum numbers and interact relatively weakly with one another) but with properties quite different from those of free electrons. The fractional quantum Hall system is a perfect example of this. For cold electrons confined to a two-dimensional layer in the presence of a large magnetic field, the best way to think about the low energy excitations of the electronic system is not as free electrons. Rather, interactions between the electrons in the presence of the field lead to the formation of a new description (the so-called Laughlin liquid) when the ratio of electron density to magnetic flux quanta is certain rational fractions with odd denominators. The quasiparticles in those states have fractional charge (!) rather than the usual -e of an electron. One particularly exotic state happens in very very clean 2d electron systems when that ratio is 5/2. Even though this is an even-denominator state, and the usual expectation would be that the quasiparticles (called composite fermions) should be rather like free electrons, the quantum Hall state shows that something else is going on. The proposed explanation is that the composite fermions pair up to form a special condensate (not unlike in a superconductor), and the excitations of this paired state are predicted to have all sorts of weird properties. Swapping two such quasiparticles around each other is supposed to leave a topological imprint on the system, a bit like braiding the ends of ropes. There is a lot of interest in using such a system to do quantum computation. There's only one problem: so far no one has proven that the 5/2 state really has these exotic properties. 
This paper by the always-impressive group at the Weizmann goes part of the way there, demonstrating via shot noise that the excitations at ν=5/2 have charge e/4 (!), consistent with the theories of an exotic state. This is a major experimental achievement - historically the kind of surface processing required to do these shot-noise or more complex measurements usually degrades the charge mobility in the 2d layer enough to kill the 5/2 state altogether.
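The logic of the charge measurement, in its most simplified zero-temperature form: Poissonian shot noise has spectral density S_I = 2qI, so the slope of noise versus (backscattered) current reveals the carrier charge q directly. A toy sketch with hypothetical numbers, not data from the paper:

```python
e = 1.602176634e-19  # electron charge (C)

def carrier_charge(noise_S, current_I):
    """Infer the effective carrier charge from Poissonian shot noise,
    S_I = 2 q I (a simplified zero-temperature picture)."""
    return noise_S / (2 * current_I)

# Hypothetical: 1 nA of backscattered current whose noise spectral
# density is 2 * (e/4) * 1e-9 A^2/Hz would imply carriers of charge e/4.
S = 2 * (e / 4) * 1e-9
print(carrier_charge(S, 1e-9) / e)  # 0.25, i.e. charge e/4
```

The real analysis has to fold in finite temperature and the transmission of the quantum point contact, but this is the essential idea.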
Allocation of resources
When probabilities of some events become very low, it can be hard to calibrate your thinking and planning about them. The classic large-scale example is that of asteroid defense. The odds of an asteroid hitting the earth within our lifetimes are very low. On the other hand, the likelihood isn't zero, the negative consequences would be severe for millions if not billions of people, and we actually have the technical capability to do something about the problem with enough advanced warning. So, how much money should we as a species spend on asteroid defense? A bit closer to home, there are sometimes funding opportunities out there worth game-changing amounts of money, but getting the grant is something like a 0.5% chance, and the criteria are quite opaque. It's tough to get a good handle on how much time one should invest in the (relatively short) proposal....
Saturday, February 09, 2008
Where to publish
I've had two different conversations in the last couple of days about how people choose where to submit papers, and it's a decent topic for a blog post. I can only speak for myself, but I think I'm pretty typical. To frame the discussion, consider why we publish journal articles in the first place. We want the scientific community to know what we've been doing, so that our work can be built upon - if we've answered a question that many people want answered, those people should know. If we've developed a new technique that will be useful, or if we've learned something that changes the way we think about some (ideally important) system, the rest of the community should know. Of course, publications and citations are also one metric of performance. It's a marketplace of ideas out there, and if no one cites your papers, then that says that you may not be having a major influence in moving the field forward.
The desire to disseminate knowledge and get recognition both provide a motive to try to publish in the highest impact journals that are appropriate. On the other hand, not every publication-worthy result is necessarily earth-shaking in significance. I know that there are some people who apparently send every halfway-decent paper to Science and Nature first, because "why not?" I tend to be more conservative and self-assessing. Not everything is of interest to a broad readership. Similarly, there are physicists who send every result to PRL. Again, let's be honest - not every physics result is PRL-worthy. Furthermore, in the nano arena, sometimes the chemistry or engineering literature may really be more appropriate than Phys Rev, and that's fine. I do try to aim for the highest "impact factor" journal that seems topical and reasonable - that's just common sense.
An additional factor is the time-to-publication. If you're working in a competitive area, you may want to get a result out in the peer-reviewed literature fast, and the best way to do that may be to publish in something other than PRL. The arxiv mitigates this a bit, but not all publishers like electronic preprints.
Wednesday, February 06, 2008
Combined single-molecule electronics and optics
This'll be my last self-referential post for a while. Now that it's out online, I want to write a post about our latest result. As readers of this blog know, I am not a big fan of the hype that accompanies a lot of nano research. I cringe every time someone claims that a minor development is a breakthrough, and it drives me crazy when people who know better feel compelled to imply that self-reproducing nanobots are going to build spaceships out of single-crystal diamond in five years. That being said, I really do think that this result is a major advance, both in molecular-scale electronics and in ultrasensitive chemical sensing.
The two-sentence summary: we can do simultaneous electronic and optical measurements on single molecules (!) by using our electrodes as optical antennas. This opens up lots of science to be done as well as some very intriguing technological possibilities.
Over the last decade, a number of techniques have been developed to measure electronic conduction through single molecules. There are lots of basic physics and physical chemistry questions that still need to be answered in such systems (e.g., how does dissipation work at these scales? What happens when electronic correlations are strong and the system is driven out of equilibrium?). One long-standing problem, though, has been the lack of any independent (non-transport) way to confirm that conduction is taking place through a particular molecule of interest. Except for scanning tunneling microscopy (great for science, but impractical for some measurements and definitely not scalable for devices), there are no good imaging techniques to see the object (molecule of interest? contaminant? accidental nanoparticle?) through which the current is passing. The resulting approach to these devices has been essentially statistical, requiring the fabrication of large numbers of devices with many control experiments, etc.
Over the same period, as I discussed here in reference to an earlier paper from our group, surface-enhanced Raman spectroscopy (SERS) has been studied extensively. Raman spectroscopy is a very common physical chemistry technique to probe the vibrational spectrum of materials. Light comes in at some frequency, dumps some energy into the vibrational modes of the material (this is called Stokes scattering), and leaves with less energy. By measuring the energy shift between incoming and outgoing light, it's possible to pick out a material's characteristic vibrational modes - a kind of chemical fingerprint. In SERS, nanostructured metal surfaces act like little optical antennas when illuminated, creating so-called hotspots where the local electromagnetic intensity can be as much as a million times greater than the incident intensity, leading to greatly enhanced Raman emission. People have reported SERS capable of measuring single molecules, but demonstrating that conclusively is extremely difficult.
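The Stokes-shift bookkeeping is simple in wavenumbers: the outgoing photon's wavenumber is the incident one minus the vibrational mode energy. A minimal sketch (the 785 nm laser and 1000 cm^-1 mode are illustrative numbers of mine, not from our experiment):

```python
def stokes_wavelength_nm(lambda_in_nm, shift_cm1):
    """Wavelength of Stokes-scattered light: the outgoing photon has
    lost shift_cm1 wavenumbers of energy to a vibrational mode."""
    wn_in = 1e7 / lambda_in_nm   # incident wavenumber (cm^-1)
    wn_out = wn_in - shift_cm1   # Stokes-shifted wavenumber
    return 1e7 / wn_out

# A hypothetical 785 nm laser and a 1000 cm^-1 vibrational mode:
print(stokes_wavelength_nm(785.0, 1000.0))  # ~851.9 nm
```

The tens-of-nanometers separation between laser line and Stokes emission is what lets a notch filter reject the (vastly brighter) elastically scattered light.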
In our new paper, we've been able to kill two birds with one stone. We have been able to perform simultaneous electronic transport and Raman spectroscopy on individual molecules. The same metal electrodes used to push current through the molecules also function as a plasmonic antenna, giving enormous SERS enhancements. Conduction between the electrodes is known to be by tunneling, and tunneling depends so steeply on distance that the total volume through which the current is passing can only contain at most one or two molecules. (This steep distance dependence is the reason STMs work!) At room temperature (and in air), we see that the conduction from one electrode to the other bops around a bit as a function of time. This is due to molecular motion and the changing molecular environment, and isn't surprising. However, we can simultaneously measure the Raman signal from the region between the electrodes. We find that the time variation in the Raman emission correlates extremely well with the time variation in the interelectrode conduction. Since the conduction occurs via tunneling and probes about one molecular volume, the Raman emission must be from the same single molecule in question.
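How steep is "so steeply on distance"? For a simple rectangular barrier the current goes as exp(-2κd) with κ = sqrt(2mφ)/ħ, so for a typical metal work function the current drops by roughly an order of magnitude per angstrom. A quick sketch of that standard estimate (a textbook one-dimensional model, not a calculation from our paper):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant (J s)
m_e  = 9.1093837015e-31  # electron mass (kg)
eV   = 1.602176634e-19   # electron volt (J)

def attenuation_per_angstrom(barrier_eV):
    """Factor by which tunneling current drops per extra angstrom of gap,
    I ~ exp(-2*kappa*d), for a simple rectangular barrier."""
    kappa = math.sqrt(2 * m_e * barrier_eV * eV) / hbar  # decay constant (1/m)
    return math.exp(-2 * kappa * 1e-10)

# A typical ~5 eV metal work-function barrier:
print(attenuation_per_angstrom(5.0))  # ~0.1: an order of magnitude per angstrom
```

That exponential sensitivity is exactly why the tunneling current samples only about a single molecular volume between the electrodes.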
This demonstrates that we can mass-fabricate single-molecule sensitive SERS hotspots in high yield in pre-defined locations. At the same time, this multimodal single-molecule sensing shows via the Raman signature that we are pushing current through the specific molecule of interest in a given device.
We've got lots of ideas on where to go with this - it's very exciting.
Sunday, February 03, 2008
political advertising: good and bad
I don't want to start a political flamewar, but I find the contrast between these two political ads very striking:
Obama's Yes, we can - feel-good, inspirational video with lots of stars, excerpts of Obama's NH concession (!) speech.
Clinton's Freefall - be scared! Only we can save you from certain doom!
One PRL/arXiv paper
I'll write more in the next day or two about what I think is a very exciting new result of ours. For now, I wanted to write a little about this paper:
arXiv:0801.4021, Frolov et al., Electrical generation of pure spin currents in a two-dimensional electron gas
For quite some time there has been a strong interest in using the spin degree of freedom of electrons for information processing. In some sense this is old news (see this past year's Nobel in physics), but the real trick is to see whether one can generate currents of only spin, rather than pushing whole, spin-polarized electrons through a circuit. In principle pure spin currents can be moved without dissipation, so if they can be generated and detected in a "nice" way, it may be possible to reduce the power required for certain computations. Of course, unlike charge, spin polarization is not conserved - spins generally prefer to relax back to an unpolarized state in the absence of big magnetic fields. This paper reports a way of generating spin currents that is quite clever - use quantum point contacts + spin-orbit scattering to generate an excess spin population in a region of 2d electron gas, and then the excess spin population diffuses away (without a net flow of charge). This paper also demonstrates that reducing the dimensionality of the system leads to an enhanced spin lifetime. It's a neat result and a very pretty experiment.