This'll be my last talk description for a while, I promise. Colloquium today was Gerry Gabrielse, talking about his group's latest measurements of the g factor of the electron (really [g/2 - 1]) and the accompanying inferred value for the fine structure constant. Gabrielse did open his talk with most of this clip, since it's about their work. On a random note, I once TAed the first author of that first paper when he was an undergrad.
Precision measurement physics is extremely impressive in its own way. They measure g to parts in 10^13, and α to parts in 10^10, by doing incredibly precise spectroscopy on a single trapped electron in a magnetic field. To really do this right, they have to get rid of all the relevant blackbody photons in the microwave range, meaning that they have to cool their cavity down to 80 mK. They also need to account for cavity QED effects - again, it's a restricted density of states argument. They find that the lifetime for spontaneous emission of a microwave photon, from the first excited state to the ground state of their trapped electron, is 260 times what it would be in free space. They achieve this lifetime enhancement by operating their cavity such that there just aren't any cavity modes available at the right energy for the would-be photon to occupy. A tour de force piece of work. I'm pretty sure that precision measurement like this would drive me bonkers.
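To see why 80 mK matters, here's a quick back-of-the-envelope sketch of the thermal photon occupation of a microwave mode. The ~150 GHz frequency is my own illustrative assumption for the relevant cyclotron-scale microwave transition, not a number from their paper:

```python
import math

# Mean thermal photon number in a mode at frequency f and temperature T,
# from the Bose-Einstein (Planck) occupation n_bar = 1/(exp(hf/kT) - 1).
H = 6.626e-34   # Planck constant, J*s
KB = 1.381e-23  # Boltzmann constant, J/K

def mean_photon_number(f_hz, t_kelvin):
    """Thermal occupation of a single mode: 1 / (exp(hf/kT) - 1)."""
    x = H * f_hz / (KB * t_kelvin)
    return 1.0 / math.expm1(x)

f = 150e9  # Hz; illustrative microwave frequency, an assumption

print(f"n(4.2 K)  = {mean_photon_number(f, 4.2):.3f}")    # ~0.2 photons per mode
print(f"n(80 mK) = {mean_photon_number(f, 0.080):.2e}")   # essentially zero
```

At 4.2 K there's still a fifth of a photon per mode on average; at 80 mK the occupation is tens of orders of magnitude smaller, so the cavity is effectively dark.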
Wednesday, March 28, 2007
Tuesday, March 27, 2007
Frank Wilczek talk, part two
Frank Wilczek gave his second talk at Rice, "The lightness of being", about the origins of mass and the "feebleness" of gravity. He demonstrated the relative weakness of gravity very effectively by jumping up and down, showing that with a tiny amount of chemical energy he could (temporarily) overcome the gravitational attraction of the entire planet. I'll admit that I was a bit disappointed in this talk, in the sense that there was more overlap with yesterday's public lecture than I was expecting. I did come away having learned a new way to think about the origin of the mass of the nucleons, though. Wilczek's most famous contribution to physics is the asymptotic freedom of quarks, which can be summarized like this: unlike the other forces, which weaken with interparticle distance, the gluon-mediated color charge interaction between quarks grows as the separation between quarks is increased. One result of this is that there are no free quarks - if you try to separate a lone quark, the energies involved in the strong interaction become large enough to favor the creation of quark-antiquark pairs. So try to build a nucleon out of three quarks. The quarks have to be pretty localized relative to each other, so that from far away there is no unscreened color charge. Localizing quantum mechanical objects leads to a particle-in-a-box kind of kinetic energy, though; you can think of this as coming from the uncertainty principle. It's this internal kinetic energy that is the source of about 95% of the mass of the proton, via m = E/c^2. Voila - mass comes about due to quantum confinement. "Nano" concepts at work on the "femto" scale.
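The particle-in-a-box argument is easy to check at the order-of-magnitude level. This is just an uncertainty-principle estimate, not a real QCD calculation, and the ~1 fm box size is my own rough choice:

```python
# Confinement energy of an ultrarelativistic particle in a box of size L:
# E ~ hbar*c / L. With three quarks in a ~1 fm nucleon, the total internal
# kinetic energy lands in the right ballpark for the proton's rest energy.
HBAR_C_MEV_FM = 197.3  # hbar*c in MeV*fm

def confinement_energy_mev(size_fm):
    """Uncertainty-principle kinetic energy of one confined quark, in MeV."""
    return HBAR_C_MEV_FM / size_fm

L = 1.0  # fm; rough nucleon size, an assumption
total = 3 * confinement_energy_mev(L)  # three quarks
print(f"~{total:.0f} MeV of confinement energy vs. 938 MeV proton rest energy")
```

Roughly 600 MeV from confinement alone, against 938 MeV for the proton: the right order of magnitude, which is all this sketch can claim.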
Another interesting point that Wilczek made: the near-perfect conservation of mass identified by Lavoisier in chemical reactions is a great example of an emergent law. Strictly speaking, mass isn't conserved - energy is. That's very clear at particle accelerators, where a colliding e-e+ pair can produce particles with 30,000 times the mass of the two electrons. The reason that chemistry doesn't see this effect is very much in the spirit of condensed matter: the excitation spectrum of the bound quarks is very strongly gapped. There are no available excited states of the coupled quark system at the few-eV energies relevant to chemical reactions. This basic idea, that processes can be suppressed by a lack of available states, is also prevalent in much of nanoscale physics.
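A one-line estimate shows just how well hidden the mass change is in chemistry. The numbers below are my own illustrative choices: a ~5 eV bond-energy scale against the rest energy of a molecule of mass ~44 u:

```python
# Fractional mass change in a chemical reaction: dm/m = Delta_E / (m c^2).
AMU_EV = 931.494e6  # rest energy of 1 atomic mass unit, in eV

reaction_energy_ev = 5.0   # typical chemical-bond energy scale, assumed
molecule_mass_u = 44.0     # e.g. roughly a CO2-sized molecule, assumed

rest_energy_ev = molecule_mass_u * AMU_EV
fractional_mass_change = reaction_energy_ev / rest_energy_ev
print(f"dm/m ~ {fractional_mass_change:.1e}")
```

Parts in 10^10 - far below what any chemical balance can see, which is why Lavoisier's "law" held up so well.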
I asked him about the proton "spin problem", as discussed recently here. At issue is where the intrinsic angular momentum of the proton comes from. Wilczek pointed out that there actually isn't any discrepancy with theory; lattice QCD does give spin-1/2 as the final total. What rubs people the wrong way is that the calculations run counter to most intuition: rather than that angular momentum coming from the spins of the quarks, much of it appears to come from the gluon field. There you have it.
Finally, in the Q&A period, someone asked Wilczek about the possibility of extra dimensions - from context, I assume "large" ones. Wilczek really doesn't like this idea; he favors supersymmetry-driven Planck-scale grand unification. He said that it's hard enough accomplishing that and not running into problems like proton decay, and that pushing unification to lower energies (as would happen in the large extra dimension case) would cause all kinds of difficulties like that. I hadn't heard this said before, and would be curious to know more about it. Presumably the proponents of these extra dimension ideas have thought about this.
Monday, March 26, 2007
Frank Wilczek talk, part one
Frank Wilczek is visiting Rice for two days this week, and is giving two talks. I was fortunate enough to have lunch with him. He's amazingly smart, and extremely versatile. You really don't run into too many people who are conversant at the highest levels of high energy theory (hey, the guy did win a Nobel for asymptotic freedom) and also at the highest levels of condensed matter (he's very interested in non-Abelian statistics and topological quantum numbers in condensed matter systems). His first talk, a public (named) lecture entitled "The Universe is a strange place," was this afternoon. As you might expect from someone so adept at writing physics for a general audience, Wilczek gave a very clear presentation that surveyed modern high energy physics. He discussed relevant ideas from QCD - that most of the mass of nucleons comes from the energy balled up in their constituent quarks and gluons, rather than from the rest mass of the quarks. He also strongly emphasized the idea that quarks and other fundamental particles are simply organized, long-lived excitations of underlying quantum fields that are always fluctuating on short time scales (h/mc^2) and length scales (10^-13 cm for nucleons). I hadn't appreciated before that after fixing only three masses (e.g., the K, pi, and b-bbar mesons), lattice QCD nails all the other hadron masses. He talked briefly about dark matter and dark energy, and explained his reasoning for liking supersymmetry. In his words, either the beautiful ideas of supersymmetry are right, leading to unification of the running strong, electroweak, and gravitational couplings, with testable consequences in the form of superpartners detectable at the LHC; or Nature is cruelly teasing us.
At the very end an audience member asked his opinion on string theory. Wilczek said that string theory was not, properly, a theory - it was not a well-defined set of equations with real predictive solutions (as in QCD). While recognizing the value of aesthetics and symmetry, he clearly understands that the real test of theory is experiment, not intrinsic beauty. (Cue Lubos denouncing Wilczek in 5, 4, 3, ....). He went on to say that it was a collection of very interesting ideas, that it may one day get to an actually predictive form, and that there were only a small number of approaches out there for treating quantum gravity.
Monday, March 19, 2007
Long-term research, companies, and universities
I've posted about this topic before, but Gordon Watts' recent post on the subject of long-term research makes me want to throw this out there again. That, and the disturbing news I heard at the APS March Meeting about a round of layoffs of some of the few remaining physical sciences researchers at Bell Labs. It's terribly depressing: since my time in high school, long-term industrial R&D has been gutted in this country (and in most of the world). "Long-term" now means two years. Companies are under so much pressure to have year-over-year quarterly revenue increases that they blanch at the idea of spending money on something risky that may not lead to a big revenue stream quickly. Maybe that's always been true to some extent, and places like Bell Labs and IBM Research (and RCA and GE Research and GM and Ford Scientific and Westinghouse Research) were all effectively accidental monopolies or near-monopolies when they had major research labs. It's demonstrably much worse now.
More distressing to me is the tacit assumption, mentioned by Gordon, that university research will somehow pick up the slack. That is, federal dollars are more appropriate for this kind of basic work, and companies can always fund university labs to do work for them, too. Anyone who knows how university research actually works can tell you many reasons why this is a bad idea. Apart from low-level practical considerations (publish vs. patent? foreign vs. domestic students? export controls?), the big killer here is simply one of resources. Back when I was at Bell, if they wanted to, they could have put a dozen condensed matter PhDs to work on a problem, along with technical support staff. Given how universities work, with teaching commitments, administrative tasks, student timescales, etc., no university can achieve that kind of critical mass.
Sunday, March 18, 2007
This week in cond-mat
A few highlights from this week, though brief. I've actually been working on my book rather than writing as much. Coming soon: more about faculty searches (now that I don't have to worry that my comments could give an unfair advantage to any candidate, since we're past the interviewing stage).
cond-mat/0703230 - Karabacak et al., High frequency nanofluidics: an experimental study using nanomechanical resonators
With my mech-E background, I've always liked fluid dynamics and lamented that it gets left out of the typical physics curriculum. This is a nice use of nanomechanical resonators as a means to study fluid motion via the resulting damping of the resonator. Of particular interest is the transition between Newtonian flow (shear stress on a wall given by the product of a viscosity and the velocity gradient at the wall) and non-Newtonian flow (shear stress depending nonlinearly on shear rate; for example, cornstarch in water gets stiff at high shear rates, while mayonnaise gets softer at high shear rates - both are non-Newtonian fluids).
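The Newtonian vs. non-Newtonian distinction fits in a few lines. Here's a toy sketch using the standard power-law fluid model (not anything from the paper itself); all coefficients are made-up illustrative numbers:

```python
# Newtonian fluid: tau = mu * shear_rate (constant apparent viscosity).
# Power-law fluid: tau = K * shear_rate**n, where n > 1 is shear-thickening
# (cornstarch-like) and n < 1 is shear-thinning (mayonnaise-like).

def tau_newtonian(mu, shear_rate):
    """Wall shear stress for a Newtonian fluid."""
    return mu * shear_rate

def tau_power_law(k, n, shear_rate):
    """Wall shear stress for a power-law (Ostwald-de Waele) fluid."""
    return k * shear_rate ** n

# Apparent viscosity tau/shear_rate: constant for Newtonian flow,
# rising with shear rate for n > 1, falling for n < 1.
for g in (1.0, 10.0, 100.0):
    thick = tau_power_law(1.0, 1.5, g) / g  # shear-thickening
    thin = tau_power_law(1.0, 0.5, g) / g   # shear-thinning
    print(f"rate={g:6.1f}  thickening visc={thick:6.2f}  thinning visc={thin:6.3f}")
```

The apparent viscosity of the n = 1.5 fluid grows with shear rate (cornstarch stiffens) while the n = 0.5 fluid's falls (mayonnaise softens), which is exactly the qualitative behavior in the parenthetical above.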
cond-mat/0703374 - Katsnelson and Novoselov, Graphene: new bridge between condensed matter physics and quantum electrodynamics
This is a good, pedagogical review of a lot of the interesting physics seen in electronic transport in graphene. Because of its band structure, electrons and holes in graphene act rather like ultrarelativistic particles (that is, their energy is approximately linearly proportional to their (crystal) momentum, like photons). The discussion in this paper of the Klein paradox is particularly nice; I hadn't read such a clear summary of it before.
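For a feel of how different the linear dispersion is from an ordinary band, here's a quick comparison. The Fermi velocity ~1e6 m/s is the commonly quoted graphene value; the particular k is my own illustrative choice:

```python
# Graphene's massless-Dirac dispersion E = hbar * v_F * |k|, compared with
# a conventional parabolic band E = (hbar k)^2 / (2 m) at the same k.
HBAR = 1.0546e-34  # J*s
ME = 9.109e-31     # free electron mass, kg
EV = 1.602e-19     # J per eV
V_F = 1.0e6        # m/s; commonly quoted graphene Fermi velocity

def e_linear_ev(k):
    """Dirac-like (graphene) energy at wavevector k, in eV."""
    return HBAR * V_F * k / EV

def e_parabolic_ev(k):
    """Free-electron-like parabolic energy at wavevector k, in eV."""
    return (HBAR * k) ** 2 / (2 * ME) / EV

k = 1e8  # 1/m, illustrative
print(f"linear:    {e_linear_ev(k) * 1e3:.1f} meV")
print(f"parabolic: {e_parabolic_ev(k) * 1e3:.2f} meV")
```

At this k the linear band sits at tens of meV while the parabolic one is well under a meV, and (unlike the parabolic case) the Dirac velocity is independent of energy - the photon-like behavior mentioned above.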
cond-mat/0703247 - Malyshev, DNA double helices for single molecule electronics
This has already come out in PRL. While I'm sure the calculations are reasonable and robust, this is a classic example of a theory proposal that is much easier to talk about than to ever actually try. My main problem here is that actually preparing electronic devices from DNA and ending up with a controlled system is incredibly hard. There are compensating ions all over the place, and DNA in vacuum or on a surface is not nearly the same thing - including its conformations - as DNA in a biological environment. Ahh well.
cond-mat/0703419 - Zhang et al., Noise correlations in a Coulomb blockaded quantum dot
Yet another pretty piece of experimental work from Harvard and Tokyo. Using a combination of tank circuits (RLC resonators), cold voltage amplifiers, and a cross-correlation system, these folks are able to measure shot noise in a Coulomb-blockaded quantum dot. They can use a gate to tune the dot in and out of blockade, and can watch the noise vary from sub- to super-Poissonian (that is, are the electrons behaving independently (Poisson statistics for tunneling), avoiding each other (sub-Poissonian), or bunching (super-Poissonian)?). It all looks so easy, though I know experiments like this are very challenging.
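The sub/super-Poissonian language is usually packaged in the Fano factor. Here's a minimal sketch of that bookkeeping; the 1 nA current is just an illustrative value, not from the paper:

```python
# Shot-noise classification via the Fano factor F = S_I / (2 e I):
# F = 1 for uncorrelated (Poissonian) tunneling, F < 1 for anti-bunched
# (sub-Poissonian) electrons, F > 1 for bunched (super-Poissonian) ones.
E_CHARGE = 1.602e-19  # elementary charge, C

def shot_noise(current_a, fano):
    """Current noise spectral density S_I = 2 e I F, in A^2/Hz."""
    return 2 * E_CHARGE * current_a * fano

I = 1e-9  # 1 nA, illustrative
for fano, label in [(0.5, "sub-Poissonian"), (1.0, "Poissonian"),
                    (2.0, "super-Poissonian")]:
    print(f"F={fano}: S_I = {shot_noise(I, fano):.2e} A^2/Hz  ({label})")
```

Even at full shot noise, S_I for a nanoamp is ~3e-28 A^2/Hz - a hint of why the cold amplifiers and cross-correlation tricks are needed.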
Tuesday, March 13, 2007
Quote verification?
Last week at the APS, Lars Samuelson closed his nano-related talk with the following quote, reportedly from Albert Einstein: "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction." Can anyone tell me the primary source of this quote, and whether it's legitimate? I've googled a bit, and all I've found are lists of quotes that appear to have circulated online since the mid 1990s, with no primary source attribution. Since a number of fake quotes propagate online, I want to check this one out. Thanks....
Friday, March 09, 2007
MM2007 - final thoughts
Well, I'm back home from APS. I'll write a bit more about the science over the weekend, but for now, here are some last thoughts on the meeting.
Three things that are frustrating about conferences:
- Speakers that run way over their time. There was an invited talk this morning that was physically very interesting, but the speaker must've run 10 minutes over. The timer goes off - no sign of conclusions. The session chair stands up. No slowing down. The session chair whispers in the ear of the speaker. "I'm concluding." Followed by three more slides.
- Senior people that get your name wrong. Repeatedly. In front of a full room.
- Parallel sessions on nearly identical topics at opposite ends of the convention center.
And some things that are great about conferences:
- Senior people that do cite you, and get your name right.
- Competitors that do similar measurements that complement your work and are nice about it, and good agreement between the independent experiments. (Hurray! Science actually works!)
- Former students doing well in their careers.
- Good audiences that ask smart questions.
Thursday, March 08, 2007
More MM07
More good physics at the APS meeting, though I'm rapidly approaching the point of mental exhaustion.
There was an invited symposium on silicon nanoelectronics on Wednesday that was very nice - I only saw the first three talks, but they were all good. Steve Lyon from Princeton spoke about his ESR measurements on small numbers of electrons in Si/SiGe heterostructures and dots. Mark Eriksson from Wisconsin gave a good overview of their recent work on trying to get gate-defined quantum dots in Si/SiGe to act as nicely as those in GaAs/AlGaAs. A main point of physics in both of those talks was the effect of valley degeneracy on spin physics in those structures. In bulk Si the bottom of the conduction band is 6-fold degenerate and not located at k=0. In quantum wells or heterojunctions, the degeneracy is partially lifted due to the broken spatial symmetry. Mark and Steve have both been worrying about the size of the energy splitting between the lowest valley and the next valley, and Mark's work looks like it answers the question in gate-defined dots. The third talk was by my old friend Sven Rogge, now at Delft. He has been working on measurements of states confined to individual dopant atoms in ultrasmall Si transistors. It's extremely interesting to look at how the hydrogen-like donor wavefunctions hybridize with Si well states when the gate field pulls the electron from the donor toward the well.
Today I've seen two very smooth talks in nanostructures sessions. In the first, Amir Yacoby, late of the Weizmann Institute and now at Harvard, showed new work on transport through "double dot" structures made from two metal nanoparticles linked by a small organic molecule. At low temperatures and voltages, the physics is dominated by Coulomb charging effects of the two nanoparticles. They see all kinds of rich Coulomb blockade behavior that can be modeled basically perfectly with only a few free parameters (the capacitances and resistances of the relevant junctions). The second was a talk by Lars Samuelson of Lund. He's one of the big movers and shakers in growing semiconductor nanowires. He gave a full overview of their work on this, which has included an obscene number of high-impact publications. People with that kind of productivity are simultaneously impressive and depressing.
Incoherent Ponderer is absolutely right about the graphene thing. I've heard some nanotube folks griping that graphene is the new hotness.
Tuesday, March 06, 2007
The accidental session chair
I can already tell that I have one big thing in common with my thesis advisor besides our first name: I have a tough time saying 'no' to favors when asked nicely. As a result, I became a session chair this morning when the designated chair didn't show up. Ahh well.
Some neat science that I saw today:
- Buckley Prize talk by Jim Eisenstein, covering his work on liquid crystalline phenomena in high Landau levels of 2d electron systems, and his work on exciton superfluidity in 2d electron bilayers. I want to get him to come to Rice for a Keck seminar or physics colloquium this fall - the physics is really pretty.
- STM experiments by Mike Crommie's group at Berkeley looking at optically induced isomerization switching of azobenzene molecules. Now I know why our own efforts in this direction met with some difficulties. The switching gets quenched in regular azobenzene when the molecule is physisorbed on Au(111). Functionalizing the molecules to weaken their coupling to the metal surface leads to some switching, though even then the cross-section seems to be very small - long exposure to lots of photons switches maybe 5% of the molecules.
Monday, March 05, 2007
Thoughts from the APS March Meeting
I would live-blog the APS meeting, except that the wireless connection at the Denver convention center is completely dysfunctional. I saw some nice talks today after arriving here, but I'll save science until tomorrow. For now, a couple of remarks:
- $2.97 for a cup of coffee? Seriously?
- What is the deal with the recorded laughter that plays on the escalator up to Exhibition Hall F? Is it supposed to put me in a good mood? It doesn't - it creeps me out. Escalators aren't supposed to be jolly. They're supposed to be escalators.
- There are now a large number of vendors selling cryostats that get down to 100 mK, and at least two cryogen-free models. Maybe Oxford Instruments will be forced to adapt now that they have real competition.
- Overheard in Bush Airport on the way here: "I'm a dentist, and wait 'til you hear about my alternate use for KY jelly!"
Thursday, March 01, 2007
Jim Carrey and Conan O'Brien: quantum mechanics
This video that Kristen Kulinowski sent me is great. metadatta gets major street cred for figuring out which paper this refers to.
Wednesday, February 28, 2007
Great talk today
Alain Aspect gave the departmental colloquium today, and his talk was fantastic. He let the audience choose whether to hear about his more recent work on the Hanbury Brown-Twiss experiment with cold atoms, or his very famous work on Bell's Inequalities. By show of hands the packed audience picked the latter, and Aspect gave an extremely clear talk about why local hidden variable theories like the kind desired by Einstein just aren't compatible with quantum mechanics. I know that the talk has been fine-tuned and updated over the years, so the fact that it's polished shouldn't be surprising. Still, it was an impressively well structured colloquium: a good, generally accessible set-up and statement of the problem, a discussion of the experiment and what it means, and conclusions updated to include modern experiments about entanglement and quantum cryptography.
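The incompatibility Aspect tests is easy to see numerically in the CHSH form of Bell's inequality. Here's a toy check of the numbers behind the experiments (standard textbook angles, nothing specific to his talk):

```python
import math

# CHSH: any local-hidden-variable theory obeys |S| <= 2. Quantum mechanics
# for the spin singlet, whose correlation is E(a, b) = -cos(a - b), reaches
# |S| = 2*sqrt(2) (the Tsirelson bound) at the analyzer angles below.

def E(a, b):
    """Singlet-state correlation for analyzer angles a, b (radians)."""
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2          # Alice's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"|S| = {abs(S):.4f}  (local bound: 2, quantum bound: {2 * math.sqrt(2):.4f})")
```

|S| comes out to 2.8284, cleanly violating the local bound of 2 - the same violation the experiments see, just without the loophole-closing heroics.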
Sunday, February 25, 2007
Physics, smarts, and perspective
There's a great post on Cosmic Variance about the "cult of genius" in physics - the myth in our discipline that if you're not supermegabrilliant (Feynman/Einstein/Hawking, as Julianne puts it), you're basically a pedestrian loser. Hand in hand with this is the still-persistent attitude out there that if you get a physics PhD but don't end up a full professor at Harvard, you're a plodder. Read the post and the comments. It's great stuff. It also makes me remember my first real intellectual wake-up call, realizing that I was surrounded by really smart folks and would have to get used to it. First semester, freshman year, taking this class from this fellow, and getting 6 out of 30 on the first exam. The mean was a 9. One real advantage to getting an undergrad degree at a top-tier place is the character-building early realization that there are many people smarter than you. Better to come to that conclusion at 18 than at 22 or 25....
This week in cond-mat
One theory paper and two experimental papers this time.
cond-mat/0702446 - Poggio et al., Feedback cooling of a cantilever's fundamental mode below 5 mK
Suppose you had a mechanical resonator (mass on a spring). At moderate temperatures you know from the good, old equipartition theorem that the average kinetic energy and average potential energy in the resonator would each equal 1/2 k_{B}T. (Note to self: get LaTeX working in blogger....) At low enough temperatures (k_{B}T < \hbar \omega), you should instead think about the number of vibrational quanta in the resonator. Suppose you could actively damp the resonator - if it's moving toward you, you push back to slow it down. It is possible to effectively cool the resonator this way (though in a Maxwell's demon sense, there's no such thing as a free lunch). How far you can go depends on the noise in your measurement system used for the feedback. In this paper by Dan Rugar's group, they demonstrate that they can cool a Si cantilever from a base temperature of around 4.2 K all the way down to 5 mK, limited by the noise in their feedback system. This is impressive, and of obvious interest to those who want to examine the fundamental quantum properties of mechanical systems (including detector back-action).
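To put "5 mK" in quantum terms, one can count the vibrational quanta left in the mode. The ~5 kHz mode frequency below is my own assumption for illustration (low-frequency cantilevers are in this ballpark), not the paper's number:

```python
import math

# Thermal phonon occupation of a mechanical mode:
# n_bar = 1/(exp(hbar*omega/kT) - 1), which for kT >> hbar*omega is
# approximately kT / (hbar*omega).
HBAR = 1.0546e-34  # J*s
KB = 1.381e-23     # J/K

def n_bar(f_hz, t_kelvin):
    """Mean number of vibrational quanta in a mode of frequency f at temperature T."""
    x = HBAR * 2 * math.pi * f_hz / (KB * t_kelvin)
    return 1.0 / math.expm1(x)

f = 5e3  # Hz; assumed cantilever fundamental frequency
print(f"n(4.2 K) ~ {n_bar(f, 4.2):.2e} quanta")
print(f"n(5 mK)  ~ {n_bar(f, 0.005):.2e} quanta")
```

Even at 5 mK a kHz-scale mode still holds tens of thousands of quanta - a reminder of how far such experiments remain from the quantum ground state, and why the back-action question is so interesting.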
cond-mat/0702472 - Kalb et al., Organic small-molecule field-effect transistors with Cytop(tm) gate dielectric: eliminating gate bias stress effects
A persistent problem with organic FETs is that their performance degrades if the gate is biased for long periods. There can be many reasons for this, but one major issue involves the interaction between the semiconductor and the gate dielectric. It is widely believed that in many OFETs, charge leaking through the gate dielectric introduces defects and trap states in the organic semiconductor right at the channel interface. Here, Batlogg's group at ETH, with collaborators, has found a fluoropolymer dielectric that doesn't seem to have these problems, and that has impressive breakdown strength as well. I'll have to look into getting some.
cond-mat/0702505 - Khodas et al., One-dimensional Fermi-Luttinger liquid
Fermi liquid theory is the standard model of electrons in metals (as well as of normal-state liquid 3He). The upshot of FLT is that the quasiparticles of the interacting electron gas look very much like weakly interacting electrons, and have well defined quantum numbers (spin 1/2, charge -e, k-vectors and band indices). In 1d, though, FLT doesn't do well. Luttinger, by assuming that the dispersion E(k) of the carriers around the Fermi points is linear, came up with an exact solution to the 1d problem, now called the Luttinger liquid (LL). The LL has some very interesting properties, including separate spin and charge excitations. In this paper, Khodas, Pustilnik, Kamenev, and Glazman consider what happens when the dispersion at the Fermi points is more realistic: linear with a little bit of quadratic correction. This breaks particle-hole symmetry around the Fermi points, and has some profound effects on the structure of the density of states. This is a long paper, and while I think I get the main point, I haven't had a chance to look at it thoroughly. It seems important, though, since the slight nonlinear correction considered here seems very physically reasonable for many systems.
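The particle-hole symmetry breaking they exploit is simple to see. Here's a toy sketch (all dimensionless made-up numbers, not the paper's calculation) of a dispersion that is linear plus a small quadratic correction near a Fermi point:

```python
# Near a Fermi point: eps(k) = v*(k - k_F) + (k - k_F)**2 / (2*m).
# With a strictly linear dispersion (the Luttinger-liquid limit, m -> inf),
# adding an electron at k_F + q costs the same energy as removing one at
# k_F - q. The curvature term splits the two: that's the broken
# particle-hole symmetry.

def excitation_energies(v, m, q):
    """Energy cost of a particle at k_F + q and a hole at k_F - q."""
    particle = v * q + q ** 2 / (2 * m)
    hole = v * q - q ** 2 / (2 * m)
    return particle, hole

v, m = 1.0, 5.0  # toy velocity and effective mass
for q in (0.1, 0.5, 1.0):
    p, h = excitation_energies(v, m, q)
    print(f"q={q}: particle={p:.3f}, hole={h:.3f}, asymmetry={p - h:.3f}")
```

The asymmetry q^2/m vanishes as q goes to 0 but grows with q, so the corrections to pure Luttinger-liquid behavior show up away from the Fermi points - consistent with why the authors' more realistic dispersion modifies the density-of-states structure.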
cond-mat/0702446 - Poggio et al., Feedback cooling of a cantilever's fundamental mode below 5 mK
Suppose you had a mechanical resonator (mass on a spring). At moderate temperatures you know from the good, old equipartition theorem that the average kinetic energy and average potential energy in the resonator would each equal 1/2 k_{B}T. (Note to self: get LaTeX working in blogger....) At low enough temperatures (k_{B}T < \hbar \omega), you should instead think about the number of vibrational quanta in the resonator. Suppose you could actively damp the resonator - if it's moving toward you, you push back to slow it down. It is possible to effectively cool the resonator this way (though in a Maxwell's demon sense, there's no such thing as a free lunch). How far you can go depends on the noise in your measurement system used for the feedback. In this paper by Dan Rugar's group, they demonstrate that they can cool a Si cantilever from a base temperature of around 4.2 K all the way down to 5 mK, limited by the noise in their feedback system. This is impressive, and of obvious interest to those who want to examine the fundamental quantum properties of mechanical systems (including detector back-action).
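For a rough feel for the numbers, here's a little Python sketch of the two regimes mentioned above: the equipartition estimate of the thermal RMS displacement, and the Bose-Einstein phonon occupancy once k_{B}T is comparable to \hbar\omega. The cantilever frequency and spring constant below are made up for illustration, not the actual device parameters:

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J s

def thermal_rms_displacement(k_spring, T):
    """Equipartition: (1/2) k <x^2> = (1/2) k_B T."""
    return math.sqrt(k_B * T / k_spring)

def mean_phonon_number(f0, T):
    """Bose-Einstein occupancy of a mode at frequency f0."""
    x = hbar * 2 * math.pi * f0 / (k_B * T)
    return 1.0 / math.expm1(x)

# Illustrative cantilever numbers only (not from the paper):
f0 = 5e3        # resonance frequency, Hz
k_spring = 1e-4 # spring constant, N/m

for T in (4.2, 5e-3):
    x_rms = thermal_rms_displacement(k_spring, T)
    n = mean_phonon_number(f0, T)
    print(f"T = {T} K: x_rms = {x_rms:.2e} m, <n> = {n:.3g}")
```

Even at 5 mK a kHz-scale mode still holds many thousands of phonons, which is why getting mechanical systems truly into the quantum regime is so hard.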
Saturday, February 17, 2007
This week in cond-mat
Several papers caught my eye this week; I'll be brief, particularly since I haven't had time to read them in detail. Now that our paper is in and our search is nearing the end, I'll have more time soon. Maybe I'll even get time to work on my book. Anyway....
cond-mat/0702246 - Capelle et al., Energy gaps and interaction blockade in confined quantum systems
The authors consider the general problem of interacting quantum particles confined in a harmonic potential. This could apply to electrons in a small quantum dot, or cold atoms in a magneto-optic trap. They then come up with expressions for the addition energies (how much energy is needed to add one more particle to the confined, interacting system) based on single-particle properties plus the interactions. They predict phenomena analogous to Coulomb blockade for other interacting systems, including some kind of Van der Waals blockade for trapped atoms.
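The addition-energy idea is easy to make concrete in a constant-interaction sketch: total energy equals filled single-particle levels plus a pairwise charging-like term, and the "blockade" gap is the discrete second difference of that energy. The level spacing, interaction strength, and degeneracy below are all hypothetical:

```python
# Constant-interaction sketch of addition energies in a 1d harmonic trap.
# hbar_omega and U are hypothetical illustrative values.
hbar_omega = 1.0   # level spacing of the confining potential
U = 0.25           # interaction energy per pair of particles

def total_energy(N, g=2):
    """Fill levels eps_i = (i + 1/2) hbar_omega, each g-fold (spin) degenerate."""
    eps = 0.0
    for n in range(N):
        level = n // g
        eps += (level + 0.5) * hbar_omega
    return eps + U * N * (N - 1) / 2

def addition_energy(N):
    """E_add(N) = E(N+1) - 2 E(N) + E(N-1): the blockade gap at occupancy N."""
    return total_energy(N + 1) - 2 * total_energy(N) + total_energy(N - 1)

for N in range(1, 6):
    print(N, addition_energy(N))
```

The pattern that comes out (just U within a shell, U plus the level spacing at a shell closing) is the even-odd structure familiar from Coulomb blockade in quantum dots, and the paper's point is that the same bookkeeping works for other interactions.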
cond-mat/0702259 - Kornyushin, An introduction to the polaron and bipolaron theoretical concepts
This looks like a nice pedagogical derivation of polarons and bipolarons. Should be good for students.
cond-mat/0702332 - Wu et al., Shot noise with interaction effects in single walled carbon nanotubes
This is a typically nice piece of experimental work from the Helsinki group. They've measured shot noise in carbon nanotube devices, and while they have seen interesting quantum coherence effects (Fabry-Perot electronic resonances as have been observed in dc conduction in these systems), they do not see any clear signs of Luttinger liquid physics.
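For readers who haven't met shot noise: uncorrelated ("Poissonian") electron transfer gives a current-noise spectral density S_I = 2eI, and deviations from that are summarized by the Fano factor F = S_I/(2eI). A minimal sketch, with made-up current and noise values purely for illustration:

```python
# Fano-factor bookkeeping: F = S_measured / (2 e I).
# F = 1 means uncorrelated transfer; F < 1 means correlations suppress noise.
e = 1.602176634e-19  # electron charge, C

def fano_factor(S_measured, I):
    return S_measured / (2.0 * e * abs(I))

I = 10e-9                    # 10 nA of bias current (illustrative)
S_poisson = 2.0 * e * I      # full shot noise, ~3.2e-27 A^2/Hz
print(fano_factor(S_poisson, I))          # F = 1: uncorrelated transfer
print(fano_factor(0.6 * S_poisson, I))    # F < 1: suppressed noise
```

It's deviations of F from its noninteracting value that would carry the Luttinger liquid signatures the Helsinki group was looking for.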
cond-mat/0702348 - Phillips, Mottness
This is a longer article by Phil Phillips on his ideas about the properties and excitations of Mott insulators - materials that are insulating not because their bands are all full, but because strong electron-electron interactions lock the carriers in place. Interesting ideas explained in a compelling way, though theorists have been arguing about this stuff (in particular, the role or lack thereof of Mott physics in, e.g., the normal state of the high Tc compounds) for some time. Prof. Phillips is also the best dressed scientist I've ever met, bar none.
Tuesday, February 13, 2007
Quantum computing: are we there yet?
(Updated and corrected) As others in the blogging world have pointed out, today is the big day for D-Wave, a privately held, VC-financed Canadian company that plans a public demonstration of a 16-qubit quantum computer. One of the main ideas behind quantum computation is that, because of the way quantum mechanics works, performing a linear number of operations, N, allows you to build up quantum states that can be written as superpositions containing an exponentially large (e.g. 2^N) number of terms. If one can do this and not have decoherence (due to environmental interactions) mess up the superposition states, it is possible to use this property of quantum mechanics to do certain computations much faster than classical computers. Another way to view the power of this quantum parallelism: suppose you want to solve a math problem, and the input is an N-bit binary number. With a generic quantum computer, you can imagine preparing an initial state built out of N qubits that is actually a superposition of all 2^N possible inputs. Your quantum computer could then solve the problem, producing a superposition of all solutions corresponding to those inputs. Readout is the tricky bit, of course, since simple-minded measurement of the final state will only pick out one of those solutions.
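The "linear effort, exponential state" point is easy to illustrate with plain state-vector bookkeeping (nothing quantum is being computed here, just a length-2^N array):

```python
import numpy as np

# Applying one Hadamard gate per qubit (N operations) to |00...0> produces
# an equal superposition of all 2^N basis states.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

N = 4
U = H
for _ in range(N - 1):       # build H (x) H (x) ... (x) H, one factor per qubit
    U = np.kron(U, H)

state = np.zeros(2**N)
state[0] = 1.0               # start in |0000>
state = U @ state            # all 2^N amplitudes now equal 2^(-N/2)
print(state)
```

Of course, the classical simulation above needs memory exponential in N, which is exactly why you'd want the real hardware.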
There have been many ideas proposed for physical implementations of quantum computers. The requirement that decoherence be small is extremely restrictive. With so-called "fault-tolerant" quantum computation, one can beat down that requirement a bit by using additional qubits to do error correction. In the last few years, there has been great progress in using small superconducting systems as quantum mechanical bits (qubits), either thinking about the charge on small "Cooper pair box" metal islands, or persistent currents in superconducting loops with Josephson junctions. One can do a form of quantum computation using NMR, though the number of effective qubits is strongly limited in molecules. There have been proposals to use tunable hyperfine interactions in phosphorus-doped Si to get around that restriction. Some people want to do quantum computation using photons, or through optical manipulations of excitons in semiconductor dots, or directly using individual electron spins in semiconductor nanostructures. The current record (6 qubits) for producing superpositions like the ones I described above, or other related superpositions (8 qubits) has been set using trapped ions.
The D-Wave demo is an attempt to do adiabatic quantum computation. The idea is to formulate a problem such that one can start out with the initial data being represented by the ground state (lowest energy state) of a system of interacting qubits. Then one very gently changes the Hamiltonian of the system such that the system never leaves its instantaneous ground state (that's the adiabatic part), but arranges matters so that the solution to the problem is represented by the ground state of the final Hamiltonian. The main proponent of this approach has been Seth Lloyd. In practice, the D-Wave folks are going to use 16 qubits made out of Nb loops and Josephson junctions (as explained here), and they cool this whole mess (128 filtered leads) down to 5 mK in a dilution refrigerator.
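The scheme can be sketched in a few lines of numerics (this is a toy model, emphatically not D-Wave's hardware or Hamiltonian): interpolate H(s) = (1-s)H0 + sH1 from an easy transverse-field starting point to a diagonal "problem" Hamiltonian whose ground state encodes the answer, and watch the instantaneous ground state along the sweep:

```python
import numpy as np

# Toy adiabatic computation: H0 = -sum_i X_i has an easy ground state;
# H1 is diagonal, with its minimum planted at a chosen bitstring.
N = 3
dim = 2**N
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def x_on(i):
    """Pauli X acting on qubit i out of N."""
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, X if j == i else np.eye(2))
    return out

H0 = -sum(x_on(i) for i in range(N))
cost = np.array([bin(b).count("1") for b in range(dim)], dtype=float)
cost[5] = -1.0                       # plant the answer at bitstring 101
H1 = np.diag(cost)

gaps = []
for s in np.linspace(0.0, 1.0, 11):
    vals, vecs = np.linalg.eigh((1 - s) * H0 + s * H1)
    gaps.append(vals[1] - vals[0])   # gap to the first excited state

winner = int(np.argmax(np.abs(vecs[:, 0])))   # ground state at s = 1
print(winner, min(gaps))  # winner is the planted bitstring, 5
```

The minimum gap along the sweep is the whole ballgame: the adiabatic theorem says the sweep must be slow compared to (gap)^{-2}-type timescales, and for hard problems that gap can become exponentially small.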
There seem to be three big questions here: (1) Is this really quantum computation? It's difficult for me to assess this, as I'm no expert. There seem to be arguments about which problems can really be solved in the adiabatic formulation that's implemented here, and about whether one can actually get significant improvements relative to classical algorithms. (2) Will the demo be fair? The high tech world is no stranger to rigged demos, and this is a particularly black-box affair. One has to trust that the stuff displayed on the screen of the PC controlling the electronics is really being determined by the chip at the bottom of the fridge, and not by some clever software. I'm willing to give them the benefit of the doubt, provided that they let some independent experts play with the system. (3) Why haven't they published everything in the open literature and let outsiders come in to verify that it's all legit? Well, I can't say I really blame them. The paper I linked to up there for their implementation never got into PRL, as far as I can see. I don't see Intel hurrying up to get outside approval for their new gate metallization. If these folks think they can actually get this to work and make a buck at it, more power to them. The truth will out.
Saturday, February 10, 2007
A scientific direction that I think is promising
I haven't written too much about my own research on this blog, mostly because I figure that people who really care about it can read my group homepage or my papers. However, there is one area out there that I think has real promise, and I'd like to get other folks thinking about it, at least in general terms.
Electronic transport measurements in nanoscale systems can be considered a kind of spectroscopy. In particular, when a chunk of conducting material is sufficiently small and relatively weakly coupled to leads (call them a "source" and a "drain", after transistor terminology), conduction can be dominated by one or a few specific quantum states of that material. There has been great work done by many groups over the past 15 years or so, looking at these individual electronic states in a bunch of systems, including metal nanoparticles, patches of doped semiconductor, and semiconductor nanowires and nanocrystals. As neat as these systems are, they're all comparatively simple from the electron-electron interaction point of view. With a few exceptions (like Kondo-based physics), you can pretty much work in a single-particle picture. That is, adding one more electron to these systems doesn't drastically change the spectrum of electronic states - the spectrum itself is mostly unchanged except for the population of the states, one of which has increased by 1.
Many interesting materials exist where strong electronic correlations are more important. For example, the high-Tc superconductors in their normal state are often "bad metals" that are not well described by a picture of weakly interacting electrons. There are similar phases in the heavy fermion compounds. Even magnetite (Fe3O4), a comparatively simple compound, has strong correlation effects: it's not really a metal or a semiconductor; it has a room temperature resistivity in the milliOhm-cm range (say 1000 times higher than Cu or Au), and that resistivity increases with decreasing temperature, but not in a simple way as in a semiconductor.
I think it would be very revealing for transport spectroscopy experiments to be performed on nanostructures made from these strongly correlated materials. This won't be easy for many practical reasons (e.g., stoichiometry can be tough to control in nanomaterials; no one knows how to make many of these systems in nanostructured forms yet), but I'm convinced that there is much to learn in such experiments.
Another claim to fame
See this comic? See how, down at the bottom, it says "This comic courtesy of Jeff from Rice U."? That's my grad student, Jeff, who has been having problems with me walking by and having his devices die mysteriously.
Monday, February 05, 2007
My touch with fame
I can't resist posting a link to this article (NY Times, reg. req.), about two friends of mine from college. If you ever see a joke on The Daily Show that involves pretty serious math or science, there's a good chance it was written by Rob. He had a great one a year or two ago involving Venn diagrams....
Friday, February 02, 2007
This week in PRL (last year in cond-mat)
Real life continues to limit my blogging time. I'll hopefully be posting more often again soon. In the meantime, here's a neat paper that just came out in Phys. Rev. Lett. today, and was actually on the arxiv last year:
cond-mat/0603079 - Matthey et al., Electric field modulation of transition temperature, mobile carrier density and in-plane penetration depth in NdBa2Cu3O(7-delta) thin films
In this work the authors grow (by sputtering) underdoped high-Tc superconducting films on top of a SrTiO3 gate dielectric with an underlying gate electrode. A number of people (e.g. Allen Goldman's group at Minnesota) have played with SrTiO3 as a high-k gate dielectric to do experiments involving large gated charge densities. It's almost a ferroelectric, so it is possible to get an extremely large electric polarization in that material. The reason to do this is that in principle it allows you to tune the carrier density in an overlying material via the field effect: in a properly designed field-effect transistor, applying a potential difference between the gate electrode and the source/drain electrodes capacitively accumulates or depletes charge at the interface between the overlying material and the dielectric. Of course, there's no guarantee that the interface is nice, and that all the gated charge is actually mobile, even at "clean" interfaces between simple materials (Si, SiO2). However, if you can get it to work, you can tune charge density (at least in a thin layer of material) without accompanying changes in disorder that result from chemical doping. Anyway, the authors of this paper have managed to get this approach to work surprisingly well in this essentially 2d (the sample is only 3-4 unit cells thick) high-Tc material, and can electrostatically tune the transition temperature by about a factor of two within a given sample. This has allowed them to do detailed studies of the superconductor-insulator transition that happens as a function of carrier density, without having to worry about variable disorder. (This kind of phase transition, driven by a control parameter rather than temperature, can occur at T=0 and is called a quantum phase transition.) Very nice stuff. They've been working on this for several years, and it's nice to see them succeed. I met Jean-Marc Triscone, the PI, when we were working on this.
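A back-of-the-envelope number shows why a near-ferroelectric dielectric matters here. In a parallel-plate approximation the gated sheet density is n_2D = ε0 εr V/(e d). The dielectric constant, thickness, and voltage below are purely illustrative (SrTiO3's permittivity is huge and strongly field- and temperature-dependent, so treat this as an order-of-magnitude sketch):

```python
# Parallel-plate estimate of the field-effect-induced sheet carrier density.
# eps_r, d, and V are illustrative values, not the paper's parameters.
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
e = 1.602176634e-19      # electron charge, C

def induced_sheet_density(eps_r, d, V):
    """Carriers per m^2 accumulated at the channel/dielectric interface."""
    return eps0 * eps_r * V / (e * d)

n = induced_sheet_density(eps_r=300, d=100e-9, V=10)
print(f"{n:.2e} carriers/m^2 = {n * 1e-4:.2e} /cm^2")
```

Sheet densities in the 10^14/cm^2 range are what you need to make a dent in the carrier concentration of a cuprate a few unit cells thick, and that's far beyond what a conventional SiO2 gate can deliver before breakdown.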