I've been thinking more about explaining what we mean by "insulators", in light of some of the insightful comments. As I'd said, we can think about three major classes of insulators: band insulators, in which a large gap due to single-particle effects (more below) exists in the ladder of electronic states above the highest occupied state; Anderson insulators, in which the highest occupied electronic states are localized in space rather than extending over large distances (localization happens because of disorder and quantum interference); and Mott insulators, in which hitherto neglected electron-electron interactions make the energetic cost of moving electrons prohibitively high.
The idea of an energy gap (a big interval in the ladder of states, with the states below the gap filled and the states above the gap empty) turns out to be a unifying concept that can tie all three of these categories together. In the band insulator case, the states are pretty much single-particle states (that is, the energy of each state is dominated by the kinetic energies of single electrons and their interactions with the ions that supply the electrons). In the Anderson insulator case, the gap is really the difference in energy between the highest occupied state and the nearest extended state (called the mobility edge). In the Mott case, the states in question are many-body states that have a major contribution due to electron-electron interactions. The electron-electron interaction cost associated with moving electrons around is again an energy gap (a Mott gap), in the ladder of many-body (rather than single-particle) states.
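To put a number on what a gap does: the density of carriers thermally excited across a gap of size Delta scales like exp(-Delta/2kT), so the size of the gap relative to the thermal energy is everything. Here's a minimal back-of-the-envelope sketch (textbook gap values, room temperature) comparing a garden-variety semiconductor to a band insulator:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_factor(gap_eV, T=300.0):
    """Boltzmann factor exp(-gap / 2kT) governing the density of
    carriers thermally excited across an energy gap."""
    return np.exp(-gap_eV / (2.0 * k_B * T))

for name, gap in [("Si (semiconductor)", 1.12), ("diamond (band insulator)", 5.5)]:
    print(f"{name}: gap = {gap:4.2f} eV -> exp(-gap/2kT) ~ {activation_factor(gap):.1e}")
```

At room temperature that factor is around 10^-10 for silicon but around 10^-47 for diamond - the difference between a so-so conductor and an excellent insulator.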
I could also turn this around and talk in terms of the local vs. extended character of the highest occupied states (as Peter points out). In the ideal (infinite periodic solid) band insulator case, all (single-particle) electronic states are extended, and it's the particular lattice arrangement and electronic population that determines whether the highest occupied state is far from the nearest unoccupied state. In the Anderson case, quantum interference + disorder leads to the highest occupied states looking like standing waves - localized in space. In the Mott case, it's tricky to try to think about many-body states in terms of projections onto single-particle states, but you can do so, and you again find that the highest relevant states are localized (due, it turns out, to interactions). Like Peter, I also have been meaning to spend more time thinking hard about insulators.
Coming soon: a discussion of "metals".
A blog about condensed matter and nanoscale physics. Why should high energy and astro folks have all the fun?
Sunday, December 28, 2008
Faraday
It's a small world. Just last week I finished reading this book, a very nice biography of Michael Faraday, possibly the greatest experimental physicist ever. Lo and behold, this week there are two long blog postings (here and here) also talking about Faraday. What an impressive scientist. Now I need to find a good bio of Maxwell....
Friday, December 26, 2008
What does it mean for a material to be an "insulator"?
I've been thinking for a while about trying to explain some physics concepts on, well, a slightly more popular level. This is a first pass, focusing on electrical insulators. Feedback is invited; I know a first cut like this won't be perfect for nonscientists.
Very often we care about the electrical properties of materials. Conceptually, we imagine hooking the positive terminal of a battery up to one end of a material, hooking the negative terminal up to the other end, and checking to see if any current is flowing. We broadly lump solids into two groups, those that conduct electricity and those that don't. Materials in the latter category are known as insulators, and it turns out that there are at least three different kinds.
- Band insulators. One useful way of thinking about electrons in solids is to think about the electrons as filling up single-particle states (typically with two electrons per state). This is like what you learn in high school chemistry, where you're taught that there are certain orbitals within atoms that get filled up, two electrons per orbital. Helium has two electrons in the 1s orbital, for example. In solids, there are many, many states, each one with an associated energy cost for being occupied by an electron, and the states are grouped into bands separated by intervals of energy (band gaps) with no states. (Picture a ladder with groups of closely spaced rungs, and each rung has two little divots where marbles (the electrons) can sit.) Now, in clean materials, you can think of some states as corresponding to electrons moving to the left. Some states correspond to electrons moving to the right. In order to get a net flow of electrons when a battery is used to apply a voltage difference across a slab of material, there have to be transitions that, for example, take electrons out of left-moving states and put them into right-moving states, so that more electrons are going one way than the other. For this to happen, there have to be empty states available for the electrons to occupy, and the net energy cost of shifting the electrons around has to be low enough that it's supplied by the battery or by thermal energy. In a band insulator, all of the states in a particular band (usually called the valence band) are filled, and the energetically closest empty states are too far away energetically to be reached. (In the ladder analogy, the next empty rung is waaay far up the ladder.) This is the situation in materials like diamond, quartz, and sapphire. (See the first sketch after this list for a toy calculation of a band gap.)
- Anderson insulators. These are materials where disorder is responsible for insulating behavior. In the ladder analogy above, each rung of the ladder corresponded to what we would call an "extended" state. To get a picture of what this means, consider looking at a smooth, grooved surface, like a freshly plowed field, and filling it partially with water. Each furrow would be an extended state, since on a level field water would extend along the furrow from one end of the field to the other. Now, a disordered system in this analogy would look more like a field pockmarked with hills and holes. Water (representing the electrons) would pool in the low spots rather than forming a continuous line from one end of the field to the other. These local low spots are defects, and the puddles of water correspond to localized states. In the real quantum situation things are a bit more complicated. Because of the wavelike nature of electrons, even weak disorder (shallow dips rather than deep holes in the field) can lead to reflections and interference effects that can cause states to be localized on a big enough "field". Systems like this are insulating (at least at low temperatures) because it takes energy to hop electrons from one puddle to another puddle. For small applied voltages, nothing happens (though clearly if one imagines tilting the whole field enough, all the water will run downhill - this would correspond to applying a large electric field). Examples of this kind of insulating behavior include doped polymer semiconductors. (See the second sketch after this list for a toy calculation of localization.)
- Mott insulators. Notice that nowhere in the discussion of band or Anderson insulators did I say anything at all about the fact that electrons repel each other. Electron-electron interactions were essentially irrelevant to those two ways of having an insulator. To understand Mott insulators, think about trying to pack ping-pong balls closely in a 2d array. The balls form a triangular lattice. Now the repulsion of the electrons is represented by the fact that you can't force two ping-pong balls to occupy the same site in the 2d lattice. Even though you "should" be able to put two balls (electrons) per site, the repulsion of the electrons prevents you from doing so without comparatively great energetic cost (associated with smashing a ping-pong ball). The result: with exactly one ball (electron) per site (a "half-filled band") in this situation dominated by ball-ball interactions ("on-site repulsion"), no balls can move in response to an applied push (an electric field). To get motion (conduction) in this case, one approach is to remove some of the balls (electrons) to create vacancies in the lattice. This can be done via chemical doping. Examples of Mott insulators are some transition metal oxides like V2O3 and the parent compounds of the high temperature superconductors. (See the third sketch after this list for a toy calculation of a Mott gap.)
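Since it can help to see these ideas emerge from an actual calculation, here are three toy numerical sketches (in Python), one per class. First, the band insulator: a one-dimensional tight-binding chain with alternating hopping strengths - a sketch of how a gap opens, not a model of any particular material. The single ladder of states splits into two bands separated by a gap:

```python
import numpy as np

# 1d chain, two atoms per unit cell, alternating hoppings t1 and t2.
# The two bands are E(k) = +/-|t1 + t2*exp(i*k*a)|, with a gap of
# 2*|t1 - t2| opening at the zone boundary.
t1, t2, a = 1.0, 0.6, 1.0   # hoppings (eV) and lattice constant (arbitrary units)
k = np.linspace(-np.pi / a, np.pi / a, 401)
h = t1 + t2 * np.exp(1j * k * a)

print(f"top of lower (valence) band:       {-np.abs(h).min():+.3f} eV")
print(f"bottom of upper (conduction) band: {+np.abs(h).min():+.3f} eV")
print(f"band gap = {2 * np.abs(h).min():.3f} eV  (= 2|t1 - t2|)")
```

With one electron per atom (two per unit cell) the lower band is exactly filled, and that 2|t1 - t2| gap is what the battery has to overcome.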
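Second, the Anderson case: the same kind of chain, now with random site energies. A handy diagnostic is the inverse participation ratio (IPR), roughly one over the number of sites a wavefunction covers, so extended states have IPR ~ 1/N while localized ones have much larger values. The disorder strengths here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, t = 400, 1.0

def ipr_band_center(W):
    """Diagonalize a 1d chain with on-site disorder of strength W and
    return the inverse participation ratio of a state near the band
    center: IPR ~ 1/N if extended, ~ 1/(localization length) if localized."""
    H = np.diag(rng.uniform(-W / 2, W / 2, N))
    H += np.diag(-t * np.ones(N - 1), 1) + np.diag(-t * np.ones(N - 1), -1)
    _, vecs = np.linalg.eigh(H)
    psi = vecs[:, N // 2]
    return np.sum(np.abs(psi) ** 4)

for W in [0.0, 1.0, 4.0]:
    print(f"W/t = {W:3.1f}: IPR = {ipr_band_center(W):.4f}   (1/N = {1 / N:.4f})")
```

Even modest disorder pushes the IPR well above 1/N - the "puddles" of the field analogy.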
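Third, the Mott case. The minimal model here is the Hubbard model: hopping t plus an energy penalty U for putting two electrons on the same site. A two-site version can be diagonalized exactly (a sketch, not a materials calculation); the charge gap grows with U, and the leftover gap at U = 0 is just the finite-size level spacing of the two-site "molecule":

```python
import numpy as np

def charge_gap(U, t=1.0):
    """Charge gap E(N+1) + E(N-1) - 2*E(N) of the half-filled (N = 2)
    two-site Hubbard model, by exact diagonalization."""
    E1 = -t                                    # one electron: bonding orbital
    # two electrons, Sz = 0, in the symmetric basis {double occupancy, singlet}:
    H2 = np.array([[U, -2.0 * t], [-2.0 * t, 0.0]])
    E2 = np.linalg.eigvalsh(H2)[0]
    E3 = U - t                                 # three electrons (particle-hole mirror of E1)
    return E3 + E1 - 2.0 * E2

for U in [0.0, 4.0, 10.0, 20.0]:
    print(f"U/t = {U:5.1f}: charge gap = {charge_gap(U):6.3f} t")
```

For large U the gap is essentially U itself - moving a ball onto an already occupied site costs you the smashed ping-pong ball.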
Tuesday, December 23, 2008
Quantum dots in graphene
The progress in graphene experiments continues. Unsurprisingly, many people are interested in using graphene, a natural 2d electronic system, to make what some would call quantum dots: localized puddles of electrons separated from "bulk" leads by tunnel barriers. Step one in making graphene quantum dots is to etch graphene into a narrow constriction. You can end up with localized electronic states due to disorder (from the edges and the underlying substrate). This Nano Letter shows examples of this situation. Alternately, you can make structures using gates that can define local regions of p-type (local chemical potential biased into the valence band) and n-type (conduction band) conduction, separated by tunnel junctions formed when p and n regions run into each other. That situation is described here. Neat stuff. I would imagine that low temperature transport measurements through such structures in the presence of applied magnetic fields should be very revealing, given the many predictions about exotic magnetic properties in edge states of graphene ribbons.
Saturday, December 20, 2008
The new science and technology team
The President-elect has named his science team. Apart from the fact that these folks are all highly qualified (the new administration will have two Nobel laureates advising it directly), I'm told by a senior colleague well-versed in policy that the real good news is the re-promotion of the science advisor position back to the level of authority that it had prior to 2001, and the reinvigoration of PCAST.
Friday, December 19, 2008
At the risk of giving offense....
I see that New Scientist has an article effectively making fun of the Department of Defense for asking their major scientific advisory panel, JASON, to look into a company's claim that it could use gravity waves as an imaging tool. JASON rightly determined that this was not something to worry about. Seems like a non-story to me. Thank goodness New Scientist has never actively promoted something manifestly scientifically wacky on their front cover, like a microwave cavity that violates conservation of momentum. Oh wait.
Wednesday, December 17, 2008
Outside shot
Since the President-Elect has not yet named his science advisor (though his transition team has named point people, and the nominee for Secretary of Energy has impeccable credentials), I thought I'd point out another crucial way that I would fit in. Sure, I'm under 5' 8" tall, but I can (sometimes) shoot; as some of my college friends can attest, I won a gift certificate in undergrad days by sinking a shot from the top of the key at a women's basketball game halftime promo. (For the humor-impaired: I'm not really in the running to be part of the Obama administration.)
A couple of ACS papers
Two recent papers in the ASAP section of Nano Letters caught my eye.
The first is van der Molen et al., "Light-controlled conductance switching of ordered metal−molecule−metal devices". I've written a blurb about this for the ACS that will eventually show up here. The Schönenberger group has been working for a while on an approach for measuring molecular conductances that is based on networks of metal nanoparticles linked by molecules of interest. The idea is to take metal nanoparticles and form an ordered array of them with neighbors linked by molecules of interest covalently bound to the particle surfaces. The conductance of the array tells you something about the conductance of the particle-molecule-particle junctions. This is simple in concept and extremely challenging in execution, in part because when the metal nanoparticles are made by chemical means they are already coated with some kind of surfactant molecules to keep them suspended in solution. Performing the linking chemistry in a nice way and ending up with an ordered array of particles rather than a blob of goo requires skill and expertise. These folks have now made arrays incorporating molecules that can reversibly change their structure upon exposure to light of the appropriate wavelength. The structural changes show up in photo-driven changes in the array conductance.
The second is Ryu et al., "CMOS-Analogous Wafer-Scale Nanotube-on-Insulator Approach for Submicrometer Devices and Integrated Circuits Using Aligned Nanotubes". Lots of people talk a good game about trying to make large-scale integrated circuits using nanotubes, but only a couple of groups have made serious progress. This paper by Chongwu Zhou's group shows that they can take arrays of tubes (grown by chemical vapor deposition on quartz or sapphire substrates), transfer them to Si wafers via a clever method involving gold, pattern the tubes, put down electrodes for devices, burn out the metallic tubes, and chemically dope the semiconducting tubes for either p- or n-type conduction. They are also working on fault-tolerant architectures to deal with the fact that each transistor (which in this case incorporates an ensemble of tubes) has slightly different characteristics.
Saturday, December 13, 2008
Manhattan and Apollo project metaphors
Relatively regularly these days there are calls for a Manhattan or Apollo style project to address our energy challenges. While this may sound good, it's worth considering what such a project would actually mean. I found this article (pdf) to be helpful. For the Manhattan project, peak annual funding reached about 1% of total federal outlays and 0.4% of GDP. For Apollo, the peak annual numbers were 2.2% of the federal budget and also 0.4% of GDP. What would that mean in today's numbers? Well, the federal budget is on the order of $3T, and the GDP is around $14T. If we use the GDP numbers, such a commitment of resources would be $56B. That's approximately the combined budgets of DOE, NIH, and NSF. Bear in mind that when we did Apollo, it's not like all other efforts stopped, so really pulling something like this off would require a significant allotment of new money.
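For the record, the arithmetic behind those numbers (a trivial sketch):

```python
gdp = 14e12               # rough US GDP, dollars
federal_budget = 3e12     # rough annual federal outlays, dollars

peak_frac_of_gdp = 0.004  # Manhattan and Apollo both peaked near 0.4% of GDP
cost_today = peak_frac_of_gdp * gdp
print(f"0.4% of GDP today: ${cost_today / 1e9:.0f}B per year "
      f"(~{cost_today / federal_budget:.1%} of federal outlays)")
```

That's $56B a year, or roughly 1.9% of current outlays.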
It's worth pointing out that the government has given three times this amount to AIG alone. It's also worth mentioning that communications technologies are vastly superior to those of the past; presumably large collaborations can now be managed more easily, without the need to uproot the top researchers in the world from their homes and relocate them to a single central location.
Anyway, those are at least some real numbers, and they're not crazy or unattainable given a strong lead in national priorities from the top. The real challenge is figuring out what the true goal is. It's fine to say "energy independence" or some target number for renewables, but the global energy challenge is a lot more diffuse and multidimensional than either putting people on the moon or developing the atomic bomb.
Thursday, December 11, 2008
Overkill
I know that the journal is called Nano Letters, but using the "nano" prefix three times in the title of a paper is a little extreme.
Wednesday, December 10, 2008
Energy.
As US readers have probably heard by now, Steve Chu has been selected by President-Elect Obama to be the new Secretary of Energy. I think that this is very good news. He's extremely smart, and he's been involved in pretty much the entire research enterprise, from his days at Bell Labs to running a research group at Stanford to serving as department chair to acting as director at LBL. He's been actively worrying about both basic and applied research and understands the actual energy demands of the country and the world. As the finance folks say, past performance is not necessarily indicative of future results, but this appointment gives me real cause for optimism: it's hard to imagine someone better qualified.
What to do about public perception of science
From the comments on my last post, it's clear that there are a number of science types out there who view the situation as hopeless: the public is poorly informed and has bigger things to worry about; despite having direct evidence every day of the importance of science (ubiquitous computers, lasers, GPS, MRI, DNA testing) the public feels that science is somehow not relevant to their lives and finds even the basic concepts inaccessible; because there is no financial incentive for people to learn about science they won't; etc. While there is a grain of truth to these comments, there is plenty of evidence that is more hopeful. There is no question that certain science topics capture the public imagination: Mars rovers, using genetic technology to cure diseases or identify relationships between individuals or species, the LHC (talk about an effective marketing job, at least to some segment of the population).
Chad Orzel has many good things to say about what scientists can do to help improve the situation, and I won't repeat them here. If you are personally trying to do outreach, I do have one suggestion. Remember that people like a compelling story and interesting characters. The story can be a scientific one (Longitude), a personal one (Genius), or a large-scale drama (The Making of the Atomic Bomb), but it is possible to capture and hold people's attention on scientific subjects. I'm not suggesting that everyone should go out and try to write a popular book on science, but try to remember what makes the best of those books successful.
Friday, December 05, 2008
Do people just not care about science and technology?
CNN seems to think that's the case. As others have pointed out (1, 2), they've decided to close down their science and technology division and fold that reporting back into their general news category. So, what is the message here? That the ad revenue CNN can generate from having good science journalism doesn't justify the expense? (I'm sure that they'll claim quality of reporting won't change, but realistically you're not going to get good science journalism if it's a part-time gig for people who are spending more of their time reporting on Jennifer Aniston's latest romantic travails.) What does it say about our society that we're completely dependent on technology (and therefore science), and yet pursuing science or engineering is viewed as "nerdy" and even accurately reporting on it is judged not worth the expense? Sorry for the buzz-kill of a Friday post, but this really depresses me.
Tuesday, December 02, 2008
This week in cond-mat
Two brief mentions this week.
arxiv:0811.4491 - M. Häfner et al., Anisotropic magnetoresistance in ferromagnetic atomic-sized metal contacts
Over the last few years there's been quite an interest in the effect of magnetic fields on the electrical resistance of atomic-scale contacts between ferromagnetic metals (e.g., Ni). As last year's Nobel Prize in physics highlighted, if one can create nanostructures with very large magnetoresistive effects, there can be immediate applications in magnetic data storage. Recent investigations (for example, here and here) have shown dramatic variations in the magnetoresistance of such contacts from device to device. It's pretty clear that the atomic-scale details of the structures end up mattering (which is pretty cool, but rather discouraging from the technological side). This paper is a theoretical look at this issue, emphasizing that this sensitivity to details likely results from the fact that those last few atoms at the contact are undercoordinated - that is, they have fewer neighbors than atoms in the bulk of the magnetic metal.
arxiv:0812.0048 - Masubuchi et al., Fabrication of graphene nanoribbon by local anodic oxidation lithography using atomic force microscope
I've been waiting for a paper like this to show up. It's been known for about a decade now that one can use the tip of an atomic force microscope to do very local electrochemistry. This has been exploited to make interesting metal/metal oxide structures, designer surface details on Si, and impressive quantum dot systems in GaAs. These folks have done the same on graphene. Nice.
UPDATE: As was pointed out in the comments, I had overlooked two earlier reports of AFM oxidation of graphene, here and here.
You've got to be kidding.
Don't take this as a comment one way or the other on D-wave's actual product claims about building adiabatic quantum computers, but check out the image at right from their presentation at a conference on supercomputing about "disruptive technologies". This may be great for raising [edited in response to comment] enthusiasm, but it's almost self-parody. Remember, true disruptive technologies reshape the world. Examples include fire, agriculture, the wheel, the transistor, the digital computer, the laser, and arguably the internet (if you consider it separately from the computer). Must be nice to claim, on the basis of one real data point and one projected data point, that a product is on a disruptive trajectory (in productivity (arbitrary units) vs. time (arbitrary units)). Wow.
Sunday, November 30, 2008
Words of advice about giving talks
I know that there are many many resources out there on the web about how to give scientific talks (see here (pdf), here, here, and here, for example). Still, I have a few pointers to suggest, based on some recent talks that I've seen.
- Know your audience. If you're giving a seminar, remember that you need to give an introduction that is appropriate for first-year graduate students. If you're giving a colloquium, remember that you're facing a diverse crowd that could include (in a physics department) astrophysicists, biophysicists, high energy physicists, etc., as well as their graduate students. Pitch your talk appropriately. This is (at least) doubly important if you're giving a job talk, and as my postdoctoral mentor used to point out, every talk you give is potentially a job talk.
- Know your time constraints. Don't bring 140 slides for a 50 minute talk, and don't go way over the allotted time. In fact, for an hour talk slot I'd say aim for 50 minutes.
- Avoid jargon; if acronyms are necessary, define them. Just because an acronym or term may be common in your sub-field, don't assume that everyone knows it. Just like most condensed matter people don't know what pseudorapidity means to a high energy physicist, most high energy physicists don't know what ARPES or XAFS are.
- Minimize equations, even if (especially if) you're a theorist. You can always have a backup slide with enough math on it to make people's eyes bleed, if you want. For a main slide in a talk, no one (not even the experts) is going to get much out of a ton of equations. If you have to have equations, have a physical interpretation for them.
- Don't show big scanned text passages from papers. No one is going to read them.
- Explain the big picture. Why is this work interesting? You'd better have an answer that will be intelligible to a non-specialist. Even better, think about how you would explain your work and the point behind it to a sophomore.
- If you're giving a ten-minute talk, don't spend two minutes showing and explaining an outline.
- Avoid technology party fouls. Make sure that your technology works. Make sure that your fonts are readable and correct. Too many colors, too much animation, too many cutesy transitions - all of these things are distracting.
- Make sure to repeat a question back to the questioner. This helps everyone - the semisleeping audience gets to hear what was asked, and you get to make sure that you're actually understanding the question correctly. No one wins when the speaker and questioner are talking at cross-purposes.
Wednesday, November 26, 2008
Hard times.
Wow. I guess those rumors about Harvard getting burned playing hedge fund games are true. They're putting in place a staff hiring freeze and getting ready to cancel faculty searches. Steps like that aren't surprising at, e.g., public universities in states hit hard by the housing crunch or ensuing economic crisis, but for a university with an endowment bigger than the GDP of some countries, this seems rather drastic. Still, it's hard to have too much sympathy for them, since their endowment is coming off a high of nearly $37B.
Tuesday, November 25, 2008
A (serious) modest proposal
Hopefully someone in the vast (ahem.) readership of this blog will pass this along to someone with connections in the Obama transition team. I've already submitted this idea to change.gov, but who knows the rate at which that gets read.
As part of the forthcoming major economic stimulus package, I propose that the Obama administration fully fund the America Competes initiative immediately. If the goal of the package is to stimulate the economy while doing something for the long-term health of the country (e.g., creating jobs while fixing roads, bridges, etc.), then funding basic research via the various agencies is a great thing to do. Think about it: the US spends less as a percentage of GDP than most of the rest of the developed world on science research. Rectifying that to some degree would (a) help the long-term prospects for technological innovation in the US; (b) create jobs; (c) support the goal of developing energy-related technologies; (d) support our universities, many of which are getting hammered by falling state revenues and/or poor endowment returns. Best of all, you could do all of this and it would be a freakin' bargain! You could double the research funding in NSF, NIH, DOE, NASA, and NIST, and not even come close to the amount of money we've already given to AIG. I'm suggesting something far more modest and much less disruptive. Seriously, ask yourself what's better for the long-term health of the country. Cutting basic science to pay for propping up Goldman Sachs is perverse.
Update: If you think that this is a good idea, I encourage you to submit your suggestion here, here, and/or here.
Monday, November 24, 2008
Spin
Many particles possess an internal degree of freedom called "spin" that is an intrinsic amount of angular momentum associated with that particle. The name is meant to evoke a spinning top, which has some rotational angular momentum about its axis when, well, spinning. Electrons have "spin 1/2", meaning that if you pick a convenient axis of reference ("quantization axis") that we'll call z, the z-component of the electron's spin angular momentum is either +1/2 hbar or -1/2 hbar. All too often we treat spin in a rather cavalier way. When people talk about "spintronics", they are interested in using the spin degree of freedom of electrons to store and move information, rather than using the charge as in conventional electronics. One complication is that while charge is strictly conserved, spin is not. If you start off with a population of spin-aligned electrons and inject them into a typical solid, over time the spin orientation of those electrons will become randomized. Now, angular momentum is strictly conserved, so this relaxation of the electron spins must coincide with a transfer of angular momentum to the rest of the solid. Feynman pointed this out (somewhere in vol. III of his lectures on physics) - if you fire a stream of spin-polarized electrons into a brick hanging on the end of a thread, you are really applying a torque to the brick since you are supplying a flow of angular momentum into it, and the thread will twist to apply a balancing torque. Well, Zolfagharkhani et al. have actually gone and done this experiment. They use a ferromagnetic wire to supply a polarized spin current and an extremely sensitive nanomechanical torsional oscillator to measure the resulting torque. Very nice stuff.
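To see why the torsional oscillator has to be so sensitive, here's the back-of-the-envelope version (the 1 nA current below is just an illustrative number): each electron carries hbar/2 of angular momentum, so a fully polarized charge current I delivers a torque of (hbar/2)(I/e).

```python
hbar = 1.0546e-34   # J*s
e = 1.6022e-19      # C

def spin_torque(I_amps, polarization=1.0):
    """Torque (N*m) exerted by absorbing a spin-polarized charge current:
    (electrons per second) * (hbar/2 of angular momentum each) * polarization."""
    return (I_amps / e) * (hbar / 2.0) * polarization

print(f"torque from a fully polarized 1 nA current: {spin_torque(1e-9):.1e} N*m")
```

That's about 3 x 10^-25 N*m - a staggeringly small torque, hence the nanomechanics.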
Thursday, November 20, 2008
Nature Journal Club
My media onslaught continues. This past week I had a Journal Club contribution in Nature, which was fun and a nice opportunity for a wider audience. Here's a version of it before it was (by necessity) trimmed and tweaked, with added hyperlinks....
Tunable charge densities become very large, with super consequences
The electronic properties of materials depend dramatically on the density of mobile charge carriers. One way to tune that density is through doping, the controlled addition of impurity atoms or molecules that either donate or take up an electron from the rest of the material. Unfortunately, doping also leads to charged dopants that can act as scattering sites.
Fortunately, there is a way to change the carrier concentration without doping. In 1925 J. E. Lilienfeld first proposed what is now called the “field effect”, in which the sample material of interest is used as one electrode of a capacitor. When a voltage is applied to the other (“gate”) electrode, equal and opposite charge densities accumulate on the gate and sample surfaces, provided charge can move in the sample without getting trapped. While the density of charge that can be accumulated this way is rather limited by the properties of the insulating spacer between the gate and the sample, the field effect has been incredibly useful in transistors, serving as the basis for modern consumer electronics.
Recently it has become clear that another of Lilienfeld’s inventions, the electrolytic capacitor, holds the key to achieving much higher field effect charge densities. The dramatic consequences of this were made clear by researchers at Tohoku University in Sendai, Japan (K. Ueno et al., Nature Mater. 7, 856-858 (2008)), who used a polymer electrolyte to achieve gated charge densities at a SrTiO3 surface sufficiently large to produce superconductivity. While superconductivity had been observed previously in highly doped SrTiO3, this new approach allows the exploration of the 2d superconducting transition without the disorder inherent in doping.
The most exciting aspect of this work is that this approach, using mobile ions in an electrolyte for gating, can reach charge densities approaching those in chemically doped, strongly correlated materials such as the high temperature superconductors. As an added bonus, this approach should also be very flexible, not needing special substrates. Tuning the electronic density in strongly correlated materials without the associated pain of chemical doping would, indeed, be super.
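For a sense of scale for the conventional field effect, here's a sketch of the induced sheet density n = CV/e for a parallel-plate gate (the oxide thickness and voltage are illustrative numbers):

```python
eps0 = 8.854e-12   # vacuum permittivity, F/m
e = 1.602e-19      # electron charge, C

def sheet_density_per_cm2(eps_r, thickness_m, volts):
    """Field-effect sheet carrier density for a parallel-plate gate:
    n = (capacitance per area) * V / e."""
    n_per_m2 = (eps_r * eps0 / thickness_m) * volts / e
    return n_per_m2 / 1e4   # convert per m^2 -> per cm^2

n = sheet_density_per_cm2(eps_r=3.9, thickness_m=100e-9, volts=1.0)  # 100 nm SiO2
print(f"n ~ {n:.1e} carriers/cm^2")
```

That's about 2 x 10^11/cm^2. The point of electrolyte gating is that the ionic double layer acts like a capacitor with a spacing of around a nanometer, so the attainable densities are orders of magnitude higher.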
Tuesday, November 18, 2008
This week in cond-mat
One paper today in the arxiv:
arxiv:0811.2914 - Zwanenburg et al., Spin states of the first four holes in a silicon nanowire quantum dot
This is another typically exquisite paper by the Kouwenhoven group at Delft, in collaboration with Charlie Lieber at Harvard. The Harvard folks have grown a Si wire segment in the middle of a long NiSi wire. The NiSi ends act as source and drain electrodes for conduction measurements, and the Si segment acts as a quantum dot, with the underlying substrate acting as a gate electrode. As usual, the small size of the Si segment leads to a discrete level spectrum, and the weak electronic coupling of the Si segment to the NiSi combined with the small size of the Si segment results in strong charging effects (Coulomb blockade, which I'll explain at length for nonexperts sometime soon). By measuring very carefully at low temperatures, the Delft team can see, in the conductance data as a function of source-drain voltage and gate voltage, the energy level spectrum of the dot. By looking at the spectrum as a function of magnetic field, they can deduce the spin states of the ground and excited levels of the dot for each value of dot charge. That's cute, but the part that I found most interesting was the careful measurement of excited states of the empty dot. The inelastic excitations that they see are not electronic in nature - they're phonons. They have been able to see evidence for the launching (via inelastic tunneling) of quantized acoustic vibrations. Figure 5 is particularly nice.
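Since I promised a Coulomb blockade post, here's a preview of the energy scale involved: adding one electron to an island of capacitance C costs e^2/2C. Treating the dot as an isolated sphere is crude (a real dot's capacitance to its leads and gate is different), but it sets the scale:

```python
import math

eps0, e, kB = 8.854e-12, 1.602e-19, 1.381e-23

def charging_energy_meV(radius_m):
    """E_C = e^2/(2C) for an isolated conducting sphere, C = 4*pi*eps0*r."""
    C = 4.0 * math.pi * eps0 * radius_m
    return (e**2 / (2.0 * C)) / e * 1e3   # convert J -> meV

for r_nm in [5, 10, 50]:
    print(f"r = {r_nm:2d} nm: E_C ~ {charging_energy_meV(r_nm * 1e-9):6.1f} meV "
          f"(vs. kT at 4.2 K ~ {kB * 4.2 / e * 1e3:.2f} meV)")
```

Blockade physics is visible when E_C greatly exceeds kT, which is why small dots and low temperatures go together.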
Sunday, November 16, 2008
Workshop on new iron arsenide superconductors
This weekend is a big workshop at the University of Maryland on the new iron arsenide high temperature superconductors. Since it's not really my area, I didn't go. Anyone want to give a little update? Any cool news?
Tuesday, November 11, 2008
Poor Doug's Almanack
Welcome, readers of Discover Magazine! Thanks for coming by, and I hope that you find the discussion here interesting. The historical target audience of this blog has been undergrads, grad students, and faculty interested in condensed matter (solid state) physics and nanoscience. The readership also includes some science journalists and other scientific/engineering professionals. I would like very much to reach a more general lay audience as well, since I think we condensed matter types historically have been pretty lousy at explaining the usefulness and intellectual richness of our discipline. Anyway, thanks again.
(By the way, I don't compare in any serious way with Ben Franklin - that was a bit of hyperbole from Discover that I didn't know was coming. Fun science fact: Franklin's to blame that the electron charge is defined to be negative, leading to the unfortunate annoyance that current flow and electron flow point in opposite directions. He had a 50/50 chance, and in hindsight his choice of definition could've been better.)
Sunday, November 09, 2008
This week in cond-mat
This week the subject is boundary conditions. When we teach about statistical physics (as I am this semester), we often need to count allowed states of quantum particles or waves. The standard approach is to show how boundary conditions (for example, the idea that the tangential electric field has to go to zero at the walls of a conducting cavity) lead to restrictions on the wavelengths allowed. Boundary conditions = discrete list of allowed wavelengths. We then count up those allowed modes, converting the sum to an integral if we have to count many. The integrand is the density of states. One remarkable feature crops up when doing this for confined quantum particles: the resulting density of states is insensitive to the exact choice of boundary conditions. Hard wall boundary conditions (all particles bounce off the walls - no probability for finding the particle at or beyond the walls) and periodic boundary conditions (particles that leave one side of the system reappear on the other side, as in Asteroids) give the same density of states. The statistical physics in a big system is then usually relatively insensitive to the boundaries.
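You can check this insensitivity directly by counting 1d free-particle modes below a cutoff wavevector for both choices of boundary condition (a minimal sketch):

```python
import numpy as np

def count_states(k_max, L, bc):
    """Number of free-particle states on a segment of length L with
    wavevector magnitude at most k_max."""
    if bc == "hard":       # standing waves: k_n = n*pi/L, n = 1, 2, 3, ...
        return int(np.floor(k_max * L / np.pi))
    else:                  # periodic: running waves k_n = 2*pi*n/L, n any integer
        return 2 * int(np.floor(k_max * L / (2.0 * np.pi))) + 1

for L in [10, 100, 10000]:
    print(f"L = {L:6d}: hard-wall {count_states(1.0, L, 'hard'):5d} states, "
          f"periodic {count_states(1.0, L, 'periodic'):5d} states")
```

Hard walls give a k-spacing of pi/L but only positive k; periodic boundaries give 2*pi/L with both signs allowed. The counts agree up to corrections that become negligible as L grows, which is why the bulk density of states doesn't care.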
There are a couple of physical systems where we can really test the differences between the two types of boundary conditions.
arxiv:0811.1124 - Pfeffer and Zawadzki, "Electrons in superlattices: birth of the crystal momentum"
This paper considers semiconductor superlattices of various sizes. These structures are multilayers of nanoscale thickness semiconductor films that can be engineered with exquisite precision. The authors consider how the finite superlattice result (nonperiodic potential; effective hardwall boundaries) evolves toward the infinite superlattice result (immunity to details of boundary conditions). Very pedagogical.
arxiv:0811.0565, 0811.0676, 0811.0694 all concern themselves with graphene that has been etched laterally into finite strips. Now, we already have a laboratory example of graphene with periodic boundary conditions: the carbon nanotube, which is basically a graphene sheet rolled up into a cylinder. Depending on how the rolling is done, the nanotube can be metallic or semiconducting. In general, the larger the diameter of a semiconducting nanotube, the smaller the bandgap. This makes sense, since the infinite diameter limit would just be infinite 2d graphene again, which has no band gap. So, the question naturally arises, if we could cut graphene into narrow strips (hardwall boundary conditions transverse to the strip direction), would these strips have an electronic structure resembling that of nanotubes (periodic boundary conditions transverse to the tube direction), including a bandgap? The experimental answer is yes, etched graphene strips do act as though they have a bandgap, though it's clear that disorder from the etching process (and from having the strips supported by an underlying substrate) can dominate the electronic properties.
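For rough numbers, here's the standard zone-folding estimate for semiconducting nanotubes, E_g ~ 2*gamma0*a_CC/d (curvature corrections are ignored, and ideal ribbons are expected to show a similar inverse scaling with width):

```python
gamma0 = 2.7   # graphene nearest-neighbor hopping, eV (a commonly used value)
a_cc = 0.142   # carbon-carbon bond length, nm

def nanotube_gap_eV(d_nm):
    """Zone-folding band gap of a semiconducting nanotube of diameter d."""
    return 2.0 * gamma0 * a_cc / d_nm

for d in [0.8, 1.5, 3.0, 10.0]:
    print(f"d = {d:5.1f} nm: E_g ~ {nanotube_gap_eV(d):.2f} eV")
```

The gap dies off as 1/d, consistent with the infinite-diameter limit being gapless graphene.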
Thursday, November 06, 2008
Two new papers in Nano Letters
Two recent papers in Nano Letters caught my eye.
Kuemmeth et al., "Measurement of Discrete Energy-Level Spectra in Individual Chemically Synthesized Gold Nanoparticles"
One of the first things that I try to teach students in my nano courses is the influence of nanoscale confinement on the electronic properties of metals. We learn in high school chemistry about the discrete orbitals in atoms and small molecules, and how we can think about filling up those orbitals. The same basic idea works reasonably well in larger systems, but the energy difference between subsequent levels becomes much smaller as system size increases. In bulk metals the single-particle levels are so close together as to be almost continuous. In nanoparticles at low temperatures, however, the spacing is large enough compared to the available thermal energy that one can do experiments which probe this discrete spectrum. Now, in principle the detailed spectrum depends on the exact arrangement of metal atoms, but in practice one can look at the statistical distribution of levels and compare that distribution with a theory (in this case, "random matrix theory") that averages in some way over possible configurations. This paper is a beautiful example of fabrication skill and measurement technique. There are no big physics surprises here, but the data are extremely pretty.
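Here's the free-electron estimate of the key energy scale (gold parameters, a spherical particle - a rough sketch, not the paper's analysis): the mean level spacing is delta = 1/(g(E_F)*V), with g(E_F) = 3n/2E_F:

```python
import math

kB_eV = 8.617e-5   # Boltzmann constant, eV/K
E_F = 5.5          # gold Fermi energy, eV
n = 5.9e28         # gold conduction-electron density, per m^3

def level_spacing_meV(radius_m):
    """Mean single-particle level spacing of a metal nanoparticle in the
    free-electron picture: delta = 1 / (g(E_F) * volume)."""
    V = (4.0 / 3.0) * math.pi * radius_m**3
    g = 3.0 * n / (2.0 * E_F)          # states per eV per m^3, both spins
    return 1e3 / (g * V)

for r_nm in [2, 5, 20]:
    print(f"r = {r_nm:2d} nm: delta ~ {level_spacing_meV(r_nm * 1e-9):8.4f} meV "
          f"(kT at 0.1 K ~ {kB_eV * 0.1 * 1e3:.4f} meV)")
```

A few-nm particle at dilution-refrigerator temperatures easily satisfies delta >> kT, which is what makes the discrete spectrum resolvable.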
Xiao et al., "Flexible, stretchable, transparent carbon nanotube thin film loudspeakers"
This is just damned cool. The authors take very thin films of carbon nanotubes and are able to use them as speakers even without making the films vibrate directly. The idea is very simple: convert the acoustic signal into current (just as you would to send it through an ordinary speaker) and run that current through the film. Because of the electrical resistance of the film (low, but nonzero), the film gets hot when the current is at a maximum. Because the film is so impressively low-mass, it has a tiny heat capacity, meaning that small energy inputs result in whopping big temperature changes. The film locally heats the air adjacent to the film surface, launching acoustic waves. Voila. A speaker with no moving parts. This is so simple that it may well see real practical implementation. Very clever.
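A crude estimate of why the tiny heat capacity matters (the numbers here are invented for illustration, and heat exchange with the surrounding air - which is what actually launches the sound - is left out):

```python
import math

def temp_swing_K(power_per_area, areal_heat_capacity, freq_hz):
    """Amplitude of a film's temperature oscillation when an AC power
    density is dissipated in it, ignoring losses: dT ~ P_A/(2*pi*f*C_A)."""
    return power_per_area / (2.0 * math.pi * freq_hz * areal_heat_capacity)

# 100 W/m^2 of audio-frequency Joule heating at 1 kHz:
for name, C_A in [("metal foil", 1e2), ("ultrathin nanotube film", 1e-2)]:
    print(f"{name}: C_A = {C_A:g} J/(m^2 K) -> dT ~ {temp_swing_K(100.0, C_A, 1e3):.1e} K")
```

Four orders of magnitude less areal heat capacity means four orders of magnitude larger temperature swing for the same drive.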
Wednesday, November 05, 2008
To quell speculation....
Yes, if asked, I would serve as President Obama's science advisor. (Come on - you would, too, right? Of course, it's easy for me to joke about this since it's about as probable as me being asked to serve as head of the National Science Board.)
Monday, November 03, 2008
This one's easy.
Has Bush been good for science? I agree with ZapperZ: No. How Marburger can argue that research funding has kept pace with inflation is beyond me, given the last three years of continuing resolutions, unless one (a) fudges the definition of research to include a lot of military development, and (b) fudges the definition of inflation to ignore things like food, fuel, and health care costs.
Could Bush have been even worse? Yes.
Statistical physics
This fall I'm teaching Statistical and Thermal Physics, a senior (in the most common Rice physics curriculum, anyway) undergraduate course, and once again I'm struck by the power and profundity of the material. Rather like quantum, stat mech can be a difficult course to teach and to take; from the student perspective, you're learning a new vocabulary, a new physical intuition, and some new mathematical tools. Some of the concepts are rather slippery and may be difficult to absorb at a first hearing. Still, the subject matter is some of the best intellectual content in physics: you learn about some of the reasons for the "demise" of classical physics (the Ultraviolet Catastrophe; the heat capacity problem), major foundational issues (macroscopic irreversibility and the arrow of time; the precise issue where quantum mechanics and general relativity are at odds (or, as I like to call it, "Ultraviolet Catastrophe II: Electric Boogaloo")), and the meat of some of the hottest topics in current physics (Fermi gases and their properties; Bose Einstein condensation). Beyond all that you also get practical, useful topics like thermodynamic cycles, how engines and refrigerators work, chemical equilibria, and an intro to phase transitions. Someone should write a popular book about some of this, along the lines of Feynman's QED. If only there were enough hours in the day (and my nano book was further along). Anyway, I bring this up because over time I'm thinking about doing a series of blog posts at a popular level about some of these topics. We'll see how it goes.
Sunday, November 02, 2008
Ahh, Texas, again.
Stories like this one depress me. Is it really any wonder that our state has a difficult time attracting large high-tech companies from, e.g., California and Illinois, even though corporate taxation policies are very friendly here?
Thursday, October 30, 2008
Local news
You can learn all sorts of things reading your local paper. For example, I read yesterday that Rice and Baylor College of Medicine are talking about a possible merger. Unsurprisingly, everyone on campus already knew about that, but it's interesting to see the reporter's take on things, including various quotes from unnamed professors.
Today, I read about this. For those who didn't or can't click the link, it's an article about plagiarism. A professor at Texas Southern University's physics department had apparently asked a University of Houston physics professor for an example of a successful grant a couple of years ago. The UH prof gave him a copy of a grant that had been funded by DARPA in 2002-3. The TSU prof then allegedly sent in the same proposal, word for word (though editing out references to the UH prof's work) to the Army Research Lab, which then funded it to the tune of $800K. Lovely. TSU refused the grant, and the investigation is "ongoing".
Tuesday, October 28, 2008
Boo hoo.
So, Merrill Lynch advisors don't like their retention offers. Maybe I'll go into the lab and fab a nano-violin to play the world's saddest song for them.
Sunday, October 26, 2008
This week in cond-mat
While there are several interesting recent papers, one in particular touches on a nice piece of physics that I'd like to describe.
arxiv:0810.4384 - Bluhm et al., Persistent currents in normal metal rings
One of the remarkable properties that makes superconductors "super" is their capability to sustain currents that flow in closed loops indefinitely, without dissipation. We exploit this all the time in, for example, MRI magnets. What many people do not realize, however, is that normal metals (e.g., gold, silver) can also sustain persistent currents at very low temperatures, at least over length scales comparable to the coherence length. Think of electrons as waves for a minute. The coherence length is the distance that electrons can propagate in a metal and still have a well-defined phase (that is, the crests and troughs of the electron waves have a reproducible location relative to some initial point). At non-zero temperatures, inelastic interactions between the electrons and other degrees of freedom (including other electrons) fuzz out this phase relationship, suppressing quantum interference effects over a characteristic distance scale (the coherence length). Anyway, suppose you have a metal loop smaller than the coherence length. The phase of the electronic wave when you do one complete lap around the loop must increase (or decrease) by an integer multiple of 2pi (that is, there must be an integer multiple of wavelengths going around the loop) for the wave picture to make sense. The gradient of that phase is related to the current. It turns out that as T goes to zero, the allowed electronic states of such a loop have to have nonzero currents so that this phase winding picture holds. These currents also lead to a magnetic response - tiny current loops are magnetic dipoles that can be detected, and threading external magnetic flux through these loops changes the persistent currents (via the Aharonov-Bohm effect). These persistent currents have been measured before (see here, for a great example). However, there has been ongoing controversy concerning the magnitude and sign of these currents. In this experiment, Kam Moler's group at Stanford has used the incredibly sensitive scanning SQUID microscope to look at this phenomenon, one ring at a time, as a function of temperature and external magnetic field. This is a very pretty experiment probing some extremely finicky physics.
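For the curious, here's the standard textbook toy model behind all this (an ideal one-dimensional ring, which is emphatically not the analysis in the paper; the ring radius and electron number are invented). Each level carries a current, and filling the lowest levels leaves a net current periodic in the flux quantum:

# Textbook toy model: spinless electrons on an ideal 1D ring threaded by
# flux Phi. Level n has E_n = E0*(n - Phi/Phi0)^2 and carries current
# I_n = -dE_n/dPhi; filling the N lowest levels at T = 0 gives the total
# persistent current. Radius and electron number below are made up.
import numpy as np

hbar = 1.054571817e-34          # J s
h = 6.62607015e-34              # J s
e = 1.602176634e-19             # C
m = 9.1093837015e-31            # electron mass (kg)
R = 0.5e-6                      # assumed ring radius (m)
N = 101                         # assumed number of electrons on the ring

E0 = hbar**2 / (2 * m * R**2)   # level spacing scale of the ring
Phi0 = h / e                    # flux periodicity for normal metals

def persistent_current(phi_frac, n_max=400):
    """Total current of the N lowest levels at flux Phi = phi_frac * Phi0."""
    n = np.arange(-n_max, n_max + 1)
    E = E0 * (n - phi_frac)**2
    I = (2 * E0 / Phi0) * (n - phi_frac)   # I_n = -dE_n/dPhi
    occupied = np.argsort(E)[:N]           # fill the lowest N levels (T = 0)
    return I[occupied].sum()

for phi in [0.0, 0.1, 0.25, 0.4]:
    print(f"Phi/Phi0 = {phi:4.2f}: I = {persistent_current(phi)*1e9:7.3f} nA")

The current vanishes at zero flux, is Phi0-periodic, and comes out at the nanoamp scale for a micron-sized ring - the right ballpark, though reconciling measured magnitudes and signs with theory is exactly the controversy mentioned above.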
Wednesday, October 22, 2008
Voted.
I did early voting this morning here at a nearby supermarket. The line stretched halfway around the store - there must've been a hundred people in front of me, and I showed up right when they opened the polls. The line moved fairly quickly, but its length seemed unchanged by the time I voted half an hour later. I know that Houston is a big city, but if this is any indication of overall turnout, the number of voters this year is going to be enormous.
Tuesday, October 21, 2008
Fortuitous physics
Every now and then you stumble across a piece of physics, some detail about how the universe works, that is extremely lucky in some sense. For example, it's very convenient that Si is a great semiconductor, and at the same time SiO2 is an incredibly good insulator - in terms of the electric field that it can sustain before breakdown, SiO2 is about as good as it gets. Another example is GaAs. While it doesn't have a nice oxide, it does have some incredibly nice crystal growth properties. I've been told that through some fortunate happenstance of growth kinetics, you can do growth (e.g., in a molecular beam epitaxy system - a glorified evaporator) under nearly arbitrarily As-rich conditions and still end up with stoichiometric GaAs. Somehow the excess As just doesn't stick around. A third example is the phase diagram of 3He/4He mixtures. Mixtures of the two helium isotopes phase separate at low temperatures (T below 600 mK) into a 3He-rich phase that's almost pure, and a dilute phase with about 6% 3He in 4He. If you pump the 3He atoms out of the dilute phase, more 3He atoms are pulled from the concentrated phase to maintain the 6% concentration in the dilute phase. There is a latent heat associated with removing a 3He atom from the concentrated phase. The result is a form of evaporative cooling: the temperature of the concentrated phase decreases as the pumping continues, and unlike real evaporative cooling, the effective vapor pressure of the 3He in the dilute phase remains fixed even as T approaches zero. This happy piece of physics is the basis for the dilution refrigerator, which lets us cool materials down to within a few mK of absolute zero.
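To put numbers on that last point, here's a minimal sketch using the standard approximation for the mixing chamber cooling power (the 84*n3*T^2 formula, as in Pobell's book); the circulation rate and the crude exponential model for ordinary evaporative cooling are my own illustrative assumptions:

# Minimal sketch: mixing chamber cooling power Q ~ 84 * n3 * T^2 (watts,
# with the 3He circulation rate n3 in mol/s - the standard approximation),
# versus ordinary evaporative cooling, which dies off exponentially as the
# vapor pressure collapses like exp(-L/kT). n3 and the evaporation model
# are illustrative assumptions.
import numpy as np

n3 = 100e-6              # assumed 3He circulation rate (mol/s)
L_over_k = 2.5           # effective latent heat scale in kelvin (rough)

def Q_dilution(T):
    return 84.0 * n3 * T**2                      # watts

def Q_evaporation(T, Q_ref=1e-4, T_ref=1.0):
    # normalized to Q_ref at T_ref; only the T dependence matters here
    return Q_ref * np.exp(L_over_k / T_ref - L_over_k / T)

for T in [0.3, 0.1, 0.03, 0.01]:
    print(f"T = {1e3*T:5.0f} mK: dilution ~ {Q_dilution(T):.1e} W, "
          f"evaporation ~ {Q_evaporation(T):.1e} W")

The power-law versus exponential difference is the whole story: at 10 mK the dilution fridge still delivers close to a microwatt, while evaporative cooling is dead many orders of magnitude earlier.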
Any suggestions for other fortunate, useful pieces of physics?
Sunday, October 19, 2008
Faculty searches, 2008 version
As I did last year, I'm revising a past post of mine about the faculty search process. I know that the old posts are still find-able via google, but it never hurts to present this topic again at this time of the year.
Here are the steps in the faculty search process:
- The search gets authorized. This is a big step - it determines what the position is, exactly: junior only vs. junior or senior; a new faculty line vs. a replacement vs. a bridging position (i.e. we'll hire now, and when X retires in three years, we won't look for a replacement then). The main challenges are two-fold: (1) Ideally the department has some strategic plan in place to determine the area that they'd like to fill. Note that not all departments do this - occasionally you'll see a very general ad out there that basically says, "ABC University Dept. of Physics is authorized to search for a tenure-track position in, umm, physics. We want to hire the smartest person that we can, regardless of subject area." The danger with this is that there may actually be divisions within the department about where the position should go, and these divisions can play out in a process where different factions within the department veto each other. This is pretty rare, but not unheard of. (2) The university needs to have the resources in place to make a hire. As the economy slides and state budgets are hammered, this can become more challenging. I know anecdotally of public universities having to cancel searches even after the authorization if the budget cuts get too severe. A well-run university will be able to make these judgments with some lead time and not have to backtrack.
- The search committee gets put together. In my dept., the chair asks people to serve. If the search is in condensed matter, for example, there will be several condensed matter people on the committee, as well as representation from the other major groups in the department, and one knowledgeable person from outside the department (in chemistry or ECE, for example). The chair (or co-chairs) of the committee meets with the committee, or at least with the members in the focus area, and comes up with draft text for the ad.
- The ad gets placed, and canvassing of lots of people who might know promising candidates begins. A special effort is made to make sure that all qualified women and underrepresented minority candidates know about the position and are asked to apply (the APS has mailing lists to help with this, and direct recommendations are always appreciated - this is in the search plan). Generally, the ad really does list what the department is interested in. It's a huge waste of everyone's time to have an ad that draws a large number of inappropriate (i.e. don't fit the dept.'s needs) applicants. The exception to this is the generic ad like the type I mentioned above. Historically MIT and Berkeley run the same ad every year, trolling for talent. They seem to do just fine. The other exception is when a university already knows who they want to get for a senior position, and writes an ad so narrow that only one person is really qualified. I've never seen this personally, but I've heard anecdotes.
- In the meantime, a search plan is formulated and approved by the dean. The plan details how the search will work, what the timeline is, etc. This plan is largely a checklist to make sure that we follow all the right procedures and don't screw anything up. It also brings to the fore the importance of "beating the bushes" - see above. A couple of people on the search committee will be particularly in charge of oversight on affirmative action/equal opportunity issues.
- The dean meets with the committee and we go over the plan, including a refresher for everyone on what is or is not appropriate for discussion in an interview (for an obvious example, you can't ask about someone's religion, or their marital status).
- Applications come in and are sorted; rec letters are collated. Each candidate has a folder. Every year when I post this, someone argues that it's ridiculous to make references write letters, and that the committee should do a sort first and ask for letters later. I understand this perspective, but I largely disagree. Letters can contain an enormous amount of information, and sometimes it is possible to identify outstanding candidates due to input from the letters that might otherwise be missed. (For example, suppose someone's got an incredible piece of postdoctoral work about to come out that hasn't been published yet. It carries more weight for letters to highlight this, since the candidate isn't exactly unbiased about their own forthcoming publications.)
- The committee begins to review the applications. Generally the members of the committee who are from the target discipline do a first pass, to at least weed out the inevitable applications from people who are not qualified according to the ad (i.e. no PhD; senior people wanting a senior position even though the ad is explicitly for a junior slot; people with research interests or expertise in the wrong area). Applications are roughly rated by everyone into a top, middle, and bottom category. Each committee member comes up with their own ratings, so there is naturally some variability from person to person. Some people are "harsh graders". Some value high impact publications more than numbers of papers. Others place more of an emphasis on the research plan, the teaching statement, or the rec letters. Yes, people do value the teaching statement - we wouldn't waste everyone's time with it if we didn't care. Interestingly, often (not always) the people who are the strongest researchers also have very good ideas and actually care about teaching. This shouldn't be that surprising. Creative people can want to express their creativity in the classroom as well as the lab.
- Once all the folders have been reviewed and rated, a relatively short list (say 20-25 or so out of 120 applications) is arrived at, and the committee meets to hash that down to, in the end, five or so to invite for interviews. In my experience, this happens by consensus, with the target discipline members having a bit more sway in practice since they know the area and can appreciate subtleties - the feasibility and originality of the proposed research, the calibration of the letter writers (are they first-rate folks? Do they always claim every candidate is the best postdoc they've ever seen?). I'm not kidding about consensus; I can't recall a case where there really was a big, hard argument within the committee. I know I've been lucky in this respect, and that other institutions can be much more feisty. The best, meaning most useful, letters, by the way, are the ones that say things like "This candidate is very much like CCC and DDD were at this stage in their careers." Real comparisons like that are much more helpful than "The candidate is bright, creative, and a good communicator." Regarding research plans, the best ones (for me, anyway) give a good sense of near-term plans, medium-term ideas, and the long-term big picture, all while being relatively brief and written so that a general committee member can understand much of it (why the work is important, what is new) without being an expert in the target field. It's also good to know that, at least at my university, if we come across an applicant that doesn't really fit our needs, but meshes well with an open search in another department, we send over the file. This, like the consensus stuff above, is a benefit of good, nonpathological communication within the department and between departments.
Tips for candidates:
- Don't wrap your self-worth up in this any more than is unavoidable. It's a game of small numbers, and who gets interviewed where can easily be dominated by factors extrinsic to the candidates - what a department's pressing needs are, what the demographics of a subdiscipline are like, etc. Every candidate takes job searches personally to some degree because of our culture, but don't feel like this is some evaluation of you as a human being.
- Don't automatically limit your job search because of geography unless you have some overwhelming personal reasons. The Incoherent Ponderer posted about this recently. I almost didn't apply to Rice because neither my wife nor I were particularly thrilled about Texas, despite the fact that neither of us had ever actually visited the place. Limiting my search that way would've been a really poor decision.
- Really read the ads carefully and make sure that you don't leave anything out. If a place asks for a teaching statement, put some real thought into what you say - they want to see that you have actually given this some thought, or they wouldn't have asked for it.
- Research statements are challenging because you need to appeal to both the specialists on the committee and the people who are way outside your area. My own research statement back in the day was around three pages. If you want to write a lot more, I recommend having a brief (2-3 page) summary at the beginning followed by more details for the specialists. It's good to identify near-term, mid-range, and long-term goals - you need to think about those timescales anyway. Don't get bogged down in specific technique details unless they're essential. You need committee members to come away from the proposal knowing "These are the Scientific Questions I'm trying to answer", not just "These are the kinds of techniques I know". I know that some people may think that research statements are more of an issue for experimentalists, since the statements indicate a lot about lab and equipment needs. Believe me - research statements are important for all candidates. Committee members need to know where you're coming from and what you want to do - what kinds of problems interest you and why. The committee also wants to see that you actually plan ahead. These days it's extremely hard to be successful in academia by "winging it" in terms of your research program.
- Be realistic about what undergrads, grad students, and postdocs are each capable of doing. If you're applying for a job at a four-year college, don't propose to do work that would require an experienced grad student putting in 60 hours a week.
- Even if they don't ask for it, you need to think about what resources you'll need to accomplish your research goals. This includes equipment for your lab as well as space and shared facilities. Talk to colleagues and get a sense of what the going rate is for start-up in your area. Remember that four-year colleges do not have the resources of major research universities. Start-up packages at a four-year college are likely to be 1/4 of what they would be at a big research school (though there are occasional exceptions). Don't shave pennies - this is the one prime chance you get to ask for stuff! On the other hand, don't make unreasonable requests. No one is going to give a junior person a start-up package comparable to a mid-career scientist.
- Pick letter-writers intelligently. Actually check with them that they're willing to write you a nice letter - it's polite and it's common sense. (I should point out that truly negative letters are very rare.) Beyond the obvious two (thesis advisor, postdoctoral mentor), it can sometimes be tough finding an additional person who can really say something about your research or teaching abilities. Sometimes you can ask those two for advice about this. Make sure your letter-writers know the deadlines and the addresses. The more you can do to make life easier for your letter writers, the better.
Monday, October 13, 2008
This week in cond-mat
Three interesting papers from this past week....
arxiv:0810.1308 - Chen et al., Non-equilibrium tunneling spectroscopy in carbon nanotubes
One persistent challenge in condensed matter physics is the fact that we are often interested in physical quantities (e.g., the entropy of some system) that are extremely challenging or impossible to measure directly (e.g., you can't call up Agilent and buy an entropy measuring box). For example, in nanoscale systems, particularly those with strong electron-electron interactions, we would love to be able to study the energy transfer in nonequilibrium situations. As a thought experiment, this might involve injecting particular electrons at known energies and following them through the nanostructure, watching them scatter. Well, we can't do that, but we can do something close. Using a superconducting electrode as a probe (because it has a particularly sharp feature in its density of states near the edge of the superconducting gap), we can perform tunneling spectroscopy on a system of interest. This assumes that the measured tunneling current is proportional to the product of the tunneling probe's density of states and that of the system of interest. It also assumes that the tunneling process is weak enough that it doesn't strongly influence the system. From the resulting data it is possible to back out what the distribution of electrons as a function of energy is in the system. Here a collaboration between UIUC and Michigan State applies this technique to study electrons moving in carbon nanotubes. I need to think a bit about the technical details, but the experimental data look very nice and quite intriguing.
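For those who want to see the machinery, here's a schematic sketch of the generic superconducting-probe tunneling analysis (the standard textbook formalism, not the authors' actual procedure; the gap, broadening, and the toy "two-step" nonequilibrium distribution are all invented for illustration):

# Schematic SIN tunneling sketch (generic formalism, not the authors'
# code). The current into a superconducting probe is a convolution of the
# BCS density of states with the sample's electron distribution,
#   I(V) ~ Int dE N_s(E) [f_sample(E - eV) - f_probe(E)],
# so the sharp DOS edge at the gap makes dI/dV a probe of f(E).
# Energies are in units of the gap Delta; all parameters are invented.
import numpy as np

Delta, kT, gamma = 1.0, 0.02, 0.01
E = np.linspace(-8, 8, 4001)
dE = E[1] - E[0]

def bcs_dos(E):
    # Dynes-broadened BCS density of states (gamma rounds off the singularity)
    z = (E + 1j * gamma) / Delta
    return np.abs(np.real(z / np.sqrt(z**2 - 1)))

def fermi(E, mu=0.0):
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

def double_step(E, U=0.6, x=0.5):
    # toy nonequilibrium "two-step" distribution in a wire biased by U
    return x * fermi(E, +U / 2) + (1 - x) * fermi(E, -U / 2)

def current(V, f_sample):
    return np.sum(bcs_dos(E) * (f_sample(E - V) - fermi(E))) * dE

V = np.linspace(0.0, 3.0, 301)
G_eq = np.gradient(np.array([current(v, fermi) for v in V]), V)
G_neq = np.gradient(np.array([current(v, double_step) for v in V]), V)
print("equilibrium dI/dV peak at V =", round(float(V[np.argmax(G_eq)]), 2))   # near Delta
print("nonequilibrium peak at   V =", round(float(V[np.argmax(G_neq)]), 2))   # split by the steps in f

Inverting measured dI/dV data to extract f(E) is the deconvolution the experimentalists actually have to do; this sketch just runs the forward direction to show why the sharp gap edge makes it possible.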
arxiv:0810.1873 - Tal et al., The molecular signature of highly conductive metal-molecule-metal junctions
This is another in a series of extremely clean experiments by the van Ruitenbeek group, looking at conduction through single molecules via the mechanical break junction technique. Using Pt electrodes, they routinely see extremely strong coupling (and consequently high conductance, approaching the conductance quantum) for small molecules bridging the junctions. They further are able to confirm that the molecule of interest is in the junction via inelastic electron tunneling spectroscopy; molecular vibrational modes show up as features in the conductance as a function of bias voltage. They can see isotope effects in those modes (comparing normal vs. deuterated molecules, for example), and they can see nontrivial changes in the vibrational energies as the junctions are stretched. Neat stuff.
arxiv:0810.1890 - Gozar et al., High temperature interface superconductivity between metallic and insulating cuprates
There is no doubt that the ability to grow complex materials one unit cell at a time is a tremendously powerful technique. Here, a collaboration anchored by growth capabilities at Brookhaven has succeeded in creating a novel superconducting state that lives at the interface between two different cuprate oxides, neither of which is superconducting by itself. Remember, all kinds of wild things can happen at interfaces (e.g., spontaneous charge transfer, band bending) even without the strong electronic correlations present in this class of materials. There is a real possibility here that with appropriate understanding of the physics, it may be possible to engineer superconductivity above previously accessible temperatures. That would be huge.
Saturday, October 11, 2008
What's interesting about condensed matter physics
Inspired by this post and this one over at Uncertain Principles, I thought that I should explain what I think is interesting about condensed matter physics. Clearly Chad's main observation is that condensed matter has historically had a major industrial impact, but he wants to understand why the science is interesting, and what draws people to it.
Condensed matter physics largely exists at the junction between statistical physics and quantum mechanics. Statistical physics tries to understand the emergence of collective phenomena (whether that's crystalline order, magnetic order, the concept of temperature, or the whole idea of phase transitions and broken symmetry) from a large number of particles obeying relatively simple rules. Throw in the fact that the rules of quantum mechanics are rich and can have profound consequences (e.g., the Pauli principle, which says that no two identical fermions can have identical quantum numbers, leads both to the stability of white dwarf stars and the major properties of most metals), and you get condensed matter physics. It's amazing how many complicated phenomena result from just simple quantum mechanics + large numbers of particles, especially when interactions between the particles become important. It's this richness, which we still do not fully understand, that is a big part of the intellectual appeal of the subject, at least for me.
I will also shamelessly crib Chad's list of points that he likes about AMO physics, and point out that CM physics is also well-described by them:
- "AMO physics is cool because it's the best field for exploring quantum effects." Well, while AMO is a nice, clean area for studying quantum effects, CM is just as good for some topics, and better for others. There's probably just as many people studying quantum computation using solid state systems, for example, as AMO systems.
- "AMO physics is cool because it's concrete." Again, it doesn't get much more concrete that CM physics; it's all atoms and electrons. One fascinating area of study is how bulk properties arise from atomic properties - one gold atom is not a metal, but 1000 gold atoms together are distinctly "metallic". One carbon atom is not an insulator, but 1000 of them together can be a nanodiamond on one hand, or a piece of graphene on the other, How does this work? That's part of what CM is about.
- "Experimental AMO physics is cool because it's done on a human scale." Experimental CM physics is the same way. Sure, occasionally people need big user facilities (synchrotrons, e.g.). Still, you can often do experiments in one room with only one or two people. Very different than Big Science.
- "AMO physics has practical applications." So does CM, and personally that's something that I like quite a bit. The computer and monitor that I'm using right now are applied CM physics.
- "AMO physics provides technologies that enable amazing discoveries in lots of other fields." Again, so does CM. Silicon strip detectors for particle physics, anyone? CCD detectors for all the imaging that the AMO folks do? Superconducting magnets for MRI? Solid-state lasers? Photon-counting detectors for astro?
Wednesday, October 08, 2008
Rant - updated.
I was going to post about some neat new papers on the arxiv, and mention the very good talk on science policy that I heard today from Norm Augustine, former CEO of Lockheed-Martin and leader of the National Academy committee that wrote the now-famous Gathering Storm report. Instead, I read some news about which I must rant.
Remember the $85B (or, as I like to think of it, 17 years' worth of NSF budgets) that the US government used to "save" reinsurer AIG? Turns out, that wasn't enough. They've already blown through it without actually liquidating their assets as everyone was expecting. Now they've managed to get another $37.8B (7.5 years' worth of NSF budgets) from the Federal Reserve. I'm sure they're all done now - after all, they've already spent $440K on a luxury retreat for executives (including $150K for meals and $23K for spa charges) after the $85B bailout. (You know you've gone over the line when the Bush administration calls you "despicable".) In the meantime, the US government is considering taking an ownership stake in a number of banks basically to convince the banks that yes, it's ok to lend money to each other since everyone would be backed by the feds. Of course, that plan may meet with resistance from the banks because it may limit executive compensation for the people who run the banks. Right, because unless we pay top dollar for these geniuses, there's a risk that the banks may not be well-run. Heaven forbid. If this keeps up, it'll be time to invest in tar and feather suppliers. At least I can refer you to a handy guide on how the economic meltdown may affect you.
UPDATE: You've got to be kidding me. AIG is planning another gathering, this time at the Ritz-Carlton in Half Moon Bay, CA, for 150 of their agents. Here's a clue, AIG: When you're so desperate for money that the taxpayers have to keep you afloat, maybe you should, I don't know, consider cutting back on ridiculous luxury expenditures? Don't tell me that you need to pamper your agents or they'll quit. I'm done. I don't care what it does to the global financial system: AIG needs to fail, and their executives need to lose their compensation, and the shareholders of AIG should sue those same executives for the last 10 years worth of compensation.
UPDATE 2: If you want to hear the most lucid explanation I've come across for this whole mess, particularly the problem of credit default swaps, listen to this. It's informative. And scary.
Sunday, October 05, 2008
2008 Nobel Prize in Physics
Surprisingly, I haven't seen the usual blogfest of speculation about the Nobel Prize in Physics. The announcement will be this Tuesday. I will throw out my same suggestion as last year, Michael Berry and Yakir Aharonov for nonclassical phase factors in quantum mechanics, though the fact that the prize went to condensed matter folks last year means that this is probably less likely. Another pair I've heard floated every couple of years is Guth and Linde for inflationary cosmology. Any ideas out there?
Open faculty positions
It's that time of year again. My department is conducting faculty searches in three areas: solar physics (which I won't discuss here because it's not my area and I doubt many solar physics types read this), condensed matter theory, and cold atoms/optical lattices theory. There is a joint search committee for the latter two (yes, I'm on it), and here's the ad, which is running in Physics Today and Physics World:
The Department of Physics and Astronomy at Rice University invites applications for two anticipated tenure-track Assistant Professor positions in theoretical physics. One of the positions is in condensed matter physics, with emphasis on fundamental theory, while the other is in ultra-cold atom physics, with a focus on connections to condensed matter. These positions will complement and extend our existing experimental and theoretical strengths in condensed matter and ultra-cold atom physics (for information on the existing efforts, see http://physics.rice.edu/). Applicants should send a dossier that includes a curriculum vitae, statements of research and teaching interests, a list of publications, and two or three selected reprints, and arrange for at least three letters of recommendation to be sent to R. G. Hulet or Q. Si, Co-Chairs, Faculty Search Committee, Dept. of Physics and Astronomy - MS 61, Rice University, 6100 Main Street, Houston, TX 77005 or by email to Valerie Call (vcall@rice.edu). Applications will be accepted until the positions are filled, but only those received by November 15, 2008 will be assured full consideration. The appointments are expected to start in July 2009.
To be completely clear: There are two distinct positions available, and the total number of interviews will reflect this. If you have questions I'll try to answer them or refer you to my colleagues who cochair the committee.
Monday, September 29, 2008
Incredibly pointless paper
This has to be one of the most useless things I've ever seen on the arxiv. Basically, the authors point out that there is absolutely no chance of the helium coolant of the LHC magnet system suddenly deciding to explode. Gee, really?! It is just sad that someone felt compelled to write this.
This paper reminds me of the old Annals of Improbable Research article, "The Effect of Peanut Butter on the Rotation of the Earth".
Sunday, September 28, 2008
A subtle statistical mechanics question
A faculty colleague of mine posed a statistical physics question for me, since I'm teaching that subject to undergraduates this semester, and I want to throw it out there to my readership. I'll give some context, explain the question, and then explain why it's actually rather subtle. If someone has a good answer or a reference to a good (that is, rigorous) answer, I'd appreciate it.
In statistical physics one of the key underlying ideas is the following: For every macroscopic state (e.g., a pressure of 1 atmosphere and a temperature of around 300 K for the air in your room), there are many microstates (in this example, there are many possible arrangements of positions and momenta of oxygen and nitrogen molecules in the room that all look macroscopically about the same). The macroscopic states that we observe are those that have the most possible microstates associated with them. There is nothing physically forbidden about having all of the air in your room just in the upper 1 m of space; it's just that there are vastly more microstates where the air is roughly evenly distributed, so that's what we end up seeing.
Crucial to actually calculating anything using this idea, we need to be able to count microstates. For pointlike particles, that means that we want to count up how many possible positions and momenta they can have. Classically this is awkward because position and momentum are continuous variables - there are an infinite number of possible positions and momenta even for one particle. Quantum mechanically, the uncertainty principle constrains things more, since we can never know the position and momentum precisely at the same time. So, the standard way of dealing with this is to divide up phase space (position x momentum) into "cells" of size h^d, where h is Planck's constant and d is the dimensionality. For 3d, we use h^3. Planck's constant comes into it via the uncertainty principle. Here's an example of a typical explanation.
Here's the problem: why h^3, when we learn in quantum mechanics that the uncertainty relation is, in 1d, (delta p)(delta x) >= hbar/2 (which is h/4 pi, for the nonexperts), not h? Now, for many results in classical and quantum statistical mechanics, the precise number used here is irrelevant. However, that's not always the case. For example, when one calculates the temperature at which Bose condensation takes place, the precise number used here actually matters. Since h^3 really does work for 3d, there must be some reason why it's right, rather than hbar^3 or some related quantity. I'm sure that there must be a nice geometrical argument, or some clever 3d quantum insight, but I'm having trouble getting this to work. If anyone can enlighten me, I'd appreciate it!
UPDATE: Thanks to those commenting on this. I'm afraid that I wasn't as clear as I'd wanted to be in the above; let me try to refine my question. I know that one can start from particle-in-a-box quantum mechanics, or assume periodic boundary conditions, and count up the allowed plane-wave modes within a volume. This is equivalent to Igor's (the first commenter's) discussion of applying the old-time Bohr-Sommerfeld quantization condition (that periodic orbits have actions quantized by h). My question is, really, why does h show up here, when we know that the minimal uncertainty product is actually hbar/2. Or, put another way, should all of the stat mech books that argue that the h^3 comes from uncertainty be reworded instead to say that it comes from Bohr-Sommerfeld quantization?
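To make the Bose condensation point quantitative, here's a minimal numerical check of how much the cell size matters (this is just the standard ideal Bose gas result; treating liquid 4He as an ideal gas is of course a caricature, but it makes the point):

# Standard ideal Bose gas result: condensation when n * lambda^3 = zeta(3/2),
# with lambda = C / sqrt(2 pi m kB T) and C the phase-space cell constant.
# So Tc = (C^2 / (2 pi m kB)) * (n / zeta(3/2))^(2/3): using hbar instead
# of h in the cell volume would shrink Tc by (2 pi)^2 ~ 39.5.
from math import pi

h = 6.62607015e-34       # J s
hbar = h / (2 * pi)
kB = 1.380649e-23        # J/K
m = 6.6464731e-27        # mass of a 4He atom (kg)
n = 2.18e28              # number density of liquid 4He (m^-3)
ZETA_3_2 = 2.6124        # Riemann zeta(3/2)

def Tc(cell_const):
    """Ideal-gas BEC temperature if phase-space cells have volume cell_const^3."""
    return (cell_const**2 / (2 * pi * m * kB)) * (n / ZETA_3_2)**(2.0 / 3.0)

print(f"Tc with h^3 cells:    {Tc(h):.2f} K")
print(f"Tc with hbar^3 cells: {Tc(hbar):.3f} K")

With h^3 cells you get about 3.1 K, at least in the neighborhood of the 2.17 K lambda point of real (strongly interacting) 4He; with hbar^3 cells you'd predict well under 0.1 K, hopelessly wrong. So whatever the right justification is, h^3 is not just a sloppy uncertainty-principle estimate.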
Thursday, September 25, 2008
A mini book review
Recently I acquired a copy of Electrical Transport in Nanoscale Systems by Max Di Ventra, a new textbook aimed at graduate students. I haven't yet had time to read through it in detail, but what I've seen so far is impressive. The book provides a thorough intro to various formalisms appropriate for understanding nanoscale transport, including the usual stuff (Drude, Kubo, Landauer-Buttiker, nonequilibrium Green's function (NEGF)) and other sophisticated approaches that focus on transport fundamentally as a nonequilibrium quantum statistical mechanics problem (dynamic density functional theory, a hydrodynamic approximation for the electron liquid, and a detailed look at the interactions between the electrons and the ions). I also appreciate the effort to point out that truly nanoscale systems really are more complicated and different than "ordinary" mesoscopic systems. The only significant omission (intentional, in large part to avoid doubling the size of the book) is a comparative lack of discussion of strong correlation effects (e.g. Kondo physics). (A good complementary book for those interested in the latter topic is that by Bruus and Flensberg.) It's not exactly light entertainment, but the writing is clear and pedagogical.
Update: By coincidence, Supriyo Datta just put up a nice long review of the NEGF approach. He also has a full book-length treatment written with a very pedagogical focus.
(For those curious about my own book efforts, it's slowly coming along. Slowly.)
Saturday, September 20, 2008
Science funding.
This article confirms my previous impressions, and is very depressing. This past week the government promised roughly 200 years' worth of the entire NSF annual budget to bail out the banking system. Since 2003 the US government has spent another 200 years' worth of the entire NSF annual budget in Iraq. After two years of "level funding", and the certainty that there will be no real budget passed before the election, just what we need is the prospect of another year of frozen budgets.
In related news, I've come to the realization that my research program is "too big to fail".
Update: I might as well put all of my nonscience stuff in one posting. Looking at the text of the proposed financial bailout bill here, I am aghast because of this section:
Decisions by the Secretary pursuant to the authority of this Act are non-reviewable and committed to agency discretion, and may not be reviewed by any court of law or any administrative agency.
Let me get this straight. The Secretary of the Treasury gets incredibly broad authority to use up to $700 billion to prop up the financial markets in essentially any way he decides is appropriate, and his decisions are explicitly not reviewable ever by anyone, including the judicial branch?! I'm no lawyer, but isn't this, umm, absolutely insane?
Wednesday, September 17, 2008
Because I'm a big musical nerd...
... I couldn't pass this up. Very well done, though someone should point out to the Obama supporters behind this that things didn't work out too well for most of the characters singing this in Les Miserables.
I will return to actual physics blogging soon, once the immediate disarray settles out.
Sunday, September 14, 2008
Ike follow-up
Well, that was interesting. Thankfully we're all fine and our house is undamaged. The prospect of being without power for an extended period continues to suck, to put it bluntly. 90 degree weather, near 100% humidity, and no air conditioning or refrigeration. On the plus side, my university still has power and AC. On the downside, they've disabled access (card keys) to most buildings and water service (i.e. sanitary plumbing) is spotty on campus.
Friday, September 12, 2008
Hurricane Ike
Hello - for those readers who don't know, I live in Houston, which is about to get hit by Hurricane Ike. I'm hopeful that this won't be a big deal, but there's always the chance that I'll be without electricity for a few days. So, blogging may be slow. In the mean time, check out this cool site for following tropical storm systems, and this explanation of how hurricanes are heat engines.
Thursday, September 11, 2008
Ahh, the Gulf coast.
You know, I lived the first twenty-nine years of my life without having to pay close attention to stuff like this.
Wednesday, September 10, 2008
Important online resource
The internet is definitely the best way to keep up with current events. Check here often (look at the link text). (Thanks, Dan.)
Tuesday, September 09, 2008
Final Packard highlights + amusing article
One of my former professors, Michael Peskin, has a nice article about why the LHC will not destroy the earth. He taught me graduate-level mechanics, and my brain still hurts from his take-home final.
A last few things I learned at the Packard meeting:
- The stickleback is a very useful fish for addressing the question, if natural selection removes variation in phenotypes, then why do we still see so much variation?
- There are structures on the membranes of many cells (the primary cilium; the protein known as rhomboid) that seem to have really profound effects on many cellular processes. Understanding how and why they do what they do demonstrates why systems biology is hard.
- It may be possible to do some kind of "safe" cryptographic key exchange based on functions that are not algebraic (as opposed to usual RSA-type encryption which is based on the asymmetry in difficulty between multiplication and factorization).
- There are deep connections between random permutations and the distribution of the number of prime factors.
- It's possible to run live small animals (zebrafish, C. elegans) through microfluidic assay systems in massively parallel fashion.
- Stem cell differentiation can apparently be influenced by the mechanical properties (e.g., squishy vs. hard) of the substrate. Weird.
- Artificial sieve structures can be very useful for electrophoresis of long segments of DNA.
- There may be clever ways to solve strongly correlated electronic structure problems using tensor networks.
- Natural synthesis of useful small molecules (e.g., penicillin, resveratrol) is pretty amazing. Makes me want to learn more about bacteria, actinomycetes, and fungi.
- By clever trading of time and statistics for intensity, 3d superresolution imaging is possible under some circumstances.
- DNA can be used as a catalyst.
- Some bacteria in biofilms secrete molecules that look like antibiotic byproducts, but may actually serve as a way of carrying electrons long distances so that the little buggers far from the food source can still respire.
- Virus chips are awesome.
- Don't ever get botfly larvae growing in your scalp. Ever.
- Tensegrity structures can be very useful for biomimetic machines.
- Sub-mm arrays are going to be a boon for astronomy.
- It looks like much of the Se and Br in the universe was actually produced by the same compact object mergers that give short gamma ray bursts.
- Dark energy remains a major enigma in physics and astrophysics. It's a big one.