Thursday, April 30, 2009
I just had about the best experience with a collaboration that one can hope to have. Indeed, I worry that I've now used up my "collaboration karma". Here's how these things are supposed to work....
Back at the APS March Meeting in 2006, my student Zach was presenting his master's work on electronic conduction through atomic-scale Ni junctions. Specifically, he had been doing experiments to examine whether atomic-scale contacts between ferromagnetic metals had unusually large changes in electrical resistance when placed in a changing magnetic field, as had been reported in the literature. (We found that the answer is "No", but the magnetoresistance does depend in detail on the precise atomic configuration of the device. This work was independently and simultaneously confirmed by Dan Ralph's group at Cornell.) Anyway, at the end of the session, I met Carlos Untiedt, who was just getting going as a faculty member at the University of Alicante in Spain. I'd read some of Carlos' earlier work on metal junctions made using mechanical means, and he'd read our work, too. He mentioned that the Spanish government has a program that allows Spanish graduate students to spend time abroad working in other labs, and suggested that we try this at some point. I said that this sounded like a good idea, and that we should do it.
Fast forward to the beginning of 2008, when Carlos and I got back in touch. He had a very good student eager to come and visit, and, even better, they had some exciting data that they'd been taking in mechanically controlled (STM-style, middle of this page) atomic-scale metal junctions. The main advantage of mechanical junctions is that you can break and re-form them many times, giving you serious statistical information about junction properties. Now, in my lab we often use an alternative technique for making atomic-scale junctions that doesn't involve mechanical motion. While our method (electromigration) is more time-consuming and therefore not well suited to really large statistical samples, it has one main advantage: the junctions we make have enough geometric stability that we can look at a single junction over many temperatures. This can't really be done in STM-style junctions. This was that relatively rare situation: an ideal point of scientific collaboration, with the person and the resources available to make things happen.
So, we did it. Carlos' student, M. Reyes Calvo, came and spent a little under four months working in my lab with my group. She was able to make junctions with our approach that were analogous to the ones that she'd been studying in Spain, and measured them as a function of temperature in our system. The results were very nicely consistent with her data from Spain, and the whole scientific story hung together well. After her visit and a number of fun conversations with theorist colleagues at Alicante, a paper was written that came out today in Nature. It just doesn't work any better than that. I'll write about the science in a separate post....
Monday, April 27, 2009
Nice speech.
Saturday, April 25, 2009
Just stop.
Attention TX state and federal officials with R next to their names. Let me clue you in on a couple of points.
1) Secession is not an option. See the US Civil War.
2) TX does not have the authority to break up into smaller states autonomously. That went out the window when TX was re-admitted to the Union after the Civil War.
Bloviating about this pointless drivel makes the entire state look bad. Don't you realize that this garbage makes it difficult to convince smart people to move here, because it looks like the state is governed by idiots?
(This is my last Texas post for a long while - I promise.)
Friday, April 24, 2009
Random favor....
I use the free version of google analytics to do some simple tracking of page views, etc. on both this blog and on my group webpage, just for fun. For some strange reason, the little javascript doodad that allows google to track hits works just fine on all of my group-related pages (like this one and this one), but fails on my publications page. If someone out there with greater expertise or sharper eyes than mine could take a look at the html source and explain to me what's wrong with my publications page, I'd greatly appreciate it. Thanks. UPDATE: Thanks - all fixed, I think. Behold the power of teh intarwebs.
Sunday, April 19, 2009
Cold fusion, the longer story.
I fully expect angry comments about this....
Here's how a cold fusion experiment is supposed to work, broadly. One takes an electrochemical cell containing either regular water or D2O, and as one electrode uses palladium (prepared in some meticulous way, to be discussed further below). Then one sets the electrochemical conditions such that hydrogen (or deuterium) ions are electrochemically favored to go into the palladium lattice, up to some very high loading. It's been known for decades that Pd likes to take up hydrogen, so the fact that one can do this comes as no surprise. Now, while all this is going on, one carefully monitors the temperatures of the electrodes, the water, etc. The experimental claim, coarsely described, is that after some time, cells containing heavy water under these conditions begin to get hot (but not cells containing ordinary water!). Ideally one does good calorimetry and can measure the amount of energy that comes out of the cell in the form of heat, vs. the amount of energy put in in the form of integrated electrochemical current times voltage. The claim is that in some such experiments, the inferred amount of energy out is much larger than the electrical energy in. This is "excess heat".
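In symbols (the notation here is mine, purely for bookkeeping): with cell current I(t) and voltage V(t),

\[
E_{\mathrm{in}} = \int_0^{t} I(t')\,V(t')\,dt', \qquad Q_{\mathrm{xs}} \equiv Q_{\mathrm{out}} - E_{\mathrm{in}},
\]

where \(Q_{\mathrm{out}}\) is the total heat inferred from calorimetry and \(Q_{\mathrm{xs}}\) is the claimed "excess heat". The whole argument is over whether \(Q_{\mathrm{xs}}\) is genuinely positive and, if so, whether anything beyond chemistry is needed to explain it.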
So, what's the problem? Well, there are several issues.
1) Calorimetry can be a tricky business. This was the main criticism of the original Pons and Fleischmann work. From what I can tell, people have been much more careful about this than they were twenty years ago.
2) The experiments just aren't reproducible, in many senses of the term. For example, the temperature-vs-time evolutions of nominally identical cells are completely different, with big fluctuations on many timescales. Sometimes the thermal output is big, sometimes it's small. This is generally swept aside by those doing the experiments, who take a wildly fluctuating response, integrate it, and claim reproducibility because the net integral ends up having the desired sign. Not the desired magnitude, just the desired sign (see the toy sketch after this list). What would I expect to see in a well-controlled experiment? Take one large piece of palladium, cut it into thirds, and set up three identical cells. The temperature-time histories of these things should really reproduce. If you can't do that, then you don't have a controlled experiment. This isn't a small thing.
3) The cells stop working after a while. Unsurprisingly, the time period varies from cell to cell. Now, why should this happen unless the underlying process is chemical in some way? By the way, some cells (but not all) "revive" when the electrochemical conditions are changed. Again, all of this is massively variable, even between nominally identical cells in the same labs.
4) The claim of excess heat assumes that there's no chemistry taking place. For example, what if I made that assumption and looked at my car engine? The amount of electrical power input by each spark plug is minuscule compared to the total power out. If I neglected chemical reactions, I'd come to the conclusion that something amazing was going on. Furthermore, if I normalized the output power by, say, the number of platinum atoms at the tips of the spark plugs, I might then conclude that the only way of achieving such power out was something like nuclear. That's the hazard of ignoring possible chemical channels. The issue here is that palladium is known to be highly catalytic, and there are certainly diffusion processes within solids that can be strongly influenced by isotopic differences. Moreover, the claim is also that surface prep of the Pd is of absolutely critical importance. Again, this sounds to me like catalysis, not a bulk effect. Now, you'd think this could all be resolved by analytical chemistry - look at the cell materials before and after running. Look at the water before and after running. However, remember that the different folks doing this disagree on basic analytical chemistry issues like the possible production of helium, tritium, etc. That has to make you wonder about how trustworthy their collective analyses are.
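To make point (2) concrete, here's a toy numerical sketch (purely an illustration I made up - the offsets and noise model nothing physical): three "nominally identical" cells whose fluctuating output traces look nothing alike, and whose integrated outputs differ severalfold, yet all share the same sign.

```python
import numpy as np

def toy_excess_power_trace(n_hours=500, seed=None):
    """Toy 'excess power' trace (arbitrary units): fluctuations on fast and
    slow timescales plus a small positive offset whose size differs from
    cell to cell. Purely illustrative -- not a model of any real cell."""
    rng = np.random.default_rng(seed)
    offset = rng.uniform(0.2, 2.0)                    # cell-to-cell variation
    fast = rng.normal(0.0, 1.0, n_hours)              # fast fluctuations
    slow = np.cumsum(rng.normal(0.0, 0.1, n_hours))   # slow wandering
    return offset + fast + slow

for s in range(3):
    trace = toy_excess_power_trace(seed=s)
    print(f"cell {s}: integrated 'excess energy' = {trace.sum():+9.1f} (arb. units)")
```

Point by point the three traces look nothing alike, and the integrated "excess energy" can vary by an order of magnitude, yet all three integrals will (almost always) come out positive. Shared sign alone is exactly the kind of "reproducibility" that shouldn't impress anyone.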
Now, I'm not saying that there's nothing worth examining here. The DOD clearly thinks it's worth looking into, and it would be nice to get this straightened out once and for all. However, 60 Minutes notwithstanding, the work is just not reproducible in the sense that most experimental physicists would use.
Cold fusion.
Tonight 60 Minutes is airing a report on cold fusion research. I haven't seen the report, but this whole business is what I was referring to obliquely back when I wrote this. I'll write more about this soon, since it provides a nice vehicle for talking about good experiments and what we mean when we say that something is "reproducible" and "controlled". The short version: (1) Don't go out and buy palladium futures just yet. (2) There is something weird going on in the experiments, but it's not at all clear that it has anything to do with nuclear processes, since that would require at least two "miracles" (= totally unexpected deviations from established physics). (3) A careful look at this by careful people is probably worthwhile, but massive hype is a bad idea.
Thursday, April 16, 2009
Monday, April 13, 2009
What is temperature?
Another in my off-and-on series about condensed matter physics concepts.
Everyone has an intuitive grasp of what they mean by "temperature", but for the most part only physicists know the rigorous definition. Temperature, colloquially, is some measure of the energy stored in a system. If two systems having different temperatures are placed "in contact" (so that energy can flow between them via microscopic interactions like atoms vibrating into each other), there is a net flow of energy from the high temperature system to the low temperature system, until the temperatures equilibrate. Fun fact: the nerves in your skin don't actually sense temperature; rather, they sense the flow of thermal energy. Metals below body temperature generally feel cold to the touch because they are very effective at conducting away thermal energy. Plastics at the same temperature feel warmer because they are much worse thermal conductors.
Anyway, temperature is defined more rigorously than this touchy-feely business. Consider a system (e.g., a gas) that has a certain amount of total energy. That system can have many configurations (e.g., positions and momenta of the gas molecules) that all have the same total energy. Often there are so many configurations in play that we keep track of the log of the number of available configurations, which we call (to within a constant that gives us nice units) S. Now, what happens if we give the system a little more total energy? Well, (almost always) this changes the number of available configurations. (In the gas example, now more of the molecules have access to higher momenta, for example.) How many more configurations? For a small enough change, the change in E is proportional to the change in S, and the proportionality factor is exactly T, the temperature. At low temperatures, a given change in E implies a comparatively large change in the number of available configurations; conversely, at high temperatures, a given change in E really doesn't increase the number of available configurations very much. (I know that someone is going to object to my gas example, since even for a single molecule in a box there are, classically, an infinite number of possible configurations for the molecule even if we only keep track of positions. Don't worry about that too much - we have mathematically solid ways to deal with this.)
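Written out: if Ω(E) is the number of configurations available at total energy E, then

\[
S(E) = k_{\mathrm{B}} \ln \Omega(E), \qquad \frac{1}{T} \equiv \left( \frac{\partial S}{\partial E} \right)_{V,N},
\]

so that for a small change (at fixed volume and particle number) \(dE = T\,dS\). "Low temperature" means S rises steeply as E increases; "high temperature" means it barely rises at all.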
Those in the know will already be aware that S is the entropy of the system. The requirement that the total S always increase or stay the same for a closed system ends up implying just the familiar properties of temperature and average energy flow that we know from everyday experience.
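Indeed, that implication takes one line. Let two systems at temperatures \(T_1\) and \(T_2\) exchange a small amount of energy \(\delta E\), flowing out of system 1 and into system 2. The total entropy change is

\[
\delta S_{\mathrm{tot}} = -\frac{\delta E}{T_1} + \frac{\delta E}{T_2} = \delta E \left( \frac{1}{T_2} - \frac{1}{T_1} \right) \ge 0,
\]

which for \(\delta E > 0\) is only satisfied when \(T_1 \ge T_2\): energy spontaneously flows from hot to cold, and the flow stops when the temperatures are equal.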
Monday, April 06, 2009
Writing and word limits
I'm working on an article for nonspecialists about a nano topic, and all I can say is, writing concisely for nonexperts is much more difficult than writing concisely for experts.