Saturday, July 26, 2008

Now this is a modest proposal whose time has come. It's common for experimental physicists to gradually (or not so gradually) accumulate pieces of equipment over the years that become laboratory "white elephants". These items are typically acquired for some specific research project or direction, and then, as research goals and priorities change, they end up sitting around gathering dust. Surely someone else somewhere could make productive use of these items. Maybe someone should set up a trading post where we could list these things and arrange reasonable trades or purchases.
Yes, used equipment vendors exist to address these needs, but they're not always easy to deal with, and frequently they offer pennies on the dollar. (For example, at Rice we have an 11-year-old electron microscope. It's got a problem that is likely to cost about $10K to fix, though annoyingly the microscope vendor refuses to offer a reasonable return policy - if we bought the relevant part and it didn't fix the problem, they'd refuse to take it back even for a restocking fee. Two different used equipment vendors have offered around $8K for the whole SEM (!), while both offer the same item for sale on their sites for more than $100K.) I also know that people buy and sell scientific equipment on eBay, but that suffers from some of the same problems. The point here isn't for scientists or universities to make money on this - it's to match up scientists and engineers with equipment that they could use and that currently has the wrong home.
Wednesday, July 23, 2008
Two papers in Nano Letters
Two new papers in Nano Letters caught my eye.
Danilov et al., "Nanoelectromechanical switch operating by tunneling of an entire C60 molecule"
This is a single-molecule electronic experiment, and it's a pretty neat example of using careful measurements to deduce a fair bit of information about a nanoscale system without a direct microscopic imaging probe. In their experiment the authors find bistable switching between two different conducting configurations of a junction thought to contain a single C60 molecule (as inferred by a signature in the electronic conduction of a well-known vibrational mode of the fullerene). Now, telegraph-like switching isn't new by any stretch, and many nanoscale systems exhibit discrete changes in their properties due to motion of one or a few atoms or molecules. Here, by carefully analyzing the voltage and temperature dependence of the switching, they are able to deduce that the most likely mechanism for the change in configuration is motion of the fullerene between two different sites. I'm not sure that I'm 100% convinced by their interpretation, but the data are quite pretty and the analysis is clever.
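To give a flavor of this kind of analysis, here is a minimal sketch of my own (not the authors' actual procedure): pulling dwell times out of a two-level telegraph trace and fitting the mean switching rates to a thermally activated, Arrhenius-like temperature dependence. The function names and the activated form are assumptions made purely for illustration.

```python
# Illustrative sketch only: extract dwell times from a two-level ("telegraph")
# conductance trace and fit the mean switching rate vs. temperature to an
# Arrhenius form, rate = A * exp(-E_a / (k_B * T)).  This is a generic
# analysis pattern, not the specific procedure used in the paper.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def dwell_times(trace, threshold, dt):
    """Return dwell times (s) spent in the low- and high-conductance states."""
    state = (np.asarray(trace) > threshold).astype(int)
    switches = np.flatnonzero(np.diff(state)) + 1   # indices where the state flips
    edges = np.concatenate(([0], switches, [len(state)]))
    durations = np.diff(edges) * dt                 # length of each constant-state segment
    segment_states = state[edges[:-1]]              # which state each segment was in
    return durations[segment_states == 0], durations[segment_states == 1]

def arrhenius_fit(temps_K, mean_rates):
    """Fit ln(rate) = ln(A) - E_a/(k_B*T); return prefactor A and barrier E_a (eV)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(temps_K), np.log(mean_rates), 1)
    return np.exp(intercept), -slope * K_B
```

Comparing how rates extracted this way depend on bias voltage versus temperature is, roughly speaking, the kind of evidence that lets one argue for a particular microscopic switching mechanism.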
Stampfer et al., "Tunable graphene single-electron transistor"
In this work, Klaus Ensslin's group starts from a graphene flake (the now-usual scotch-tape approach) and uses standard patterning methods to make a little graphene region connected by narrow graphene constrictions to source and drain electrodes. Using gate electrodes near the constrictions, they could electrostatically shift the local chemical potential there, switching the graphene from having electrons as charge carriers to having holes as charge carriers. By further tuning with an additional gate, they could make a complete single-electron transistor - essentially a puddle of confined charges weakly connected by tunnel barriers to larger reservoirs, with the confinement sufficiently strong that the electrostatic energy cost of changing the puddle population by one charge exceeded the available thermal energy. (This was all done at cryogenic temperatures, since the charging energy of the dot was around 3 meV, or about 35 K.) This is a particularly nice, clean example of using graphene and its relatively unique properties as a platform for nanoscale device fabrication. Several groups are getting very good at this, and if efforts to grow large-area, high-quality, single-layer graphene succeed, there could be some genuine technological implications.
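As a quick back-of-the-envelope check on the numbers above (a sketch of my own, using only the ~3 meV figure quoted in the post), converting the charging energy to an equivalent temperature via Boltzmann's constant shows why the measurement has to be done cold:

```python
# Convert a single-electron charging energy into an equivalent temperature.
# Illustrative arithmetic only, using the ~3 meV value quoted above.

K_B_EV_PER_K = 8.617e-5  # Boltzmann constant, eV/K

def charging_temperature(e_c_meV: float) -> float:
    """Temperature (K) at which k_B*T equals the charging energy."""
    return (e_c_meV * 1e-3) / K_B_EV_PER_K

e_c = 3.0  # charging energy in meV
print(f"E_C = {e_c} meV  ->  E_C/k_B = {charging_temperature(e_c):.0f} K")
# Clean Coulomb blockade requires k_B*T << E_C, hence cryogenic operation.
```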
Friday, July 18, 2008
LHC publicity machine
I understand that the folks at CERN feel like it's important for people to be aware of the LHC and get excited about it - at this point, it looks like it's going to be the only game in town in a few years for the frontier of high energy physics. Still, the steady stream of publicity (much of it arguing that they're going to unlock the secrets of the universe, prove string theory, find evidence of extra dimensions, etc.) is getting to be a bit much. Today comes this article discussing the cooldown of the magnets for the collider and the detector. Technologically impressive to be sure, but the whole "colder than deep space" angle is pretty lame - people have been able to reach these temperatures for nearly 100 years, and superconducting magnets are used in thousands of MRI machines the world over. We get it - it's a big machine. If this is the level of publicity hounding that's going on before they even have a single piece of data, the coverage of the actual physics runs is going to be really oppressive.
Wednesday, July 16, 2008
Scientists, the media, and desperation
I could've predicted this. Given current energy concerns, it's not at all surprising that the various media are ready to give airtime and column space to wacky stories like this one. The temptation must be irresistible: the public is desperate; the story itself is great TV - the lone inventor, persevering in the face of opposition from those stodgy old scientists; they can even put in quotes from the would-be inventor and the scientists and claim to be covering "both sides". You know the drill: "This conventional scientist says that if he drops this pencil it will fall to the ground. Others disagree! The controversy, up next after this commercial message." News flash: sometimes it doesn't make any sense to cover "both sides".
I think the part that frustrates me the most is the misperception by part of the public and some of the media that scientists want these alleged breakthroughs to fail. Nothing could be further from the truth! If someone discovered cheap, inexhaustible energy because of a remarkable revolutionary breakthrough, we'd love it - it'd be the most exciting time in science since the quantum revolution. The problem is, though, that keeping an open mind doesn't mean lowering your scientific standards because you'd like to believe the result. I, for one, am not holding my breath about hydrino power.
Sunday, July 06, 2008
slow blogging + pnictide fun
I'll be traveling (work + vacation), so blogging will be slow until July 15 or so.
Before I go, I wanted to point out that the plot continues to thicken regarding the pairing symmetry of the new iron pnictide superconductors. For example, this paper reports scanning SQUID microscopy on a sample of one of the compounds, with no apparent evidence for the sign flips in the order parameter that you might expect if the material were, e.g., d-wave like the cuprates. In contrast, this paper argues that scanning tunneling spectroscopy data resemble d-wave expectations. This paper reports photoemission studies showing that the compounds have quite a complicated band structure and suggests that different parts of the Fermi surface may have different phenomenology. That sounds reminiscent of this theory paper, but I haven't read either in detail.
Tuesday, July 01, 2008
What makes an experiment "good"?
Recently I've had some conversations with a couple of people, including someone involved in journalism, about what makes a physics experiment good. I've been trying to think of a good way to explain my views on this; I think it's important, particularly since the lay public (and many journalists) don't have the background to judge realistically for themselves the difference between good and bad scientific results.
There are different kinds of experiments, of course, each with its own special requirements. I'll limit myself to condensed matter/AMO sorts of work, rather than high energy or nuclear. Astro is a whole separate issue, where one is often an observer rather than an experimenter, per se. In the world of precision measurement, it's absolutely critical to understand all sources of error, since the whole point of such experiments is to establish new limits of precision (like the g factor of the electron, which serves as an exquisite test of quantum electrodynamics) or bounds on quantities (like the electric dipole moment of the electron, which is darned close to zero as far as anyone can tell, and if it was nonzero there would be some major implications). Less stringent but still important is the broad class of experiments where some property is measured and compared quantitatively with theoretical expectations, either to demonstrate a realization of a prediction or, conversely, to show that a theoretical explanation now exists that is consistent with some phenomenon. A third kind of experiment is more phenomenological - demonstrating some new effect and placing bounds on it, showing the trends (how the phenomenon depends on controllable parameters), and advancing a hypothesis of explanation. This last type of situation is moderately common in nanoscale science.
One major hallmark of a good experiment is reproducibility. In the nano world this can be challenging, since there are times when measured properties can depend critically on parameters over which we have no direct control (e.g., the precise configuration of atoms at some surface). Still, in macroscopic systems at the least, one should reasonably expect that the same experiment with identical sample preparation run multiple times should give the same quantitative results. If it doesn't, that means (a) you don't actually have control of all the parameters that are important, and (b) it will be very difficult to figure out what's going on. If someone is reporting a surprising finding, how often is it seen? How readily is it reproduced, especially by independent researchers? This is an essential component of good work.
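As a concrete (and admittedly simplified) illustration of what "the same quantitative results" can mean in practice, here is a small sketch of my own, not taken from the post, that checks a set of repeated runs for mutual consistency via a weighted mean and reduced chi-square:

```python
# Illustrative consistency check for repeated measurements of one quantity.
# A reduced chi-square near 1 means the scatter between runs is consistent
# with the quoted uncertainties; a much larger value suggests uncontrolled
# parameters or underestimated errors.
import numpy as np

def consistency(values, sigmas):
    values, sigmas = np.asarray(values, float), np.asarray(sigmas, float)
    weights = 1.0 / sigmas**2
    mean = np.sum(weights * values) / np.sum(weights)
    chi2_red = np.sum(((values - mean) / sigmas) ** 2) / (len(values) - 1)
    return mean, chi2_red

# Example with made-up numbers: five nominally identical runs
mean, chi2_red = consistency([10.2, 9.9, 10.1, 10.4, 9.8], [0.2] * 5)
print(f"weighted mean = {mean:.2f}, reduced chi-square = {chi2_red:.2f}")
```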
Likewise, clarity of design is nice. How are different parameters in the experiment inferred? Is the procedure to find those values robust? Are there built-in crosschecks that one can do to ensure that the measurements and related calculations make sense? Can the important independent variables be tweaked without affecting each other? Are the measurements really providing information that is useful?
Good analysis is also critical. Are there hidden assumptions? Are quantities normalized in sensible ways? Do trends make sense? Are the data plotted in ways that are fair? In essence, are apples being compared to apples? Are the conclusions consistent with the data, or truly implied by the data?
I know that some of this sounds vague. Anyone more eloquent than me want to try to articulate this more clearly?