Monday, March 30, 2015
We've all been there: You wash your hands after using the restroom facilities, and turn away from the sink only to find one of those sad, completely ineffectual, old-style hot-air hand dryers bolted to the wall. You know, the kind with the infographic shown to the right (image credit: nyulocal.com). Why do these things work so poorly compared to paper towels? What insight did Excel and Dyson have that makes their systems so much better?
It all comes down to the physics of trying to dry your hands. At a rough estimate, the surface area of your hands is around 430 cm2. If your hands, when wet, are coated on average by a layer of water 100 microns thick (seems not crazy), that's a total volume of water of 4.3 cm3. How can you get that water off of you? One approach, apparently the one pursued by the original hot-air dryers, is to convert that water into vapor. Clearly the idea is not to do this by raising the temperature of your hands to the boiling point of water. Rather, the idea is to flow hot, dry air over your hands, so that the water molecules in question acquire the necessary latent heat of vaporization (the energy input required to pull water molecules out of the condensed (liquid) phase and into the vapor phase) from their surroundings - the dry air, your hands, etc. This "borrowing" of energy is the principle behind evaporative cooling, and it's why you feel cold when you step out of the shower.
[A digression in fancy thermodynamic language: When liquid water is in contact with dry air, the chemical potential for the water molecules is much higher in the liquid than in the air. While the water molecules are attracted to each other via hydrogen bonds and polar interactions, there are so many more ways that the water molecules could be arranged if they were diluted out into vapor in the air that they will tend to leave the liquid, provided each molecule can, through a thermal fluctuation of some sort, acquire enough energy to sever its bonds to the liquid. The departing molecules leave behind a liquid with a lower average total energy, cooling it. Note that water molecules can come from the vapor phase and land in the liquid, too, depositing that same latent heat per molecule back into the liquid. When the departure and arrival processes balance, the vapor is said to be at the "saturated vapor pressure", and evaporative cooling stops. This is why sweating a whole bunch on a super humid day does not cool you off.]
Back to your hands. Converting 4.3 cm3 of water into vapor requires about 9700 Joules of energy. If you wanted to do this with the heat supplied by the hot air dryer, and to do it in about a minute (which is far longer than most people are willing to stand there rubbing their hands as some feeble fan wheezes along), the dryer would have to be imparting about 160 W of power into the water. Clearly that's not happening - you just can't get that much power into the water without cooking your hands! Instead, you give up in disgust and wipe your hands discreetly on your pants.
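For the curious, here is that estimate spelled out as a little back-of-the-envelope script (a sketch only; the hand area, film thickness, and one-minute drying time are the same rough guesses as above, and the latent heat is the standard ~2260 J/g value):

```python
# Back-of-the-envelope: energy and power needed to evaporate the water
# left on freshly washed hands (rough numbers, as in the text above).

area_cm2 = 430.0            # estimated surface area of two hands
film_thickness_cm = 0.01    # ~100 micron water layer
volume_cm3 = area_cm2 * film_thickness_cm   # ~4.3 cm^3 of water
mass_g = volume_cm3 * 1.0                   # density of water ~1 g/cm^3

latent_heat_J_per_g = 2260.0  # latent heat of vaporization (~100 C value);
                              # it's closer to 2440 J/g at room temperature,
                              # which only makes matters worse

energy_J = mass_g * latent_heat_J_per_g   # ~9700 J
drying_time_s = 60.0                      # a (generous) one minute
power_W = energy_J / drying_time_s        # ~160 W delivered into the water

print(f"water volume: {volume_cm3:.1f} cm^3")
print(f"energy to vaporize it: {energy_J:.0f} J")
print(f"required power: {power_W:.0f} W")
```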
In contrast, paper towels use thermodynamics much more effectively. Rather than trying to convert the water to vapor, paper towels take great advantage of (1) their very large surface area, and (2) capillary forces - the fact that the liquid-solid surface interaction between water and paper towel fibers is so attractive that it's energetically favorable for the water to spread out (even at the cost of creating more liquid-vapor interface) and coat the fibers, soaking into the towel. [Bonus physics lesson: the wet paper towel looks darker because the optical properties of the water layer disfavor the scattering processes on micron-scale bits of fluff that tend to make the towel look white-ish.] Yes, it takes energy to make paper towels, and yes, they must then be disposed of. However, they actually get your hands dry!
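To get a rough sense of how strong that capillary suction is, here is a quick estimate (a sketch under assumptions of mine: an effective pore radius of ~10 microns between fibers and near-complete wetting of cellulose; neither number comes from any towel spec sheet):

```python
# Rough estimate of the capillary (Young-Laplace) pressure pulling water
# into the micron-scale pores between cellulose fibers in a paper towel.
import math

gamma = 0.072        # surface tension of water at room temperature, N/m
theta = 0.0          # contact angle; cellulose is hydrophilic, so assume
                     # nearly complete wetting (an assumption, not a datum)
pore_radius = 10e-6  # assumed effective pore radius, m (~10 microns)

# Pressure drop across a meniscus in a cylindrical pore:
delta_p = 2.0 * gamma * math.cos(theta) / pore_radius

print(f"capillary pressure ~ {delta_p / 1000:.0f} kPa")   # ~14 kPa
```

That's on the order of a tenth of an atmosphere of suction, which is why the towel wicks water up so readily.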
What about Excel and Dyson? They realized very clearly that trying to vaporize the water on your hands is a fool's errand. Instead, they try to use actual momentum transfer from the air to the water to blow the water off your hands. Basically, they accelerate a stream of air up to relatively high velocity (400 miles per hour, allegedly, though that sounds high to me). That air, through its viscosity, transfers momentum to the water, and the resulting shear force drives the water off your hands. They seem to have found a happy regime where they can blow the water off your hands in 10-15 seconds without the force from the air hurting you. The awesome spectacle of those good dryers just shows how sad and lame the bad ones are by comparison.
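For a feel for the numbers, here is a rough estimate of the dynamic (stagnation) pressure of such an air jet (a sketch; the only input is the advertised 400 mph figure, which, as noted, may be generous):

```python
# Dynamic (stagnation) pressure of the air jet, taking the advertised
# ~400 mph exit velocity at face value.

rho_air = 1.2             # density of air at room conditions, kg/m^3
v = 400.0 * 0.44704       # 400 mph in m/s, ~179 m/s

dynamic_pressure = 0.5 * rho_air * v**2   # ~19 kPa

print(f"jet speed: {v:.0f} m/s")
print(f"dynamic pressure: {dynamic_pressure / 1000:.0f} kPa")
```

Roughly 19 kPa is comparable to the capillary pressure estimated above, which is at least consistent with the observation that the jet can push water around vigorously without being painful.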
Sunday, March 29, 2015
Cleanrooms - what is new and exciting?
Cleanrooms - basically climate-controlled, dust-mitigated environments filled with equipment useful for micro/nanoscale fabrication and associated characterization - are a staple of modern research universities. What kind of tool set and facilities you need depends on what you're trying to do. For example, if you want to teach/do research on the fabrication of high performance Si transistors or large-scale integrated circuits, you probably want a dedicated facility that deals primarily with Si CMOS processing. That might include large-area photolithography or wafer-scale e-beam lithography or nanoimprint lithography tools, evaporators/sputtering systems/PECVD/RIE/ALD systems able to service 150 mm or 200 mm substrates, and you might want to keep non-Si-friendly metals like Au far far away. On the flip side, if you are more interested in supporting microfluidics or MEMS work, you might be more interested in smaller substrates but diverse materials, and tools like deep etchers and critical point dryers.
We're about to embark on a cleanroom upgrade at my institution, and I would appreciate input from my relevant readers: What in your view is the latest and greatest in micro/nanofab tools? What can't you do without? Any particularly clever arrangements of facilities? Assume we are already going to have the obvious stuff, and that we're not trying to create a production line that can handle 200 mm substrates. Conversely, if you have suggestions of particular tools to avoid, that would also be very helpful. Insights would be greatly appreciated.
Tuesday, March 24, 2015
Brief items, public science outreach edition
Here are a couple of interesting things I've come across in terms of public science outreach lately:
- I generally f-ing love "I f-ing love science" - they reach a truly impressive number of people, and they usually do a good job of conveying why science itself (beyond just particular results) is fun. That being said, I've started to notice lately that in the physics and astro stories they run, they sometimes either use inaccurate/hype-y headlines or report what is basically a press release completely uncritically. For instance, while it fires the mind of science fiction fans everywhere, I don't think it's actually good that IFLS decided to highlight a paper from the relatively obscure journal Phys. Lett. B and claim in a headline that the LHC could detect extra spatial dimensions by making mini black holes. Sure. And SETI might detect a signal next week. What are the odds that this will actually take place? Similarly, the headline "Spacetime foam discovery proves Einstein right" implies that someone has actually observed signatures of spacetime foam. In fact, the story is the exact opposite: Observations of photons from gamma ray bursts have shown no evidence of "foaminess" of spacetime, meaning that general relativity (without any exotic quantumness) can explain the results. A little more quality control on story selection and headlines, particularly for the high energy/astro stories, would be great, thanks.
- There was an article in the most recent APS News that got me interested in Alan Alda's efforts at Stony Brook on communicating science to the public. Alda, who hosted Scientific American Frontiers and played Feynman on Broadway, has dedicated a large part of his time in recent years to the cause of trying to spread the word to the general public about what science is, how it works, how it often involves compelling narratives, and how it is in many ways a pinnacle of human achievement. He is a fan of "challenge" contests, where participants are invited to submit a 300-word non-jargony explanation of some concept or phenomenon (e.g., "What is a flame?", "What is sleep?"). This is really hard to do well!
- Vox has an article that isn't surprising at all: Uncritical, hype-filled reporting of medical studies leads to news articles that give conflicting information to the public, and contributes to a growing sense among laypeople that science is untrustworthy or a matter of opinion. Sigh.
- Occasionally deficit-hawk politicians realize that science research can benefit them by, e.g., curing cancer. If only they thought that basic research itself was valuable.
Saturday, March 21, 2015
"Flip chip" approach to nanoelectronics
Most people who aren't experts in the field don't really appreciate how amazing our electronic device fabrication capabilities are in integrated circuits. Every time a lithographic patterning, materials deposition, or etching step is performed on an electrically interesting substrate (e.g., a Si chip), there is some amount of chemical damage or modification to the underlying material. In the Si industry, we have gotten extremely good over the last five decades at either minimizing that collateral damage or making sure that we can reverse its effects. However, other systems have proven more problematic. Any surface processing on GaAs-based structures tends to reduce the mobility of charge in underlying devices and increase the apparent disorder in the material. For more complex oxides like the cuprate or pnictide superconductors, even air exposure under ambient conditions (let alone much lithographic processing) can alter the surface oxygen content, affecting the properties of the underlying material.
However, for both basic science and technological motivations, we sometimes want to apply electrodes on small scales onto materials where damage from traditional patterning methods is unavoidable and can have severe consequences for the resulting measurements. For example, this work used electrodes patterned onto PDMS, a soft silicone rubber. The elastomer-supported electrodes were then laminated (reversibly!) onto the surface of a single crystal of rubrene, a small molecule organic semiconductor. Conventional lithography onto such a fragile van der Waals crystal is basically impossible, but with this approach the investigators were able to make nice transistor devices to study intrinsic charge transport in the material.
One issue with PDMS as a substrate is that it is very squishy, with a large thermal expansion coefficient. Sometimes that can be useful (read this - it's very clever), but it means that it's very difficult to put truly nanoscale electrodes onto PDMS and have them survive without distortion, wrinkling, cracking of metal layers, etc. PDMS also really can't be used at temperatures much below ambient. A more rigid, really flat substrate would be great - the idea being that one could do sophisticated fabrication of electrode patterns and then "flip" the electrode substrate into contact with the material of interest, which would remain untouched by lithographic processing.
In this recent preprint, a collaboration between the Gervais group at McGill and the CINT at Sandia, the investigators used a rigid sapphire (Al2O3) substrate to support patterned Au electrodes separated by a sub-micron gap. They then flipped this onto completely unpatterned (except for large Ohmic contacts far away) GaAs/AlGaAs heterostructures. With this arrangement, cleverly designed to remain in intimate contact even when the device is cooled to sub-Kelvin temperatures, they are able to make a quantum point contact while in principle maintaining the highest possible charge mobility of the underlying semiconductor. It's very cool, though making truly intimate contact between two rigid substrates over mm-scale areas is very challenging - the surfaces have to be very clean, and very flat! This configuration, while not implementable for too many device designs, is nonetheless of great potential use for expanding the kinds of materials we can probe with nanoscale electrode arrangements.
Friday, March 13, 2015
Tunneling two-level systems in solids: Direct measurements
Back in the ancient mists of time, I did my doctoral work studying tunneling two-level systems (TLS) in disordered solids. What do these words mean? First, read this post from 2009. TLS are little, localized excitations that were conjectured to exist in disordered materials. Imagine a little double-well potential, like this image from W. A. Phillips, Rep. Prog. Phys. 50 (1987) 1657-1708.
The low temperature thermal, acoustic, and dielectric properties of glasses, for example, appear to be dominated by these little suckers, and because of the disordered nature of those materials, they come in all sorts of flavors - some with high barriers in the middle, some with low barriers; some with nearly symmetric wells, some with very asymmetric wells. These TLS also "couple to strain" (that's how they talk to lattice vibrations and influence thermal and acoustic properties), meaning that if you stretch or squish the material, you raise one well and lower the other by an amount proportional to the stretching or squishing.
When I was a grad student, there were a tiny number of experiments that attempted to examine individual TLS, but in most disordered materials they could only be probed indirectly. Fast forward 20 years. It turns out that superconducting structures developed for quantum computing can be extremely sensitive to the presence of TLS, which typically exist in the glassy metal oxide layers used as tunnel barriers or at the surfaces of the superconductors. A very cool new paper on the arxiv shows this extremely clearly. If you look at Figure 2d, they are able to track the energy splittings of the TLS while straining the material (!), and they can actually see direct evidence of TLS talking coherently to each other. There are "avoided crossings" between TLS levels, meaning that occasionally you end up with TLS pairs that are close enough to each other that energy can slosh coherently back and forth between them. I find this level of detail very impressive, and the TLS story continues to be a striking example of theorists concocting a model based on comparatively scant information, and experimentalists then validating it well beyond the original expectations. From the quantum computing perspective, though, these little entities are not a good thing, and they demonstrate a maxim I formulated as a grad student: "TLSs are everywhere, and they're evil."
(On the quantitative side: If the energy difference between the bottoms of the two wells is \(\Delta\), and the tunneling matrix element that would allow transitions between the two wells is \(\Delta_{0}\), then a very simple calculation says that the energy difference between the ground state of this system and the first excited state is given by \(\sqrt{\Delta^{2} + \Delta_{0}^{2}}\). If coupling to strain linearly tunes \(\Delta\), then that energy splitting should trace out a shape just like the curves seen in Fig. 2d of the paper.)
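Here's a minimal numerical check of that statement (a sketch assuming the standard two-state Hamiltonian written in the left/right-well basis, with the asymmetry \(\Delta\) on the diagonal and the tunneling element \(\Delta_{0}\) off the diagonal):

```python
# Diagonalize the standard TLS Hamiltonian in the left/right-well basis,
#   H = (1/2) [[Delta, Delta_0], [Delta_0, -Delta]],
# and check that the splitting between the two eigenstates is
# sqrt(Delta^2 + Delta_0^2).
import numpy as np

delta_0 = 1.0                             # tunneling matrix element (arb. units)
for delta in np.linspace(-3.0, 3.0, 7):   # strain-tuned well asymmetry
    H = 0.5 * np.array([[delta, delta_0],
                        [delta_0, -delta]])
    splitting = np.diff(np.linalg.eigvalsh(H))[0]
    print(f"Delta = {delta:+.1f}:  splitting = {splitting:.3f}, "
          f"sqrt(Delta^2 + Delta_0^2) = {np.hypot(delta, delta_0):.3f}")
```

Sweeping \(\Delta\) through zero, as strain tuning effectively does, traces out exactly the hyperbolic shape seen in the strain-dependent data.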
Wednesday, March 11, 2015
Table-top particle physics
We had a great colloquium here today by Dave DeMille from Yale University. He spoke about his group's collaborative measurements (working with John Doyle and Gerry Gabrielse at Harvard) trying to measure the electric dipole moment of the electron. When we teach students, we explain that, as far as we have been able to determine, an electron is a truly pointlike particle (infinitesimal in size) with charge -e and spin 1/2. That is, it has no internal structure (though somehow it carries intrinsic angular momentum, but that is a story for another day), and attempts to probe the charge distribution of the electron (e.g., scattering measurements) indicate that its charge is distributed in a spherically symmetric way.
We know, though, that from the standpoint of a quantum field theory like quantum electrodynamics, we should actually think of the electron as being surrounded by a cloud of "virtual" particles of various sorts. In Feynman-like language, when an electron goes from here to there, we need to consider not just the direct path, but also the quantum amplitudes for paths with intermediate states (that could be classically forbidden), like spitting out and reabsorbing a photon between here and there. Those paths give rise to important, measurable consequences, like the Lamb shift, so we know that they're real. Where things get very interesting is when you wonder about more complicated corrections involving particles that break time reversal symmetry (like B and K mesons). If you throw in what we know from the Standard Model of particle physics, those corrections lead to the conclusion that there actually should be a non-zero electric dipole moment of the electron. That is, along its axis of "spin", there should be a slight deficit of negative charge at the north pole and excess of negative charge at the south pole, corresponding to a shift of the charge of the electron by about \(10^{-40}\) cm. That is far too small to measure.
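To get a sense of just how small that is, here is a quick order-of-magnitude exercise (a sketch; the \(10^{-40}\) cm displacement is the figure quoted above, while the 100 kV/cm "lab-scale" field is just an illustrative choice of mine):

```python
# How small is an electron EDM corresponding to a charge displacement of
# ~1e-40 cm?  Express it in SI units and as an energy shift in a strong
# (but lab-scale) electric field.

e = 1.602e-19                   # elementary charge, C
displacement_m = 1e-40 * 1e-2   # 1e-40 cm, converted to meters

d_SI = e * displacement_m       # dipole moment, C*m (~1.6e-61 C*m)

E_lab = 100e3 / 1e-2            # an illustrative 100 kV/cm field, in V/m
shift_eV = d_SI * E_lab / e     # energy shift of the electron, in eV

print(f"dipole moment: 1e-40 e*cm = {d_SI:.1e} C*m")
print(f"energy shift in a 100 kV/cm field: {shift_eV:.1e} eV")   # ~1e-35 eV
```

An energy shift of order \(10^{-35}\) eV is hopelessly beyond direct detection; this is part of why experiments like DeMille's use polar molecules such as ThO, whose enormous internal effective electric fields (far larger than anything achievable in a lab) greatly amplify any EDM signal.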
However, suppose that there are more funky particles out there (e.g., dark matter candidates like the supersymmetric particles that many people predict should be seen at the LHC or larger colliders). If those particles have masses on the TeV scale (that'd be convenient), there is then an expectation that there should be a detectable electric dipole moment. DeMille and collaborators have used extremely clever atomic physics techniques involving optical measurements on beams of ThO molecules in magnetic and electric fields to look, and they've pushed the bound on any such moment (pdf) to levels that already eliminate many candidate theories.
Two comments. First, this talk confirmed for me once again that you really have to have a special kind of personality to do truly precision measurements. The laundry list of systematic error sources that they considered is amazing, as are the control experiments. Second, I love this kind of thing, using "table-top" experiments (for certain definitions of "table") to get at particle physics questions. Note that the entire cost of the experiment over several years so far has been around $2M. That's not even a rounding error on the LHC budget. Sustained investment at a decent level in this kind of work may have enormous bang-for-the-buck compared with building ever-larger colliders.
Tuesday, March 03, 2015
March Meeting, days 1 and 2
I am sufficiently buried in work that it's been difficult to come up with my annual March Meeting blog reports. Here is a very brief list of some cool things I've seen:
- Jen Dionne from Stanford showed a very neat combination of tomography and cathodoluminescence, using a TEM with tilt capability to map out the plasmon modes of individual asymmetric "nanocup" particles (polystyrene core, gold off-center shell).
- Shilei Zhang presented what looks to me like a very clever idea, a "magnetic abacus" memory, which uses the spin Hall effect in a clever readout scheme as well as spin transfer torque to flip bits.
- I've seen a couple of talks about using interesting planar structures for optical purposes. Harry Atwater spoke about using plasmons in graphene to make tunable resonant elements for, e.g., photodetection and modified emissivity (tuning black body radiation!). My former Bell Labs department head Federico Capasso spoke about using designer dielectric resonator arrays to make "metasurface" optical elements (basically optical phased arrays) to do wild things like achromatic beam steering.
- Chris Adami had possibly the most ambitious title, "The Evolutionary Path Toward Sentient Robots". Spoiler: we are far from having to worry about this.
- Michael Coey spoke about magnetism at interfaces, including a weird result in CeO2 nanoparticles that appears to have its origins in giant orbital paramagnetism.
- There was a neat talk by Ricardo Ruiz from HGST about the amazing nanofabrication required for future hard disk storage. Patterned media (with 10 nm half-pitch of individual magnetic islands) looks like it's on the way.