A blog about condensed matter and nanoscale physics. Why should high energy and astro folks have all the fun?
Wednesday, September 29, 2010
Reductionism, emergence, and Sean Carroll
In the last couple of weeks, Sean Carroll has made two separate posts (here and here) on his widely read Cosmic Variance blog at Discover magazine in which he points out, in a celebratory tone, that we fully understand the laws of physics that govern the everyday world. In a reductionist sense he's right: nonrelativistic quantum mechanics + electricity and magnetism (+ quantum electrodynamics and a little special relativity) are the basic rules underlying chemistry, biology, solid state physics, etc. This is not a particularly new observation. Fifteen years ago, when I was a grad student at Stanford, Bob Laughlin was making the same point, but for a different reason: to argue that this reductionist picture is in many ways hollow. I think Sean gets this, but the way he has addressed the topic, twice, really makes me wonder whether he believes it, since beneath all the talk about how impressive it is that humanity has this much understanding lurks the implication that the rest of non-high-energy physics (or non-cosmology) is somehow just detail work that isn't getting at profound, fundamental questions.

The emergence of rich, complex, often genuinely "new" physics from systems that obey comparatively simple underlying rules is the whole point of condensed matter these days. For example, the emergence, in 2d electronic systems in semiconductors, of low-energy excitations that carry fractional charge and obey non-Abelian statistics is not just a detail - it's really wild stuff, with profound connections to fundamental physics. So while Sean is right, and we should be proud as a species of how much we've learned, not everything deep comes out of reductionism, and some fraction of physicists need to stop acting like it does.
Grad school, rankings, and geniuses
At long last, the National Research Council has released its rankings of graduate programs, the first such ranking since 1993. The methodology is extremely complicated, and the way the data are presented is almost opaque. This is a side effect of an effort to address the traditional problem with rankings: the ridiculousness of trying to assign a single number to something as complex and multivariate as a graduate program. The NRC has gone out of its way to make it possible to compare programs on many criteria, and that's generally a good thing, but it also makes navigating the data painful.

The best aid I've seen for this is this Flash app by the Chronicle of Higher Education. It does a great job of showing, graphically, the range of rankings relevant for a particular program, and you can do side-by-side comparisons of multiple programs. As I had suspected, most programs have a fairly broad range of possible rankings, except those at the very top (e.g., Harvard's physics department sits, according to the "S" rankings - the ones based on the metrics that faculty members themselves identified as important - somewhere between 1 and 3 in the country). One thing to note: the "S" rankings probably say more about department quality than the pure research "R" rankings, since the "R" rankings naturally favor larger departments. The other thing that becomes obvious after playing with the app for a few minutes is that some departments had clear data-entry problems in their NRC data. For example, my own department appears to have "zero" interdisciplinary faculty, which is just wrong and undoubtedly didn't help our ranking.
In other news, the MacArthur Foundation has released its 2010 list of Fellows, recipients of what are known colloquially as "genius grants". I'm only familiar with a few of the fellows whose work touches on physics, and they are all very good and extremely creative, which is exactly the point, I guess! Congratulations, all. Now let the speculation begin on the Nobel Prizes, which will be announced next week.
Finally, I wanted to link to this great post by my friend Jennifer Rexford, who has intelligent advice for first-year graduate students.
Monday, September 20, 2010
Nanostructures as optical antennas
My student (with theorist collaborators) had a paper published online in Nature Nanotechnology yesterday, which gives me an excuse to talk about using metal nanostructures as optical antennas. The short version: using metal electrodes separated by a sub-nanometer gap as a kind of antenna, we have been able to achieve local enhancement of the electromagnetic intensity by roughly a factor of a million (!), and to determine that enhancement experimentally via tunneling measurements.
As I've discussed previously, light can excite collective excitations (plasmons) of the electronic fluid in a metal. Because these plasmons involve displacing the electrons relative to the ions, they are associated with local electric fields at the metal surface. When the incident light is resonant with the natural frequency of these modes, the result can be local electromagnetic fields near the metal that can significantly exceed the fields from the incident light. These enhanced local fields can be useful for many things, from spectroscopy to nonlinear optics. One way to get particularly large field enhancements is to look at the region separating two very closely spaced plasmonic structures. For example, closely spaced metal nanoparticles have been used to enhance fields sufficiently in the interparticle gap to allow single-molecule Raman spectroscopy (see here and here).
A major challenge, however, has been to get an experimental measure of those local fields in such gaps. That is where tunneling comes in. In a tunnel junction, electrons are able to "tunnel" quantum mechanically from one electrode to the other. The resulting current as a function of voltage may be slightly nonlinear, meaning that (unlike in a simple resistor) the second derivative of current with respect to voltage (d²I/dV²) is non-zero. From a simple math argument, the presence of a nonlinearity like this means that an AC voltage applied across the junction gives rise to a DC current proportional to the nonlinearity, a process called "rectification".

We have turned this around. We use low-frequency (kHz) electronic measurements to determine the nonlinearity. We then measure the component of the DC current due to light shining on the junction (for experts: we can do this with lock-in methods at the same time as measuring the nonlinearity). We can then use the measured nonlinearity and photocurrent to determine the optical-frequency voltage that must be driving the tunneling photocurrent. From the tunneling conductance, we can also estimate the distance scale over which tunneling takes place. Dividing the optical-frequency voltage by that distance gives us the optical-frequency electric field at the tunneling gap, which may be compared with the field of the incident light to get the enhancement.
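For those who want to see the "simple math argument" spelled out, here is a schematic version (my shorthand, not the full treatment in the paper):

```latex
% Schematic rectification argument. Expand the tunneling current about the DC
% bias V_0 for a small optical-frequency voltage V_opt cos(omega t) across the gap:
\[
I(V_0 + V_{\mathrm{opt}}\cos\omega t) \approx I(V_0)
  + \left.\frac{dI}{dV}\right|_{V_0} V_{\mathrm{opt}}\cos\omega t
  + \frac{1}{2}\left.\frac{d^2I}{dV^2}\right|_{V_0} V_{\mathrm{opt}}^2\cos^2\omega t .
\]
% Time-averaging (cos averages to 0, cos^2 to 1/2) leaves a DC photocurrent
% set by the nonlinearity:
\[
I_{\mathrm{photo}} \approx \frac{1}{4}\,\left.\frac{d^2I}{dV^2}\right|_{V_0} V_{\mathrm{opt}}^2 ,
\]
% so the measured photocurrent and nonlinearity give V_opt. Dividing by the
% tunneling distance d gives the optical-frequency field in the gap; comparing
% with the incident field E_inc gives the field enhancement, and the intensity
% enhancement is its square:
\[
\frac{E_{\mathrm{gap}}}{E_{\mathrm{inc}}} \approx \frac{V_{\mathrm{opt}}/d}{E_{\mathrm{inc}}},
\qquad
\frac{I_{\mathrm{gap}}}{I_{\mathrm{inc}}} \approx \left(\frac{E_{\mathrm{gap}}}{E_{\mathrm{inc}}}\right)^{2}.
\]
```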
It's not at all obvious on the face of it that this should work. After all, the analysis relies on the idea that the tunneling nonlinearity measured at kHz frequencies is still valid at frequencies nearly 10¹² times higher. Experimentally, the data show that this does work, however, and our theorist colleagues are able to explain why.
When you think about it, it's pretty amazing. The radiation intensity in the little nanogap between our electrodes can be hundreds of thousands or millions of times higher than that from the incident laser. Wild stuff, and definitely food for thought.
Thursday, September 16, 2010
Interesting links - nonphysics, mostly.
Nothing as interesting as this happens around here (at least, not to my knowledge), and I'm kind of glad.
xkcd has once again done a far better job demonstrating some aspect of my existence than I ever could have myself.
Fascinating photography of nuclear weapons explosions here.
Tangentially related to nuclear weapons, I got a big kick out of Stephen Colbert's Dr. Strangelove tribute.
Monday, September 13, 2010
Gravity
There has been a good deal of talk lately about gravity. We're all taught early in our science education about the remarkable insight of Isaac Newton, that the force that causes, e.g., apples to fall from trees is, in fact, the same force that keeps the moon in orbit about the earth (or rather about a common center of mass relatively close to the center of the earth). The Newtonian gravitational constant, G, is the least precisely known of all the fundamental constants, in part because gravity is a shockingly weak force and therefore difficult to measure. (As I demonstrated to my freshman students, gravity is so weak that even with the feeble muscles in my legs I can jump up in the air in defiance of the opposing pull of the entire earth.)

More frustrating than the difficulty of precisely measuring G is the fact that different research groups using different techniques come up with experimental estimates of G that differ by surprisingly large amounts. This paper (published last week in Phys. Rev. Lett.) is another example. The authors sweated over the details of their systematic uncertainties for two years before publishing this result, which disagrees with the "official" CODATA value for G by 10 sigma (!). This is a classic showcase for the art, elegance, and attention to detail required in precision measurement physics.
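To put a number on "shockingly weak," here's a back-of-the-envelope comparison (my own illustrative numbers, nothing to do with the PRL measurement): the gravitational attraction between two 1 kg masses a meter apart, versus the pull of the entire earth on one of them.

```python
# Illustrative calculation of how weak gravity is. All values are round numbers.
G = 6.674e-11       # Newtonian gravitational constant, m^3 kg^-1 s^-2
m1 = m2 = 1.0       # kg
r = 1.0             # m
g = 9.81            # m/s^2, acceleration due to the earth's gravity at the surface

F_pair = G * m1 * m2 / r**2    # attraction between the two test masses
F_weight = m1 * g              # force of the entire earth on one of them

print(f"Force between the masses: {F_pair:.2e} N")    # ~6.7e-11 N
print(f"Weight of one mass:       {F_weight:.2f} N")  # ~9.8 N
print(f"Ratio:                    {F_weight / F_pair:.1e}")  # ~1.5e11
```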
Also making waves in 2010 is this paper by Erik Verlinde, which claims that gravity is emergent rather than a "real" force. It has been argued ever since Einstein published general relativity that gravity is different at a deep level from other forces. GR says we should think of gravity as a deformation of spacetime due to the presence of stress/energy. Freely falling particles travel on geodesics (locally straight lines), and those geodesics are determined by the distribution of mass and energy (including that due to the spacetime deformation itself). In the appropriate limit, GR reduces to Newtonian gravity.

Verlinde, striking out in a completely different direction, argues that one can start from very general considerations and find that gravity emerges as an "entropic" force. An entropic force is an apparent force that results from the tendency of matter and energy to explore all available microscopic states. For example, a polymer tends to ball up because there are many more microscopic states describing the polymer wadded up than stretched out; pulling on the two ends of the chain to straighten it requires overcoming this entropic tendency, and the result is a tension force. Verlinde argues that gravity arises similarly. I need to re-read the paper - it's slippery in places, especially on what underlying background assumptions are made about time and space, and on what really plays the role of temperature here. Still, it's intriguing food for thought, and it's elegant that he can get both something GR-like and something Newtonian to fall out of such an analysis.
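For the curious, here is my compressed paraphrase of how the argument is supposed to go (very schematic, and glossing over precisely the issues about temperature and background assumptions that bother me):

```latex
% An entropic force in general (e.g., polymer tension): F = T \, dS/dx.
% Entropy change of a holographic screen as a mass m approaches it by \Delta x
% (a Bekenstein-style assumption):
\[
\Delta S = 2\pi k_B \,\frac{m c}{\hbar}\,\Delta x .
\]
% Assign the screen the Unruh temperature for acceleration a; the entropic
% force relation F\,\Delta x = T\,\Delta S then reproduces Newton's second law:
\[
k_B T = \frac{\hbar a}{2\pi c}
\quad\Rightarrow\quad
F = \frac{T\,\Delta S}{\Delta x} = m a .
\]
% For a spherical screen of radius R enclosing a mass M, take N = A c^3/(G\hbar)
% bits on the screen (A = 4\pi R^2) and equipartition E = \tfrac{1}{2} N k_B T
% with E = M c^2. Eliminating T gives Newtonian gravity as an entropic force:
\[
k_B T = \frac{G M \hbar}{2\pi c R^2}
\quad\Rightarrow\quad
F = \frac{2\pi k_B m c}{\hbar}\,T = \frac{G M m}{R^2}.
\]
```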
Regardless of how you may feel about Verlinde's speculations and the difficulty of measuring G, at least you can laugh in shocked disbelief that these people are serious. (I should be careful making jokes. Knowing Rick Perry, they'll start pushing this in Texas public schools next year.)
Tuesday, September 07, 2010
Two for the price of one.
I had noticed (and it was also pointed out by a colleague) the essentially simultaneous publication of this paper and this paper (which appear to have been submitted within a week of each other as well). In both papers, the authors create short-channel graphene-based transistors in a clever way. They take a conductive nanowire (doped GaN in the Nano Letters paper; CoSi in the Nature paper), coat it with a thin layer of aluminum oxide via atomic-layer deposition, and then lay it down on top of a piece of exfoliated graphene. Then they evaporate Pt on top of the device. On either side of the nanowire, the Pt lands on the graphene, forming source and drain electrodes. The nanowire shadows part of the graphene (the channel), and the nanowire itself acts as the gate. This is a nice, self-aligned process, and the resulting graphene devices appear to be very fast (the Nature paper has actual high-frequency measurements). Looks like they managed to get two papers in good journals for the price of one technique advance.
Sunday, September 05, 2010
Arguing from authority? Hawking, you're supposed to be better than that.
In Saturday's Wall Street Journal, there was an article by Stephen Hawking and Leonard Mlodinow clearly designed as a naked promotion of their new book. In the article, they argue that modern physics removes the need for a divine being to have created the universe. Religious arguments aside (seriously, guys, is that particular argument even news anymore?), one thing in the article especially annoyed me. Toward the end, the authors state:
As recent advances in cosmology suggest, the laws of gravity and quantum theory allow universes to appear spontaneously from nothing. Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God to light the blue touch paper and set the universe going.
Our universe seems to be one of many, each with different laws.
You know what's wrong with this? It states, as if it were established fact, that we understand cosmology well enough to declare that universes spontaneously self-create. It asserts that the multiverse is a prediction of "many" theories, strongly implying that the idea is on firm ground. The problem is, this isn't science. It's not falsifiable, and in its present form it's not even close to being falsifiable in the foreseeable future. Seriously, name one PREdiction (as opposed to retrodiction) of these cosmological models, or, more to the point, of the multiverse/landscape idea, that is testable. Don't claim that our existence is such a test - the anthropic principle is weak sauce and is by no means evidence of the multiverse. Man, it annoys me when high-profile theorists (it always seems to be theorists who do this) forget that physics is an experimental science that rests on predictive power.
Friday, September 03, 2010
This won't end well, because it's blindingly idiotic.
According to the Chronicle of Higher Education, my Texas A&M colleagues up the road in College Station now get the privilege of being evaluated based on their bottom-line "financial value" to the university. Take how much money a professor brings in (including a share of tuition based on the number of students taught), subtract their salary, and there you go. The problems should be obvious to anyone with two brain cells to rub together. First, I guess it sucks to be in the humanities and social sciences - you almost certainly have negative value in this ranking. Congratulations, you leeches who draw a salary and don't bring in big research funding! Second, it firmly establishes that faculty service contributions to the university are worthless in this scheme. Third, it establishes that the only measure of your educational contribution is how many students you teach - pure quantity, so if you teach large intro classes you're somehow valuable, but if you teach smaller upper-division courses, you're less so. Gee, that's not simplistic at all. Now, the article doesn't actually say how these rankings will be used, but I'm having a hard time imagining a way that this metric is a good idea.
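Just to emphasize how crude the arithmetic is, here is the metric as the article describes it, with entirely invented names and dollar figures:

```python
# Sketch of the "financial value" metric as described in the Chronicle article:
# research dollars plus a tuition share, minus salary. All numbers are made up.
def net_value(research_funding, students_taught, tuition_share_per_student, salary):
    return research_funding + students_taught * tuition_share_per_student - salary

# A grant-funded scientist teaching a big intro course:
print(net_value(research_funding=400_000, students_taught=300,
                tuition_share_per_student=1_000, salary=120_000))   # +580000

# A humanities professor teaching small seminars, with no external funding:
print(net_value(research_funding=0, students_taught=40,
                tuition_share_per_student=1_000, salary=90_000))    # -50000
```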
Wednesday, September 01, 2010
Silicon oxide and all that.
It's been a busy week work-wise, hence the low rate of blogging. However, I would be remiss if I failed to talk about the science behind a paper (on which I am a coauthor) that was mentioned on the front page of the New York Times yesterday. A student, Jun Yao, co-advised by my colleagues Jim Tour and Lin Zhong, did a really elegant experiment that has gotten a lot of attention, and the science is pretty neat.

Here's the deal. Lots of people have done experiments in which they see what appears to be nonvolatile switching of the electrical resistance in various nanoscale systems (e.g., junctions made from nanotubes and other nanomaterials). That is, with the use of voltage pulses, the electrical resistance of a device may be programmed to be comparatively high or comparatively low, and that state is preserved for a looooong time. Long story short: sometimes this behavior has nothing in particular to do with the nanoscale system being studied, and really results from the properties of the underlying or nearby silicon oxide, which is usually treated as inert and boring. Well, as people in the Si industry can tell you at length, silicon oxide isn't necessarily inert and boring. What Jun showed, via some elegant cross-sectional transmission electron microscopy, is that when big voltage pulses are applied across small distances, it is possible to modify the oxide, effectively doing electrochemistry and turning some of the oxide back into Si nanocrystals. When those nanocrystals form a hopping path from one electrode to the other, the device is "on"; when that path is broken, the device is "off". The nanocrystals themselves are quite small, on the order of a few nm, hence the excitement about possibly using this approach for very dense nonvolatile memory. There are, of course, a great many engineering issues to be overcome (there's no need to tell me about that in the comments....), but it is definitely a pretty science result.