A blog about condensed matter and nanoscale physics. Why should high energy and astro folks have all the fun?
Wednesday, December 29, 2010
I was just able to help out my postdoc by pulling an old Bell Labs notebook from 11.5 years ago off my bookshelf and showing him a schematic of an electrical measurement technique. This is an object lesson in why it is a good idea to keep a clear, complete lab notebook! I try very hard to impress upon undergrad and graduate students alike that it's critically important to keep good notes, even (perhaps especially) in these days of electronic data acquisition and analysis. I've never once looked back and regretted how much time I spent writing things down, or how much paper I used - good record keeping has saved my bacon (and lots of time) on multiple occasions. Unfortunately, with rare exceptions, students come into the university (at the undergrad or grad levels) seemingly determined to write down as little as possible, on as few sheets of paper as they can manage. Somewhere along the way (before grad school, though my thesis advisor was outstanding about this), it got pounded into my brain: if you didn't document it, you didn't do it. Perhaps we should make a Facebook-like or Twitter-like application that would sucker student researchers into obsessively updating their work status....
Tuesday, December 28, 2010
Statistical mechanics: still work to be done!
Statistical mechanics, the physics of many-particle systems, is a profound intellectual achievement. A statistical approach to systems with many degrees of freedom makes perfect sense. It's ridiculous to think about solving Newton's laws (or the Schroedinger equation, for that matter) for all the gas molecules in this room. Apart from being computationally intractable, it would be silly for the vast majority of issues we care about, since the macroscopic properties of the air in the room are approximately the same now as they were when you began reading this sentence. Instead of worrying about every molecule and their interactions, we characterize the macroscopic properties of the air by a small number of parameters (the pressure, temperature, and density). The remarkable achievement of statistical physics is that it places this on a firm footing, showing how one can go from the microscopic degrees of freedom, through a statistical analysis, and out the other side with the macroscopic parameters.
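Just to put a number on "computationally intractable," here is a rough sketch of how many molecules we're talking about, via the ideal gas law; the room dimensions are an assumption purely for illustration.

```python
# Rough count of gas molecules in a room, via the ideal gas law PV = N k_B T.
# The room size here is an illustrative assumption, nothing more.
k_B = 1.380649e-23    # Boltzmann constant, J/K
P = 101325.0          # atmospheric pressure, Pa
T = 295.0             # room temperature, K
V = 5.0 * 4.0 * 3.0   # assumed 5 m x 4 m x 3 m room, in m^3

N = P * V / (k_B * T)
print(f"N ~ {N:.1e} molecules")   # ~1.5e27: hopeless to track individually
```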
Monday, December 20, 2010
Science's Breakthrough of the Year for 2010
Science Magazine has named the work of a team at UCSB directed by Andrew Cleland and John Martinis as their scientific breakthrough of the year for 2010. Their achievement: the demonstration of a "quantum machine". I'm writing about this for two reasons. First, it is extremely cool stuff that has a nano+condensed matter focus. Second, this article and this one in the media have so many things wrong with them that I don't even know where to begin, and upon reading them I felt compelled to try to give a better explanation of this impressive work.
One of the main points of quantum mechanics is that systems tend to take in or emit energy in "quanta" (chunks of a certain size) rather than in any old amount. This quantization is the reason for the observation of spectral lines, and mathematically is rather analogous to the fact that a guitar string can ring at a discrete set of harmonics and not at any arbitrary frequency. The idea that a quantum system at low energies can have a very small number of states, each corresponding to a certain specific energy, is familiar (in slightly different language) to every high school chemistry student who has seen s, p, and d orbitals and talked about the Bohr model of the atom. The quantization of energy shows up not just in electronic transitions (the case discussed so far), but also in mechanical motion. Vibrations are quantized, too: in quantum mechanics, a perfect ball-on-a-spring mechanical oscillator with mechanical frequency f can only emit or absorb energy in amounts of size hf, where h is Planck's constant. Furthermore, there is some lowest-energy allowed state of the oscillator called the "ground state". Again, this is all old news, and such vibrational quantization is clear as a bell in many spectroscopy techniques (infrared absorption; Raman spectroscopy).
The first remarkable thing done by the UCSB team is to manufacture a mechanical resonator containing millions of atoms, and to put that whole object into its quantum ground state (by cooling it so that the thermal energy scale is much smaller than hf for that resonator). In fact, that's the comparatively easy part. The second (and really remarkable) thing that the UCSB team did was to confirm experimentally that the resonator really was in its ground state, and to deliberately add and take away single quanta of energy from the resonator. This is very challenging to do, because quantum states can be quite delicate - it's very easy to have your measurement setup mess with the quantum system you're trying to study!
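To get a feel for why even the "comparatively easy part" requires serious cryogenics, here is a back-of-the-envelope sketch. I'm assuming, purely for illustration, a resonator frequency in the few-GHz range; the actual device parameters are in the paper.

```python
# Temperature scale at which thermal energy equals one vibrational quantum,
# k_B * T = h * f, for an assumed (illustrative) GHz-scale resonator.
h = 6.62607015e-34    # Planck constant, J s
k_B = 1.380649e-23    # Boltzmann constant, J/K
f = 6.0e9             # assumed resonator frequency, Hz

T_quantum = h * f / k_B
print(f"h f / k_B = {T_quantum * 1e3:.0f} mK")   # ~290 mK
# The resonator has to sit well below this scale (tens of mK) for the
# ground state to dominate.
```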
What is the point? Well, on the basic science side, it's of fundamental interest to understand just how complicated many-particle systems behave when they are placed in highly quantum situations. That's where much of the "spookiness" of quantum physics lurks. On the practical side, the tools developed to do these kinds of experiments are one way that people like Martinis hope to build quantum computers. I strongly encourage you to watch the video on the Science webpage (it should be free access with registration); it's a thorough discussion of this impressive achievement.
Tuesday, December 14, 2010
Taking temperatures at the molecular scale
As discussed in my previous post, temperature may be associated with how energy is distributed among microscopic degrees of freedom (like the vibrational motion of atoms in a solid, or how electrons in a metal are placed into the allowed electronic energy levels). Moreover, it takes time for energy to be transferred (via "inelastic" processes) among and between the microscopic degrees of freedom, and during that time electrons can actually move pretty far, on the nano scale of things. This means that if energy is pumped into the microscopic degrees of freedom somehow, it is possible to drive those vibrations and electronic distributions way out of their thermal equilibrium configurations.
So, how can you tell if you've done that? With macroscopic objects, you can think about still describing the nonequilibrium situation with an effective temperature, and measuring that temperature with a thermometer. For example, when cooking a pot roast in the oven (this example has a special place in the hearts of many Stanford graduate physics alumni), the roast is out of thermal equilibrium but in an approximate steady state. The outside of the roast may be brown, crisp, and at 350 F, while the inside of the pot roast may be pink, rare, and 135 F. You could find these effective temperatures (effective because strictly speaking temperature is an equilibrium parameter) by sticking a probe thermometer at different points on the roast, and as long as the thermometer is small (little heat capacity compared to the roast), you can measure the temperature distribution.
What about nanoscale systems? How can you look at the effective temperature or how the energy is distributed in microscopic degrees of freedom, since you can't stick in a thermometer? For electrons, one approach is to use tunneling (see here and here), which is a topic for another time. In our newest paper, we use a different technique, Raman spectroscopy.
Monday, December 13, 2010
Temperature, thermal equilibrium, and nanoscale systems
In preparation for a post about a new paper from my group, I realized that it will be easier to explain why the result is cool if I first write a bit about temperature and thermal equilibrium in nanoscale systems. I've tried to write about temperature before, and in hindsight I think I could have done better. We all have a reasonably good intuition for what temperature means on the macroscopic scale: temperature tells us which way heat flows when two systems are brought into "thermal contact". A cool coin brought into contact with my warm hand will get warmer (its temperature will increase) as my hand cools down (its temperature will locally decrease). Thermal contact here means that the two objects can exchange energy with each other via microscopic degrees of freedom, such as the vibrational jiggling of the atoms in a solid, or the particular energy levels occupied by the electrons in a metal. (This is in contrast to energy in macroscopic degrees of freedom, such as the kinetic energy of the overall motion of the coin, or the potential energy of the coin in the gravitational field of the earth.)
We can turn that around, and try to use temperature as a single number to describe how much energy is distributed among the (microscopic) degrees of freedom. This is not always a good strategy. In the coin I was using as an example, you can conceive of many ways to distribute vibrational energy. Number all the atoms in the coin, and have the even-numbered atoms moving to the right and the odd-numbered atoms moving to the left at some speed at a given instant. That certainly would have a bunch of energy tied up in vibrational motion. However, that weird and highly artificial arrangement of atomic motion is not what one would expect in thermal equilibrium. Likewise, you could imagine looking at all the electronic energy levels possible for the electrons in the coin, and promoting every third electron up to some high unoccupied energy level. That distribution of energy in the electrons is allowed, but not the sort of thing that would be common in thermal equilibrium. There are certain vibrational and electronic distributions of energy that are expected in thermal equilibrium (when the system has sat long enough that it has reached steady state as far as its statistical properties are concerned).
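For reference, the equilibrium expectations I have in mind are the standard textbook distributions, with a single temperature T appearing in both:

```latex
% Equilibrium occupation of a vibrational mode of frequency \omega
% (Bose-Einstein) and of an electronic level of energy E (Fermi-Dirac):
\langle n(\omega) \rangle = \frac{1}{e^{\hbar\omega/k_B T} - 1},
\qquad
f(E) = \frac{1}{e^{(E - \mu)/k_B T} + 1}.
```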
How long does it take a system to reach thermal equilibrium? That depends on the system, and this is where nanoscale systems can be particularly interesting. For example, there is some characteristic timescale for electrons to scatter off each other and redistribute energy. If you could directly dump in electrons with an energy 1 eV (one electron volt) above the highest occupied electronic level of a piece of metal, it would take time, probably tens of femtoseconds, before those electrons redistributed their energy by sharing it with the other electrons. During that time period, those energetic electrons can actually travel rather far. A typical (classical) electron velocity in a metal is around 10^6 m/s, meaning that the electrons could travel tens of nanometers before losing their energy to their surroundings. The scattering processes that transfer energy from electrons into the vibrations of the atoms can be considerably slower than that!
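That distance estimate is just speed times time; a trivial sketch, with an assumed electron-electron scattering time of 50 fs:

```python
# Distance traveled by an energetic electron before it shares its energy.
v = 1.0e6       # typical electron speed in a metal, m/s
tau = 50e-15    # assumed electron-electron energy relaxation time, s

d = v * tau
print(f"d ~ {d * 1e9:.0f} nm")   # ~50 nm
```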
The take-home messages:
1) It takes time for electrons and vibrations to arrive at a thermal distribution of energy described by a single temperature number.
2) During that time, electrons and vibrations can have energy distributed in a way that can be complicated and very different from thermal distributions.
3) Electrons can travel quite far during that time, meaning that it's comparatively easy for nanoscale systems to have very non-thermal energy distributions, if driven somehow out of thermal equilibrium.
More tomorrow.
Saturday, December 11, 2010
NSF grants and "wasteful spending"
Hat tip to David Bacon for highlighting this. Republican whip Eric Cantor has apparently decided that the best way to start cutting government spending is to have the general public search through NSF awards and highlight "wasteful" grants that are a poor use of taxpayer dollars.
Look, I like the idea of cutting government spending, but I just spent two days in Washington DC sitting around a table with a dozen other PhD scientists and engineers arguing about which 12% of a large group of NSF proposals were worth trying to fund. I'm sure Cantor would brand me as an elitist for what I'm about to write, but there is NO WAY that the lay public is capable of making a reasoned critical judgment about the relative merits of 98% of NSF grants - they simply don't have the needed contextual information. Bear in mind, too, that the DOD budget is ONE HUNDRED TIMES larger than the NSF budget. Is NSF really the poster child of government waste? Seriously?
Tuesday, December 07, 2010
The tyranny of reciprocal space
I was again thinking about why it can be difficult to explain some solid-state physics ideas to the lay public, and I think part of the problem is what I call the tyranny of reciprocal space. Here's an attempt to explain the issue in accessible language. If you want to describe where the atoms are in a crystalline solid and you're not a condensed matter physicist, you'd either draw a picture, or say in words that the atoms are, for example, arranged in a periodic way in space (e.g., "stacked like cannonballs", "arranged on a square grid", etc.). Basically, you'd describe their layout in what a condensed matter physicist would call real space. However, physicists look at this and realize that you could be much more compact in your description. For example, for a 1d chain of atoms a distance a apart from each other, a condensed matter physicist might describe the chain by a "wavevector" k = 2π/a instead. This k describes a spatial frequency; a wave (quantum matter has wavelike properties) described by cos(kr) would go through a complete period (peak of wave to peak of wave, say) and start repeating itself over a distance a. Because k has units of 1/length, this wavevector way of describing spatially periodic things is often called reciprocal space. A given point in reciprocal space (k_x, k_y, k_z) implies particular spatial periodicities in the x, y, and z directions.
Why would condensed matter physicists do this - purely to be cryptic? No, not just that. It turns out that a particle's momentum (classically, the product of mass and velocity) in quantum mechanics is proportional to k for the wavelike description of the particle. Larger k (shorter spatial periodicity), higher momentum. Moreover, trying to describe the interaction of, e.g., a wave-like electron with the atoms in a periodic lattice is done very neatly by worrying about the wavevector of the electron and the wavevectors describing the lattice's periodicity. The math is very nice and elegant. I'm always blown away when scattering experts (those who use x-rays or neutrons as probes of material structure) can glance at some insanely complex diffraction pattern, and immediately identify particular peaks with obscure (to me) points in reciprocal space, thus establishing the symmetry of some underlying lattice.
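For the notationally inclined, here are the standard definitions lurking behind all of this (generic textbook relations, not tied to any particular material):

```latex
% Momentum of a wavelike particle with wavevector k:
p = \hbar k .
% For a 3d lattice with primitive vectors a_1, a_2, a_3, the reciprocal
% lattice is spanned by
\mathbf{b}_1 = 2\pi \, \frac{\mathbf{a}_2 \times \mathbf{a}_3}
{\mathbf{a}_1 \cdot (\mathbf{a}_2 \times \mathbf{a}_3)}
% (and cyclic permutations). Diffraction peaks appear when the change in
% the probe's wavevector equals a reciprocal lattice vector (the Laue
% condition), \Delta\mathbf{k} = \mathbf{G}.
```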
The problem is, from the point of view of the lay public (and even most other branches of physics), essentially no one thinks in reciprocal space. One of the hardest things you (as a condensed matter physicist) can do to an audience in a general (public or colloquium) talk is to start throwing around reciprocal space without some preamble or roadmap. It just shuts down many nonexperts' ability to follow the talk, no matter how pretty the viewgraphs are. Extreme caution should be used in talking about reciprocal space to a general audience! Far better to have some real-space description for people to hang onto.
Friday, December 03, 2010
A seasonal abstract
On the anomalous combustion of oleic and linoleic acid mixtures
J. Maccabeus et al., Hebrew University, Jerusalem, Judea
Olive-derived oils, composed primarily of oleic and linoleic fatty acids, have long been used as fuels, with well characterized combustion rates. We report an observation of anomalously slow combustion of such a mixture, with a burn rate suppressed relative to the standard expectations by more than a factor of eight. Candidate explanations for these unexpectedly slow exothermic reaction kinetics are considered, including the possibility of supernatural agencies intervening to alter the local passage of time in the vicinity of the combustion vessel.
(Come on, admit it, this is at least as credible as either this or this.)
Monday, November 29, 2010
Writing exams.
Writing (or perhaps I should say "creating", for the benefit of UK/Canada/Australia/NZ grammarians) good exams is not a trivial task. You want very much to test certain concepts, and you don't want the exam to measure things you consider comparatively unimportant. For example, the first exam I ever took in college was in honors mechanics; out of a possible 30 points, the mean was a 9 (!), and I got a 6 (!!). Apart from being a real wake-up call about how hard I would have to apply myself to succeed academically, that test was a classic example of an exam that did not do its job. The reason the scores were so low is that the test was considerably too long for the time allotted. Rather than measuring knowledge of mechanics or problem-solving ability, the test largely measured people's speed of work - not an unimportant indicator (brilliant, well-prepared people do often work relatively quickly), but surely not what the instructor cared most about, since there usually isn't a need for raw speed in real physics or engineering.
Ideally, the exam will have enough "dynamic range" that you can get a good idea of the spread of knowledge in the students. If the test is too easy, you end up with a grade distribution that is very top-heavy, and you can't distinguish between the good and the excellent. If the test is too difficult, the distribution is soul-crushingly bottom-heavy (leading to great angst among the students), and again you can't tell between those who really don't know what's going on and those who just slipped up. Along these lines, you also need the test to be comparatively straightforward to take (step-by-step multipart problems, where there are still paths forward even if one part is wrong) and to grade.
Thursday, November 18, 2010
Memristors - how fundamental, and how useful?
You may have heard about an electronic device called a memristor, a term originally coined by Leon Chua back in 1971, and billed as the "missing fourth fundamental circuit element". It's worth taking a look at what that means, and whether memristors are fundamental in the physics sense that resistors, capacitors, and inductors are. Note that this is an entirely separate question from whether such devices and their relatives are technologically useful!
In a resistor, electronic current flows in phase with the voltage drop across the resistor (assuming the voltage is cycled in an ac fashion). In the dc limit, current flows in steady state proportional to the voltage, and power is dissipated. In a capacitor, in contrast, the flow of current builds up charge (in the usual parallel plate concept, charge on the plates) that leads to the formation of an electric field between conducting parts, and hence a voltage difference. The current leads the voltage (current is proportional to the rate of change of the voltage); when a constant voltage is specified, the current decreases to zero once that voltage is achieved, and energy is stored in the electric field of the capacitor. In an inductor, the voltage leads the current - the voltage across an inductor, through Faraday's law, is proportional to the rate at which the current is changing. Note that in a standard inductor (usually drawn as a coil of wire), the magnetic flux through the inductor is proportional to the current (flux = L I, where L is the inductance). That means that if a certain current is specified through the inductor, the voltage drops to zero (in the ideal, zero-resistance case), and there is energy stored in the magnetic field of the inductor. Notice that there is a duality between the inductor and capacitor cases (current and voltage swapping roles; energy stored in either electric or magnetic field).
Prof. Chua pointed out that one could think of things a bit differently, and consider a circuit element where the magnetic flux (remember, in an inductor this would be proportional to the time integral of the voltage) is proportional to the charge that has passed through the device (the time integral of the current, rather than the current itself as in an inductor). No one has actually made such a device in terms of magnetic flux. However, what people have made are any number of devices where the relationship between current and voltage depends on the past history of the current flow through the device. One special case of this is the gadget marketed by HP as a memristor, consisting of two metal electrodes separated by a titanium oxide film. In that particular example, at sufficiently high bias voltage, the flow of current through the device performs electrochemistry on the titanium oxide, either reducing it to titanium metal or oxidizing it further, depending on the polarity of the flow. The result is that the resistance (the proportionality between voltage and current; in the memristor language, the proportionality between the time integral of the voltage and the time integral of the current) depends on how much charge has flowed through the device. Voila, a memristor.
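Chua's bookkeeping can be summarized compactly. Writing the charge as q (the time integral of the current) and the flux as Φ (the time integral of the voltage), the four pairwise relations among (V, I, q, Φ) are, schematically:

```latex
% Resistor, capacitor, inductor, and Chua's proposed memristor:
\mathrm{d}V = R\,\mathrm{d}I, \qquad
\mathrm{d}q = C\,\mathrm{d}V, \qquad
\mathrm{d}\Phi = L\,\mathrm{d}I, \qquad
\mathrm{d}\Phi = M(q)\,\mathrm{d}q .
% Since \Phi = \int V\,\mathrm{d}t and q = \int I\,\mathrm{d}t, the last
% relation is equivalent to V = M(q)\,I: a "resistance" that depends on
% how much charge has already flowed through the device.
```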
Monday, November 15, 2010
Great moments in consumer electronics
It's been an extremely busy time of the semester, and there appears to be no end in sight. There will be more physics posts soon, but in the meantime, I have a question for those of you out there that have Nintendo Wii consoles. (The Wii is a great example of micromachining technology, by the way, since the controller contains a 3-axis MEMS accelerometer, and the Wii Motion Plus also contains a micromachined gyroscope.) Apparently, if there is a power glitch, it is necessary to "reset your AC adapter" in order to power on the console. The AC adapter looks for all the world like an ordinary "brick" power supply, which I would think should contain a transformer, some diodes, capacitors, and probably voltage regulators. Resetting it involves unplugging it from both ends (the Wii and the power strip), letting it sit for two solid minutes, and then plugging it back directly into a wall outlet (not a power strip). What the heck did Nintendo put in this thing, and why does that procedure work, when plugging it back into a power strip does not?! Does Nintendo rely on poorly conditioned power to keep the adapter happy? Is this all some scheme so that they can make sure you're not trying to use a gray-market adapter? This is so odd that it seemed like the only natural way to try to get to the bottom of it (without following my physicist's inclination of ripping the adapter apart) was to ask the internet.
Wednesday, November 10, 2010
Paul Barbara
I was shocked and saddened to learn of the death of Paul Barbara, a tremendous physical chemist and National Academy of Sciences member at the University of Texas. Prof. Barbara's research focused largely on electron transfer and single-molecule spectroscopy, and I met him originally because of a mutual interest in organic semiconductors. He was very smart, funny, and a class act all the way, happy to talk science with me even when I was a brand new assistant professor just getting into our field of mutual interest. He will be missed.
Friday, November 05, 2010
Two cool videos, + science funding
Here are two extremely interesting videos related to physics topics. Both combine two things I enjoy in life: physics and coffee. Here is a video made by scientists at the Institut Laue-Langevin, a European neutron science laboratory in Grenoble. The scientists decided to use a neutron beam to image through a little espresso maker as it brews. They did this partly for fun, and partly to demonstrate how neutrons may be used to examine materials - for example, one could use this sort of imaging to look for flaws or cracks in turbine blades. The cross-section for interacting with neutrons varies quite strongly from element to element, giving good material contrast. The aluminum housing for the espresso maker shows up as very light gray, while the water (and resulting espresso, which is still mostly water, even when my old friend Sven makes it) shows up as very dark. This is because hydrogen is extremely effective at scattering (and, to a lesser extent, capturing) neutrons, so the water strongly attenuates the beam.
The second video I saw thanks to Charles Day's blog. To me, a former mechanical engineer, this is rather jaw-dropping. Hod Lipson and his graduate students at Cornell have managed to leverage a great piece of physics called the jamming transition. Many physics students are surprised to learn that some "simple" problems of classical statistical physics can exhibit complex phenomena and remain active subjects of research, even though they seem on the surface like they should have been solved by some 19th century French mathematician whose name started with L. The jamming transition is one of these problems. Take a bunch of dry grains (in this case, ground coffee). When there is a bit of air mixed in with the grains, the grains can slide over and past each other relatively easily. A latex balloon filled with this mixture is squishy. If the air is removed, however, the grains jam up, and the grain-filled balloon becomes very hard (as if the effective viscosity of the blob of grains diverges). The Cornell researchers have used this phenomenon to make a universal "gripper" for picking up objects. Just watch the movie. It's very impressive.
Wednesday, November 03, 2010
Data and backups
I don't talk too much on here about the university service stuff that I do - frankly, much of it wouldn't be very interesting to most of my readers. However, this year I'm chairing Rice University's Committee on Research, and we're discussing an issue that many of you may care about: data management and preservation. Generally, principal investigators are assumed to be "responsible custodians" of data taken during research. Note that "data" can mean many things in this context - see here, for example. US federal agencies that sponsor research typically expect PIs to hold on to their data for several years following the conclusion of a project, and to make their data available if requested. In fact, the university is legally responsible for ensuring that the data are retained. There are many issues that crop up here, but the particular one on which I'd like some feedback is university storage of electronic data. If you're at a university, does your institution provide electronic (or physical, for that matter) storage space for the retention of research data? Do they charge the investigators for that storage? What kind of storage is it, and is the transfer of data from a PI's lab, say, to that storage automated? I'd be very interested in hearing either success stories about university or institutional data management, or, alternatively, horror stories.
Monday, October 25, 2010
Wrap-up, Osheroff-fest
The symposium in honor of Doug Osheroff was great fun. It was great to see old friends again, to hear some stories that I didn't know, and to find out what other former group members are up to. The actual talks were generally pretty good, with a number of speakers focusing on how exciting and vibrant the whole field of low temperature physics was in its heyday. There were a total of seven Nobel Laureates there (DDO, Steve Chu, Bob Laughlin, Bob Richardson, Dave Lee, Phil Anderson, and Tony Leggett), and a bunch of other luminaries (Michael Fisher, Daniel Fisher, Bill Brinkman, Ted Geballe, and even a special and unexpected (by me, at least) appearance by Ed Witten). Steve Chu's talk was remarkable in part because he so clearly loved the chance to give an actual technical talk about some of his research, which you get the feeling he doesn't do so much at the DOE. Fun stuff, even when Bob Laughlin was giving me a hard time :-)
Sunday, October 24, 2010
Osheroff-fest
I am currently visiting Stanford for my thesis advisor's big birthday bash/retirement festivities. It's really great to see so many former students, postdocs, and collaborators, and it's more than a little surreal to be back here after so long. It's a shame that a few couldn't make it - they're sorely missed. There is going to be a day-long symposium tomorrow in his honor that should be very interesting. I'll post some brief descriptions of some of the talks, if they seem like they are of general interest.
Wednesday, October 20, 2010
Excellent talk today + the point of colloquia.
Today I was fortunate to host my department's weekly colloquium, with Prof. Wilson Ho from UC Irvine as the speaker. He gave a great talk about "Visualizing Quantum Mechanics", in which he showed (using experiments from his own group) how scanning tunneling microscopy can be a great teaching tool for illustrating concepts from undergraduate quantum mechanics. He covered the exponential dependence of tunneling on distance, imaging of molecular orbitals, the crossover between classical (activated) diffusion and quantum (tunneling-based) diffusion, particle-in-a-box physics in 1d atomic chains, visualization of Fermi's Golden Rule via light emission experiments, and other neat results. The audience included not just the usual collection of faculty and grad students, but also a bunch of the current undergrad quantum students as well.
The talk was pretty much a letter-perfect example of what a colloquium is supposed to be. It was accessible to a general audience, was genuinely educational, had appealing visuals, and contained enough intellectual "meat" to be satisfying for experts, including some not-yet published stuff. It would be nice if every speaker realized the difference between a colloquium and a seminar....
Monday, October 11, 2010
Buckyball celebration/symposium
In honor of the 25th anniversary of the discovery of C60 at Rice, the university is holding a symposium to celebrate. In addition to the surviving members of the discovery team (laureates Curl and Kroto; Prof. Heath, Dr. O'Brien), there are many big names in the business (Millie Dresselhaus, Marvin Cohen, Phaedon Avouris, Hongjie Dai). Andre Geim is going to skype in, apparently, since getting the Nobel Prize this past week has understandably scrambled his travel plans. Unfortunately I'm flying to Washington, DC later this morning, so I will miss most of the fun, but I'm sure it will be a very interesting and lively event.
Tuesday, October 05, 2010
2010 Physics Nobel for graphene
The 2010 Nobel Prize in Physics has been awarded to Andre Geim and Konstantin Novoselov for graphene. Congratulations to them! Graphene, the single-atomic-layer limit of graphite, has been a very hot topic in condensed matter physics since late 2004, and I've posted about it here and here. There is no question that graphene is a very interesting material, and the possibility of serious technological applications looms large, but as Joerg Haber points out, overhype is a real danger. The prize is somewhat unusual in that it came very fast on the scale of these things. I also find it interesting that only the Manchester group was given the prize, given the impact of the work going on in this area at other places at around the same time (for example, take a look at the first few talks in this session I put together at the 2005 APS March Meeting). I do hope that those in the British scientific funding establishment take note that future prizes and innovations like this are at severe risk if research and educational funding cuts continue.
Monday, October 04, 2010
"Definitively inaccurate": One more comment about NRC rankings
One last post before the Nobel in physics is announced tomorrow.... As many people in the academic blogosphere have reported, there are some serious issues with the NRC rankings of graduate programs. Some of these seem to be related to data entry, and others to nonuniform or overly simplistic interpretations of answers to survey questions. Let me give a couple of examples. I'm in the physics and astronomy department at Rice, and for several years I've helped oversee the interdisciplinary applied physics graduate program here (not a department - applied physics does not have faculty billets or its own courses, for example). I filled out the faculty NRC paperwork, and I was also in charge (with a colleague) of filling out the "department"-level NRC paperwork for the applied physics program. I know, with certainty, that some of the stats for the two programs are very, very similar, including the allocation of work space to graduate students and the approximate completion rates of the PhD program. However, while these seem to show up correctly in the applied physics NRC data, they are both bizarrely wrong (and very unfavorably so - for example, the PhD completion rate in the NRC data is lower than reality by at least a factor of two!) in the physics & astronomy departmental NRC data. Now, overall the department did reasonably well in the rankings, and if one looks particularly at just the research stuff per faculty member, physics and astronomy did quite well. However, this issue with student data really stinks, because that's what some sites geared toward prospective students emphasize. It's wrong, there's no fixing it, and it looks like it will be "definitively inaccurate" (to borrow a phrase from Douglas Adams) for at least a decade.
Wednesday, September 29, 2010
Reductionism, emergence, and Sean Carroll
In the last couple of weeks, Sean Carroll has made two separate posts (here and here) on his widely read Cosmic Variance blog at Discover magazine, in which he points out, in a celebratory tone, that we fully understand the laws of physics that govern the everyday world. In a reductionist sense, he's right, in that nonrelativistic quantum mechanics + electricity and magnetism (+ quantum electrodynamics and a little special relativity) are the basic rules underlying chemistry, biology, solid state physics, etc. This is not a particularly new observation. Fifteen years ago, when I was a grad student at Stanford, Bob Laughlin was making the same comments, but for a different reason: to point out that this reductionist picture is in many ways hollow. I think that Sean gets this, but the way he has addressed this topic, twice, really makes me wonder whether he believes it, since beneath all of the talk about how impressive it is that humanity has this much understanding, lurks the implication that all the rest of non-high energy physics (or non-cosmology) is somehow just detail work that isn't getting at profound, fundamental questions. The emergence of rich, complex, often genuinely "new" physics out of systems that obey comparatively simple underlying rules is the whole point of condensed matter these days. For example, the emergence, in 2d electronic systems in semiconductors, of low energy excitations that have fractional charge and obey non-Abelian statistics, is not just a detail - it's really wild stuff, and has profound connections to fundamental physics. So while Sean is right, and we should be proud as a species of how much we've learned, not everything deep comes out of reductionism, and some fraction of physicists need to stop acting like it does.
Grad school, rankings, and geniuses
At long last, the National Research Council has finally released their rankings of graduate programs, the first such ranking since 1993. Their methodology is extremely complicated, and the way they present the data is almost opaque. This is a side effect of an effort to address the traditional problem with rankings, the ridiculousness of trying to assign a single number to something as complex and multivariate as a graduate program. The NRC has gone out of their way to make it possible to compare programs on many issues, and that's generally a good thing, but at the same time it makes navigating the data painful. The best aid I've seen in this is this flash app by the Chronicle of Higher Education. It does a great job of showing, graphically, the range of rankings that is relevant for a particular program, and you can do side-by-side comparisons of multiple programs. As I had suspected, most programs have a fairly broad range of possible rankings, except those at the very top (e.g., Harvard's physics department is, according to the "S" rankings, which are those based on the metrics that faculty members themselves identified as important to them, somewhere between 1 and 3 in the country.). One thing to note: the "S" rankings probably mean more about department quality than the pure research "R" rankings, since the "R" rankings will naturally bias in favor of larger departments. The other thing that becomes obvious when playing with this app for a few minutes is that some departments had clear data entry problems in their NRC data. As an example, my own department appears to have "zero" interdisciplinary faculty, which is just wrong, and undoubtedly didn't help our ranking.
In other news, the MacArthur Foundation has released their 2010 list of Fellows, known colloquially as recipients of "Genius Grants". I'm only familiar with some of the ones that touch on physics, and the people involved are all very good and extremely creative, which is exactly the point, I guess! Congratulations, all. Now let the speculation begin on the Nobel Prizes, which are going to be announced next week.
Finally, I wanted to link to this great post by my friend Jennifer Rexford, who has intelligent advice for first-year graduate students.
Monday, September 20, 2010
Nanostructures as optical antennas
My student (with theorist collaborators) had a paper published online in Nature Nanotechnology yesterday, and this gives me an excuse to talk about using metal nanostructures as optical antennas. The short version: using metal electrodes separated by a sub-nanometer gap as a kind of antenna, we have been able to get local enhancement of the electromagnetic intensity by roughly a factor of a million (!), and we have been able to determine that enhancement experimentally via tunneling measurements.
As I've discussed previously, light can excite collective excitations (plasmons) of the electronic fluid in a metal. Because these plasmons involve displacing the electrons relative to the ions, they are associated with local electric fields at the metal surface. When the incident light is resonant with the natural frequency of these modes, the result can be local electromagnetic fields near the metal that can significantly exceed the fields from the incident light. These enhanced local fields can be useful for many things, from spectroscopy to nonlinear optics. One way to get particularly large field enhancements is to look at the region separating two very closely spaced plasmonic structures. For example, closely spaced metal nanoparticles have been used to enhance fields sufficiently in the interparticle gap to allow single-molecule Raman spectroscopy (see here and here).
A major challenge, however, has been to get an experimental measure of those local fields in such gaps. That is where tunneling comes in. In a tunnel junction, electrons are able to "tunnel" quantum mechanically from one electrode to the other. The resulting current as a function of voltage may be slightly nonlinear, meaning that (unlike in a simple resistor) the second derivative of current with respect to voltage (d^2I/dV^2) is non-zero. From a simple math argument, the presence of a nonlinearity like this means that an AC voltage applied across the junction gives rise to a DC current proportional to the nonlinearity, a process called "rectification". What we have done is turned this around. We use low frequency (kHz) electronic measurements to determine the nonlinearity. We then measure the component of the DC current due to light shining on the junction (for experts: we can do this with lock-in methods at the same time as measuring the nonlinearity). We can then use the measured nonlinearity and photocurrent to determine the optical-frequency voltage that must be driving the tunneling photocurrent. From the tunneling conductance, we can also estimate the distance scale over which tunneling takes place. Dividing the optical-frequency voltage by that distance gives us the optical-frequency electric field at the tunneling gap, which may be compared with the field from the incident light to get the enhancement.
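For the curious, here is a minimal sketch of that "simple math argument" (my notation, not necessarily that of the paper): Taylor-expand the current-voltage characteristic about the dc operating point and time-average.

```latex
% An optical-frequency voltage V_{\mathrm{opt}}\cos\omega t across a junction
% with nonlinear I(V), expanded about the dc bias V_0:
I(V_0 + V_{\mathrm{opt}}\cos\omega t) \approx I(V_0)
  + \frac{\mathrm{d}I}{\mathrm{d}V}\, V_{\mathrm{opt}}\cos\omega t
  + \frac{1}{2}\frac{\mathrm{d}^2 I}{\mathrm{d}V^2}\, V_{\mathrm{opt}}^2 \cos^2\omega t .
% Time-averaging (with <cos^2 \omega t> = 1/2) leaves a dc photocurrent
I_{\mathrm{photo}} \approx \frac{1}{4}\,\frac{\mathrm{d}^2 I}{\mathrm{d}V^2}\, V_{\mathrm{opt}}^2
\;\;\Longrightarrow\;\;
V_{\mathrm{opt}} \approx \sqrt{\frac{4\, I_{\mathrm{photo}}}{\mathrm{d}^2 I/\mathrm{d}V^2}} .
% With a tunneling distance d and incident field E_{\mathrm{inc}}, the field
% enhancement is (V_{\mathrm{opt}}/d)/E_{\mathrm{inc}}, and the intensity
% enhancement is its square.
```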
It's not at all obvious on the face of it that this should work. After all, the analysis relies on the idea that the tunneling nonlinearity measured at kHz frequencies is still valid at frequencies nearly 10^12 times higher. Experimentally, the data show that this does work, however, and our theorist colleagues are able to explain why.
When you think about it, it's pretty amazing. The radiation intensity in the little nanogap between our electrodes can be hundreds of thousands or millions of times higher than that from the incident laser. Wild stuff, and definitely food for thought.
Thursday, September 16, 2010
Interesting links - nonphysics, mostly.
Nothing as interesting as this happens around here (at least, not to my knowledge), and I'm kind of glad.
xkcd has once again done a far better job demonstrating some aspect of my existence than I ever could have myself.
Fascinating photography of nuclear weapons explosions here.
Tangentially related to nuclear weapons, I got a big kick out of Stephen Colbert's Dr. Strangelove tribute.
Monday, September 13, 2010
Gravity
There has been a good deal of talk lately about gravity. We're all taught early on in our science education about the remarkable insight of Isaac Newton, that the force that causes, e.g., apples to fall from trees is, in fact, the same force that keeps the moon in orbit about the earth (or rather about a common center of mass relatively close to the center of the earth). The Newtonian gravitational constant, G, is the least precisely known of all the fundamental constants, in part because gravity is a shockingly weak force and therefore difficult to measure. (As I demonstrated to my freshman students, gravity is so weak that even with the feeble muscles in my legs I can jump up in the air in defiance of the opposing force of the gravitational pull of the entire earth.) More frustrating than the difficulty in precision measurement of G is the fact that different research groups using different techniques come up with experimental estimates of G that differ by surprisingly large amounts. This paper (published last week in Phys. Rev. Lett.) is another example. The authors sweated over the details of their systematic uncertainties for two years before publishing this result, which disagrees with the "official" CODATA value for G by 10 sigma (!). This is a classic showcase for the art, elegance, and necessary attention to detail required in precision measurement physics.
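To put a number on "shockingly weak" (a standard comparison, not something from the paper in question): compare the electrostatic and gravitational attractions between a proton and an electron; the separation cancels out of the ratio.

```python
# Ratio of Coulomb to gravitational attraction between a proton and an electron.
G = 6.674e-11     # gravitational constant, N m^2 / kg^2
k_e = 8.988e9     # Coulomb constant, N m^2 / C^2
e = 1.602e-19     # elementary charge, C
m_p = 1.673e-27   # proton mass, kg
m_e = 9.109e-31   # electron mass, kg

ratio = (k_e * e**2) / (G * m_p * m_e)
print(f"F_Coulomb / F_gravity ~ {ratio:.0e}")   # ~2e39
```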
Also making many waves during 2010 is this paper by Erik Verlinde. The claim of this paper is that gravity is emergent, rather than a "real" force. It's been argued since Einstein published general relativity that gravity is different at a deep level from traditional forces. GR says that we should think of gravity as a deformation of spacetime due to the presence of stress/energy. Freely falling particles always travel on geodesics (locally straight lines), and those geodesics are determined by the distribution of mass and energy (including that due to spacetime deformation). In the appropriate limit, GR reduces to Newtonian gravity. Verlinde, striking out in a completely different direction, argues that one can start from very general considerations, and gravity emerges as an "entropic" force. An entropic force is an apparent force that results from the tendency of matter and energy to explore all available microscopic states. For example, a polymer will tend to ball up because there are many more microscopic states describing the polymer wadded up than extended. Pulling on the two ends of the polymer chain to straighten it out requires overcoming this entropic tendency, and the result is a tension force. Verlinde argues that gravity arises similarly. I need to re-read the paper - it's slippery in places, especially on what underlying background assumptions are made about time and space, and what really plays the role of temperature here. Still, it is intriguing food for thought, and it's elegant that he can get both something GR-like and something Newtonian to fall out of such an analysis.
Regardless of how you may feel about Verlinde's speculations and the difficulty of measuring G, at least you can laugh in shocked disbelief that these people are serious. (I should be careful making jokes. Knowing Rick Perry, they'll start pushing this in Texas public schools next year.)
Tuesday, September 07, 2010
Two for the price of one.
I had noticed (and it was also pointed out by a colleague) the essentially simultaneous publication of this paper and this paper (which appear to have been submitted within a week of each other as well). In both papers, the authors have created short-channel graphene-based transistors in a clever way. They take a conductive nanowire (doped GaN in the Nano Letters paper; CoSi in the Nature paper), coat it with thin aluminum oxide via atomic-layer deposition, and then lay it down on top of a piece of exfoliated graphene. Then they evaporate Pt on top of the device. On either side of the nanowire, the Pt lands on the graphene, making source and drain electrodes. The nanowire shadows part of the graphene (the channel), and then the nanowire itself acts as the gate. This is a nice, self-aligned process, and the resulting graphene devices appear to be very fast (the Nature paper has actual high frequency measurements). Looks like they managed to get two papers in good journals for the price of one technique advance.
Sunday, September 05, 2010
Arguing from authority? Hawking, you're supposed to be better than that.
In Saturday's Wall Street Journal, there was an article by Stephen Hawking and Leonard Mlodinow clearly designed as a naked promotion of their new book. In the article, they argue that modern physics removes the need for a divine being to have created the universe. Religious arguments aside (seriously, guys, is that particular argument even news anymore?), one thing in the article especially annoyed me. Toward the end, the authors state:
As recent advances in cosmology suggest, the laws of gravity and quantum theory allow universes to appear spontaneously from nothing. Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God to light the blue touch paper and set the universe going.
Our universe seems to be one of many, each with different laws.
You know what's wrong with this? It states, as if it is established fact, that we understand cosmology well enough to declare that universes spontaneously self-create. It states that the multiverse is a prediction of "many" theories, implying strongly that it's on firm ground. The problem is, this isn't science. It's not falsifiable, and in its present form it's not even close to being falsifiable in the foreseeable future. Seriously, name one PREdiction (as opposed to retrodiction) of these cosmological models, or more seriously, the multiverse/landscape idea, that is testable. Don't claim that our existence is such a test - the anthropic principle is weak sauce and is by no means evidence of the multiverse. Man, it annoys me when high profile theorists (it always seems to be theorists who do this) forget that physics is actually an experimental science that rests on predictive power.
Friday, September 03, 2010
This won't end well, because it's blindingly idiotic.
According to the Chronicle of Higher Education, my Texas A&M colleagues up the road in College Station now get the privilege of being evaluated based on their bottom-line "financial value" to the university. Take how much money the professor brings in (including some tuition dollars apportioned by the number of students taught), subtract their salary, and there you go. This raises problematic points that should be obvious to anyone with two brain cells to rub together. First, I guess it sucks to be in the humanities and social sciences - you almost certainly have negative value in this ranking. Congratulations, you leeches who take salary and don't bring in big research funding! Second, it firmly establishes that the service contributions of faculty to the university are worthless in this ranking scheme. Third, it establishes that the only measure of your educational contribution is how many students you teach - purely quantity, so if you teach large intro classes you're somehow valuable, but if you teach smaller upper-division courses, you're less valuable. Gee, that's not simplistic at all. Now, the article doesn't actually say how these rankings will be used, but I'm having a hard time imagining ways that this metric is a good idea.
Wednesday, September 01, 2010
Silicon oxide and all that.
It's been a busy week work-wise; hence the low rate of blogging. However, I would be remiss if I failed to talk about the science behind a paper (on which I am a coauthor) that was mentioned on the front page of the New York Times yesterday. A student, Jun Yao, co-advised by my colleagues Jim Tour and Lin Zhong, did a really elegant experiment that has gotten a lot of attention, and the science is pretty neat. Here's the deal. Lots of people have done experiments where they've seen what appears to be nonvolatile switching of the electrical resistance in various nanoscale systems (e.g., junctions in nanotubes and other nanomaterials). That is, what is observed is that, with the use of voltage pulses, the electrical resistance of a device may be programmed to be comparatively high or comparatively low, and that state is preserved for a looooong time. Long story short: sometimes this behavior has nothing in particular to do with the nanoscale system being studied, and really results from the properties of the underlying or nearby silicon oxide, which is generally treated as inert and boring. Well, as people in the Si industry can tell you at length, it turns out that silicon oxide isn't necessarily inert and boring. What Jun showed via some elegant cross-sectional transmission electron microscopy is that when big voltage pulses are applied across small distances, it is possible to modify the oxide, effectively doing electrochemistry, and turning some of the oxide back into Si nanocrystals. When those nanocrystals give a hopping path from one electrode to the other, the device is "on". When that path is broken, the device is "off". The resulting nanocrystals themselves are quite small, on the order of a few nm. Hence the excitement about possibly using this approach for very dense, nonvolatile memory. There are, of course, a great many engineering issues to be overcome (there's no need to tell me about that in the comments....), but it is definitely a pretty science result.
Tuesday, August 24, 2010
The wisdom of combining complementary techniques
In the September issue of Nature Materials, I have a News and Views piece about a really neat article by Sakanoue and Sirringhaus of the Cambridge University organic electronics group. My apologies to those without subscriptions - here's a brief summary:
Transport in organic semiconductors is generally poor when compared with that in inorganic semiconductors. Disorder and purity are major concerns, and electronic conduction (parametrized by the mobility of the charge carriers) very often is thermally activated, so that decreasing temperature leads to an exponential worsening of charge transport. This is in contrast to the situation in clean, nice materials like Si or GaAs, where lowering T leads to improving mobility, as scattering of carriers by thermal phonons is reduced. The Cambridge investigators have successfully made transistors from high-quality spin-cast films of TIPS-pentacene, a small-molecule organic semiconductor. These films actually do show improving conduction as T is reduced down to 140 K. At high source-drain electric fields and high carrier densities, transport becomes pretty temperature independent down to cryogenic temperatures.
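Schematically, the contrast I'm describing is between activated hopping and band-like transport; these are generic forms, not the specific fits in the paper:

```latex
% Thermally activated transport in a disordered organic film:
\mu(T) \sim \mu_0 \, e^{-E_A / k_B T},
% versus band transport in a clean inorganic semiconductor, where phonon
% scattering weakens as T drops and the mobility grows roughly as a power law:
\mu(T) \sim T^{-n}, \quad n > 0 .
```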
Most importantly, however, the Cambridge group has also done "charge modulation spectroscopy" - optical spectroscopy measurements on the films as well as on the molecules in solution. By combining the optical measurements with the transport experiments, they are able to make rather strong statements about how localized the charge carriers are. They can thus rule out exotic physics or voltage-driven metal-insulator transitions as the origin of the good conduction regime.
This work shows the power of combining complementary techniques. Relying only on transport, we had made similar arguments here. However, the addition of the optical data greatly enhances the scientific arguments - what we had argued as "consistent" is totally nailed down here, thanks to the additional information from the spectra.
Thursday, August 19, 2010
Deep thoughts....
Pondering introductory mechanics has made me think again about some foundational issues that I've wondered about in the past. Mach's Principle is the idea, put forward by Ernst Mach, that the inertial properties of matter depend somehow on the distribution of matter at far away points in the universe. The classic thought experiment trotted out to highlight this idea is "Newton's bucket". Imagine a bucket filled with water. Start rotating the bucket (relative to the "fixed stars") about its central axis of symmetry. After transients damp away due to the viscosity of the water, the water's surface will have assumed a parabolic shape. In a (non-inertial) frame of reference that co-rotates with the bucket, an observer would say that the surface of the liquid is always locally normal to the vector sum of the gravitational force (which wants to pull the liquid down relative to the bucket) and the (fictitious, and present because we're working in a rotating frame) centrifugal force (which is directed radially outward from the rotation axis). [In an inertial frame of reference, the water has arranged itself so that the gradient in hydrostatic forces provides the centripetal force needed to keep the water rotating about the axis at a constant radius.] This rotating bucket business, by the way, is a great way to make parabolic mirrors for telescopes.
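For completeness, that force-balance argument fixes the surface shape (a standard result, sketched here in the co-rotating frame):

```latex
% The surface is everywhere normal to the effective gravity, the sum of
% -g\hat{z} and the centrifugal term \omega^2 r \,\hat{r}, so its slope obeys
\frac{\mathrm{d}z}{\mathrm{d}r} = \frac{\omega^2 r}{g}
\;\;\Longrightarrow\;\;
z(r) = z_0 + \frac{\omega^2 r^2}{2g},
% a paraboloid of revolution, which is exactly why a spinning liquid
% makes a good telescope mirror blank.
```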
Mach was worried about what rotation really means here. What if there were no "fixed stars"? What if there were no other matter in the universe than the bucket and liquid? Moreover, what if the bucket were "still", and we rotated the whole rest of the universe about the bucket? Would that somehow pull the liquid into the parabolic shape? This kind of thinking has been difficult to discuss mathematically, but it was on Einstein's mind when he was coming up with general relativity. What does acceleration mean in an otherwise empty universe? There seems to be reason to think that what we see as inertial effects (e.g., the appearance of fictitious forces in rotating reference frames) has some deep connection with the distribution of matter in the far away universe. This is very weird, because a central tenet of modern physics is that physics is local (except in certain very well defined quantum mechanical problems).
The thing that's been gnawing away at the back of my mind when thinking about this is the following. There is a big overall dipole moment in the cosmic microwave background. That means, roughly speaking, that we are moving relative to the center-of-mass frame of reference of the matter of the universe. We could imagine boosting our velocity just so as to null out the dipole contribution to the CMB; then we'd be in an inertial frame co-moving with the overall mass distribution of the universe. If inertial properties are tied somehow to the overall mass distribution in the universe, then shouldn't the center-of-mass frame of reference of the universe somehow be special? Some high energy theorist may tell me this is all trivial, but I'd like to have that conversation. Ahh well. It's fun that basic undergrad physics can still raise profound (at least to me) issues.
Friday, August 13, 2010
Memories and The Mechanical Universe
As I get ready to teach honors mechanics to first-year undergrads, I have been scouting the web for various resources. I ran across the complete series run of The Mechanical Universe (streaming for residents of the US and Canada), a great show that I remember watching on PBS occasionally when I was in high school. It's based on first-year physics at Caltech, and each episode opens and closes with David Goodstein lecturing to a class in an auditorium. It's very well done, and the computer animation was exceptionally good and informative, considering it was produced in the mid-1980s. Thanks, Annenberg Foundation, for making this show available! (Funny sequel of sorts: I actually had the pleasure of meeting Prof. Goodstein in 2003, and for some irrational reason I was surprised that he didn't look exactly the same as he had in 1984....)
Wednesday, August 11, 2010
What I missed, plus book recommendations
I'm finally back from travel, just in time to immerse myself in prep for the upcoming semester. It's hard to believe that classes start in 10 days.
While I was away from blogging, it looks like I missed some fun posts. For example, the Japanese group that made the first major discovery of the iron pnictide superconductors has found that sake (or something in sake) boosts superconductivity in a related compound. Chad Orzel did a pretty nice job posting about superconductivity as well, though I might do a different post later about this. He also had a post prompted by a reader demanding to know why all statistical physics courses are lame. (The answer is, of course, that the reader had never taken stat mech from me :-). Ahem. Perhaps not.) Along related lines, Charles Day at Physics Today has started a blog, which I will add to the blogroll at right. Glad to see that he leaps into discussing why he likes condensed matter physics. I also missed the excitement about the proposed proof that P != NP. The discussion online about the would-be proof is very impressive - it's always nice to see Fields medalists blogging, especially when they write as well as Terence Tao.
One final remark for now. I strongly recommend reading The Alchemy of Air and The Demon Under the Microscope. These are terrific, interesting books, and they really do a great job of making science (in this case chemistry) as exciting as any novel. Many thanks to Paul Chirik for recommending them to me.
Saturday, July 31, 2010
A cool application and more travel
Apologies for the long break between posts. It's been an incredibly hectic summer, and I'm about to go on a last big trip before the school year starts (and I get to teach honors intro mechanics to ~ 90 frosh - should be exciting, at least).
Before I go, I wanted to point out a very cool application of micromachining and computing power. There are many consumer electronic devices now that contain within them a little 3-axis accelerometer made by micromachining techniques, like this one. The basic gadget consists of a micromachined "test mass" (typically a block of Si) suspended on (silicon) springs. When the whole device is accelerated, the test mass "lags behind" because of its inertia, just as you get pushed back into the seat of your car when the car accelerates forward. Through (often) capacitive sensing, the displacement of the test mass can be transduced into a voltage that the chip then outputs. If the displacement can be detected along three axes, voila, you have a 3-axis accelerometer. This is the widget that tells the Nintendo Wii how you've been swinging the controller, and it tells iPhones and other similar toys how to orient their displays. With added sophistication, it's also possible to make micromachined gyroscopes. They aren't true gyros that spin. Rather, they're micromachined resonators (like tuning forks of particular shapes), and rotation leads to Coriolis forces that twist the resonator in a way that can be detected. (For Wii aficionados, that is how the "Wii Motion Plus" works.) Then you can get angular accelerations, too.
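To make the "test mass on springs" picture concrete, here is a toy one-axis version; every number below is invented for illustration (not taken from any real part's datasheet), but the physics is just Hooke's law plus a parallel-plate capacitor:

```python
# Toy model of a one-axis MEMS accelerometer: a proof mass on a spring,
# read out capacitively. All parameter values are made up but
# order-of-magnitude plausible, for illustration only.

m = 1e-9        # proof mass, kg (microgram-scale silicon block)
k = 1.0         # effective spring constant, N/m
g = 9.8         # one "g" of acceleration, m/s^2

# Quasi-static response: the mass lags the package by x = m*a/k.
a = 2 * g                       # imagine shaking the device at 2 g
x = m * a / k                   # displacement of the proof mass
print(f"proof-mass displacement: {x * 1e9:.1f} nm")

# Capacitive sensing: a parallel-plate capacitor whose gap shrinks by x.
eps0 = 8.854e-12                # vacuum permittivity, F/m
A = (100e-6) ** 2               # plate area, 100 um x 100 um
d0 = 2e-6                       # nominal gap, m
dC = eps0 * A * (1 / (d0 - x) - 1 / d0)
print(f"capacitance change: {dC * 1e15:.3f} fF")
```

Sensing sub-femtofarad capacitance changes is the hard part, which is why these chips integrate the readout electronics right next to the mechanical structure.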
What is the point of this discussion? Well, some people at Microsoft Research had a great insight. You can put a sensor like this on a digital camera. If the acceleration data is logged when a picture is snapped, then it is possible to retroactively unblur photos (at least, pictures that were blurry because the camera was moving). This is the slickest thing I've seen in a while!
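The underlying idea, in cartoon form (this is emphatically not the actual Microsoft Research algorithm, just the flavor of motion-kernel deconvolution, shown in one dimension with a made-up uniform smear):

```python
# If you know how the camera moved during the exposure, you know the blur
# kernel, and you can attempt to deconvolve it back out. A 1D toy example
# using Wiener deconvolution:
import numpy as np

rng = np.random.default_rng(0)
sharp = rng.random(256)                      # stand-in for a row of pixels

# Blur kernel "inferred" from the logged motion: here, a uniform 9-pixel smear.
kernel = np.zeros(256)
kernel[:9] = 1.0 / 9.0

blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(kernel)))

# Wiener deconvolution, with a small regularizer to tame noise amplification.
K = np.fft.fft(kernel)
reg = 1e-3
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(K) /
                               (np.abs(K) ** 2 + reg)))
print("rms error after restoration:",
      np.sqrt(np.mean((restored - sharp) ** 2)))
```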
Thursday, July 22, 2010
Why there has been no Carl Sagan or Brian Greene of condensed matter physics
It's impossible to be a condensed matter physicist who cares about outreach and scientific literacy, and not think about why condensed matter physics has taken such a back seat, comparatively, in the popularization of science. It is easy to argue that condensed matter physics has had more direct impact on the daily lives of people living in modern, technological societies than any other branch of physics (we could get into an argument about the relative impacts of the transistor and the laser, but I think the CM folks would win). So, how come there are specials and miniseries on PBS and Discovery Channel about string theory, the LHC, cosmology, and astrophysics with considerable regularity, people like Stephen Hawking, Brian Greene and Neil DeGrasse Tyson show up on The Daily Show, and the closest condensed matter gets to the public consciousness is a BBC special from several years ago about the Schon scandal? Is it just that there is no charismatic, telegenic champion of the cause? I think it's more than that.
First, there is the issue of profundity. High energy physics makes an obvious play toward people's desire for answers to Big Questions. What is mass? What is everything made out of? How many dimensions are there? How did the Universe begin, and how will it end? Likewise, astrophysics talks about the history of the entire Universe, the birth and death of stars, the origin of galaxies, and literally heaven-shaking events like gamma ray bursts. Condensed matter physics has a much tougher sell. In some ways, CM is the physics of the everyday - it's the reason water is wet, metals are shiny, diamond is transparent and sparkly, and the stuff in sand can be used to make quasimagical boxes that let me write text that can be read all over the world. Moreover, CM does look at profound issues (How does quantum mechanics cross over into apparently classical behavior? How do large numbers of particles interacting via simple rules give rise to incredibly rich and sometimes amazingly precise emergent properties?), just ones that are not easy to state in a five word phrase.
Second, there is the problem of accessibility. CM physics is in some sense an amalgam of quantum mechanics and statistical mechanics. People do not have everyday experience with either (at least, the vast majority don't realize that they do). It's very challenging to explain some of the very nonintuitive concepts that crop up in condensed matter to laypeople without either gross oversimplification or distortion. There can be a lot of overhead that must be covered before it's clear why some CM questions really are interesting. An awful lot of what CM studies, atoms included, literally cannot be seen by the naked eye. Of course, the same can be said for quarks or colliding neutron stars - this is not an insurmountable problem.
Third, there is perceived relevance. This is complementary to profundity. People are naturally interested in Big Questions (the origins of the stars) even if the answers don't affect their daily lives. People are also naturally interested in Relevant Questions - things that affect them directly. For example, while I'm not that into meteorology, I do care quite a bit about whether Tropical Storm Bonnie is going to visit Houston next week. Somehow, people just don't perceive CM physics as important to their daily existence - it's so ubiquitous that it's invisible.
These issues greatly constrain any attempt to popularize CM physics....
Tuesday, July 20, 2010
Wow - look what I missed!
I did some travel + have a busy period at work, and what happens? Scienceblogs implodes, and Chad Orzel laments something I've worried about for a long time: the difficulty of explaining the importance (and basic coolness) of condensed matter physics to a general audience. As for the former, serves 'em right for not inviting me to participate -- kidding! There are enough talented people involved that they'll be fine, and as Dave Bacon points out in his linked post above, mixing up new networks of people interested in communicating science is probably a net good thing. I do think it's a shame, though, that some interesting blogs have seemed to fade away (Incoherent Ponderer, Angry Physicist, you are missed.). Regarding the second topic, I do want to point out a previous post I made about topological insulators (the strawman topic of Chad's post), and once I dig out from under work, I'll write more about why condensed matter is particularly difficult to popularize, and thoughts on how to get around those inherent challenges.
Thursday, July 08, 2010
Symmetries and level-appropriate teaching
This fall I'm going to be teaching honors introductory mechanics to incoming undergraduates - basically the class that would-be physics majors take. Typically when we first teach students mechanics, we start from the point of view of forces and Newton's laws, which certainly parallels the historical development of the subject and allows students to build some physical intuition. Then, in a later class, we point out that the force-based approach to deriving the equations of motion is not really the modern way physicists think about things. In the more advanced course, students are introduced to Lagrangians and Hamiltonians - basically the Action Principle, in which equations of motion are found via the methods of variational calculus. The Hamiltonian mechanics approach (with action-angle variables) was the path pursued when developing quantum mechanics; and the Lagrangian approach generalizes very elegantly to field theories. Indeed, one can make the very pretty argument that the Action Principle method does such a good job giving the classical equations of motion because it's what results when you start from the path integral formulation of quantum mechanics and take the classical limit.
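As a one-line reminder of why the two formulations agree, here's the standard textbook sketch for a single particle in a potential V:

```latex
S[x] = \int L\, dt, \qquad L = \tfrac{1}{2} m \dot{x}^{2} - V(x).
% Demanding \delta S = 0 gives the Euler-Lagrange equation
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
\;\;\Longrightarrow\;\;
m\ddot{x} = -\frac{\partial V}{\partial x},
% which is just Newton's second law for a conservative force.
```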
A major insight presented in the upper division course is Noether's Theorem. In a nutshell, the idea is that symmetries of the action (which is a time integral of the Lagrangian) imply conservation laws. The most famous examples are: (1) Time-translation invariance (the idea that the laws of physics governing the Lagrangian do not change if we shift all of our time parameters by some amount) implies energy conservation. (2) Spatial translation invariance (the laws of physics do not change if we shift our apparatus two feet to the left) implies conservation of momentum. (3) Rotational invariance (the laws of physics are isotropic in direction) implies conservation of angular momentum. These classical physics results are deep and profound, and they have elegant connections to operators in quantum mechanics.
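For concreteness, example (2) fits in one line (a sketch using the Euler-Lagrange equations, with coordinates x_i and canonical momenta p_i = ∂L/∂ẋ_i):

```latex
% If L(x_i + a, \dot{x}_i) = L(x_i, \dot{x}_i) for any uniform shift a of all
% coordinates, then \sum_i \partial L/\partial x_i = 0, and along the actual motion
0 = \sum_i \frac{\partial L}{\partial x_i}
  = \sum_i \frac{d}{dt}\frac{\partial L}{\partial \dot{x}_i}
  = \frac{d}{dt}\sum_i p_i ,
% so the total momentum \sum_i p_i is conserved.
```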
So, here's a question for you physics education gurus out there. Does anyone know a way of showing (2) or (3) above from a Newton's law direction, as opposed to Noether's theorem and Lagrangians? I plan to point out the connection between symmetry and conservation laws in passing regardless, but I was wondering if anyone out there had come up with a clever argument about this. I could comb back issues of AJP, but asking my readers may be easier.
Science and communication
I've tended to stay away lately from the arguments about scientists-as-communicators that seem to flare up periodically. This recent editorial by Chris Mooney, about how scientists who actively listen to the general public do a better job of communicating and affecting policy, was simultaneously informative and yet blindingly obvious in some ways. (Here's a shock: making it clear to an audience that you're listening to their concerns and considering them seriously gets better results than talking down to them or ignoring them dismissively.) Chad Orzel followed up with a very well-written (as usual) post about scientists and communication skills that is, like Mooney's, really common sense in the end. (Here's another shock: not everyone is Carl Sagan or Neil DeGrasse Tyson, and sometimes our scientific and academic institutions do not value public communication as much as they do utter dedication to scientific research.)
Many people in the general public do have some baseline interest in science and engineering issues, even if they don't label them as such. Lots of people watch Mythbusters. Lots of people read about nutritional information or medical research quasiobsessively. Many people do care about space, and climate, and the environment, and energy, and electronics, and so forth, even if those concerns are not the top of their list all the time. There is a thirst for information, and this is all good for society. I do want to point out one additional issue that seems to get neglected to some degree in this discussion, however. There are people out there who either don't know what they're talking about (the MD who somehow has a column on the Huffington Post who periodically spouts off utter pseudoscientific nonsense), or actively are pushing misleading or inaccurate information (members of the TX Board of Education who grossly mischaracterize the nature of science). Scientists can do as much as possible to "market" ourselves and communicate our enthusiasm and willingness to have an honest and open dialog about scientific issues. However, when anti- or pseudo-science can command at least as big a bully pulpit, and when education and time make it difficult for the average person to discriminate between gold and dross, it's an uphill struggle. Add to this the mainstream media's love of controversy ("Some say that the earth goes around the sun, but others disagree. Let's look at both sides of this issue!"), and the situation can get downright depressing.
Edit: I realize I left out two other confounding factors: (1) Scientists who end up distorting actual science beyond recognition in a misguided attempt at popularization (Michio Kaku is an example); and (2) Scientists who are so aggressively arrogant and obnoxious that they only hurt their own cause.