Sunday, August 14, 2011

Topological insulator question

I have a question, and I'm hoping one of my reader experts might be able to answer it for me.  Let me set the stage.  One reason 3d topological insulators are a hot topic these days is the idea that they have special 2d states that live at their surfaces.  These surface states are supposed to be "topologically protected" - in lay terms, this means that they are very robust; something deep about their character means that true back-scattering is forbidden.  What this means is, if an electron is in such a state traveling to the right, it is forbidden by symmetry for simple disorder (like a missing atom in the lattice) to scatter the electron into a state traveling to the left.  Now, these surface states are also supposed to have some unusual properties when particle positions are swapped around.  These unconventional statistics are supposed to be of great potential use for quantum computation.  Of course, to do any experiments that are sensitive to these statistics, one needs to do quantum interference measurements using these states.   The lore goes that since the states are topologically protected and therefore robust, this should be not too bad.

Here's my question.  While topological protection suppresses 180 degree backscattering, it does not suppress (as far as I can tell) small angle scattering, and in the case of quantum decoherence, it's the small angle scattering that actually dominates.  It looks to me like the coherence of these surface states shouldn't necessarily be any better than that in conventional materials.  Am I wrong about this?  If so, how?  I've now seen multiple papers in the literature (here, here, and here, for example) that show weak antilocalization physics at work in such materials.  In the last one in particular, it looks like the coherence lengths in these systems (a few hundred nanometers at 1 K) are not even as good as what one would see in a conventional metal film (e.g., high purity Ag or Au) at the same temperatures.  That doesn't seem too protected or robust to me....  I know that the situation is likely to be much more exciting if superconductivity is induced in these systems.  Are the normal state coherence properties just not that important?
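
For context, the coherence lengths in studies like those are typically extracted by fitting the low-field magnetoconductance to the standard Hikami-Larkin-Nagaoka (HLN) weak antilocalization form (the expression below is the generic 2d result, not anything specific to those papers; sign conventions vary):

```latex
% HLN weak antilocalization correction to the 2d conductivity; psi is
% the digamma function, and fitting yields the dephasing length l_phi.
% alpha ~ -1/2 is expected for a single strong spin-orbit channel.
\begin{equation}
  \Delta\sigma(B) = \alpha\,\frac{e^2}{2\pi^2\hbar}
  \left[\psi\!\left(\frac{1}{2}+\frac{B_\phi}{B}\right)
        - \ln\!\left(\frac{B_\phi}{B}\right)\right],
  \qquad
  B_\phi = \frac{\hbar}{4e\ell_\phi^2}.
\end{equation}
```

The key point for the question above: nothing in this expression knows about topology - l_phi is set by ordinary dephasing processes, which is exactly why the extracted coherence lengths look so conventional.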

Tuesday, August 09, 2011

DOE BES CMX PI mtg

Went for the cryptic headline.  I'm off for a Department of Energy Basic Energy Sciences Condensed Matter Experiment principal investigator meeting (the first of its kind, I believe) in the DC area.  This should be really interesting - a chance to get some perspective on the variety of condensed matter and materials physics being done out there.  This looks like it will be much more useful than a dog-and-pony show that I went to for one part of another agency a few years ago....

Monday, August 08, 2011

Evolution of blogger spam

Over the last couple of weeks, new forms of spam comments have been appearing on blogger. One type takes a sentence or two from the post itself, and feeds them through a parser reminiscent of ELIZA, to produce a vaguely coherent statement in a comment. Another type that I've noticed grabs a sentence or two from an article that was linked in the original post. A third type combines these two, taking a sentence from a linked article, and chewing on it with the ELIZA-like parser. A few more years of this, and we'll have the spontaneous evolutionary development of generalized natural-language artificial intelligence from blogger spam....

Friday, August 05, 2011

Summer colloquium

Every year at Rice in early August, the Rice Quantum Institute (old website) (shorthand: people who care about interdisciplinary science and engineering involving hbar) has its annual Summer Colloquium. Today is the twenty-fifth such event. It's a day-long miniconference, featuring oral presentations by grad students, and posters by both grad students and undergrad researchers from a couple of REU programs (this year, the RQI REU and the NanoJapan REU). It's a full day, with many talks. It's a friendly way for students to get more presentation experience, and a good way for faculty to learn what their colleagues are doing. I'd be curious to know whether other institutions have similar events - my impression is that this one is comparatively rare, particularly in its very broad interdisciplinary nature (e.g., talks on spectroscopy for pollution monitoring, topological insulators, plasmons, carbon nanotube composites, batteries) and its combination of undergrads and grad students.

Thursday, July 28, 2011

Plutonium: a case study in why CM physics is rich

At the heart of condensed matter physics are two key concepts: the emergence of rich phenomena (including spontaneously occurring order - structural, magnetic, or otherwise) in the many-particle limit; and the critical role played by quantum mechanics in describing the many-body states of the system. I've tried to explain this before to lay persons by pointing out that while complicated electronic structure techniques can do an adequate job of describing the electronic and vibrational properties of a single water molecule at zero temperature, we still have a difficult time predicting truly emergent properties, such as the phase diagram of liquid, solid, and vapor water, or the viscosity or surface tension of liquid water.

Plutonium is an even more striking example, given that we cannot understand its properties from first principles even though there is only a single type of atom to worry about. The thermodynamic phase diagram of plutonium is very complicated, with seven different crystal structures known, depending on temperature and pressure. Moreover, as a resident of the actinide row of the periodic table, Pu has unpaired 5f electrons, though it is not magnetically ordered. At the same time, Pu is very heavy, with 94 total electrons, so that relativistic spin-orbit effects can't be neglected in trying to understand its structure. The most sophisticated electronic structure techniques out there can't handle this combination of circumstances. It's rather humbling that more than 70 years after its discovery/synthesis, we still can't understand this material, despite the many thousands of person-hours spent on it via various nations' nuclear weapons programs.

Sunday, July 24, 2011

Einstein, thermodynamics, and elegance

Recently, in the course of other writing I've been doing, I again came to the topic of what are called Einstein A and B coefficients, and it struck me again that this has to be one of the most elegant, clever physics arguments ever made.  It's also conceptually simple enough that I think it can be explained to nonexperts, so I'm going to give it a shot.

Ninety-four years ago, one of the most shocking ideas in physics was the concept of the spontaneous, apparently random, decay of an atomic system.  Radioactive decay is one example, but even light emission from an atom in an excited state will serve.  Take ten hydrogen atoms, all in their first electronically excited state (electron kicked up into a 2p orbital from the 1s orbital).  These will decay back into the 1s ground state (spitting out a photon) at some average rate, but each one will decay independently of the others, and most likely at a different moment in time.  To people brought up in the Newtonian clockwork universe, this was shocking.  How could truly identical atoms have individually differing emission times?  Where does the randomness come from, and can we ever hope to calculate the rate of spontaneous emission?
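
To make that randomness concrete, here is a toy simulation (entirely my own illustration; the 2p -> 1s rate is the standard textbook value, corresponding to a lifetime of about 1.6 ns):

```python
import random

# Ten identical hydrogen atoms in the 2p state, each decaying
# independently at the same rate A. The physics is identical, yet
# every run yields ten different decay times.
A = 6.3e8  # spontaneous emission rate for 2p -> 1s, in 1/s

decay_times_ns = sorted(random.expovariate(A) * 1e9 for _ in range(10))
print([round(t, 2) for t in decay_times_ns])  # decay times in nanoseconds
```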

Around this time (1917), Einstein made a typically brilliant argument:  While we do not yet know [in 1917] how to calculate the rate at which the atoms transition from the ground state "a" to the excited state "b" when we shine light on them (the absorption rate), we can reason that the rate of atoms going from a to b should be proportional to the number of atoms in the ground state (N_a) and the energy density of the light available at the right frequency (u(f)).  That is, the rate of transitions "up" = B_ab N_a u(f), where B_ab is some number that can at least be measured in experiments.  [It turns out that people figured out how to calculate B using perturbation theory in quantum mechanics about ten years later.]  Einstein also figured that there should be an inverse process (stimulated emission) that causes transitions downward from b to a, with a rate = B_ba N_b u(f).  However, there is also the spontaneous emission rate = A_ba N_b, where he introduced the A coefficient.

Here is the brilliance.  Einstein considered the case of thermal equilibrium between atoms and radiation in some cavity.  In steady state, the rate of transitions from a to b must equal the rate of transitions from b to a - no atoms are piling up in either the ground or the excited state.  Moreover, from thermodynamics, in thermal equilibrium the ratio of N_b to N_a should just be a Boltzmann factor, exp(-E_ab/k_B T), where E_ab is the energy difference between the two states, k_B is Boltzmann's constant, and T is the temperature.  From this, Einstein showed that the two Bs are equal, solved for the unknown A in terms of B (which can be measured and nowadays calculated), and showed that the energy density of the radiation, u(f,T), must be Planck's blackbody formula.
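
For the algebraically inclined, here is a minimal sketch of that bookkeeping, in modern notation (my own rendering, with E_ab = hf):

```latex
% Steady state: rate of a -> b transitions = rate of b -> a transitions,
% and in equilibrium the populations obey Boltzmann statistics:
\begin{equation}
  B_{ab} N_a \, u(f) = A_{ba} N_b + B_{ba} N_b \, u(f),
  \qquad
  \frac{N_b}{N_a} = e^{-hf/k_B T}.
\end{equation}
% Solving for the energy density of the radiation:
\begin{equation}
  u(f,T) = \frac{A_{ba}}{B_{ab}\, e^{hf/k_B T} - B_{ba}}.
\end{equation}
% As T -> infinity, u(f,T) must diverge, which forces B_{ab} = B_{ba} = B.
% What remains,
\begin{equation}
  u(f,T) = \frac{A_{ba}/B}{e^{hf/k_B T} - 1},
\end{equation}
% is exactly Planck's blackbody formula, provided A_{ba}/B = 8\pi h f^3/c^3.
```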

My feeble writing here doesn't do this justice.  The point is, from basic thermodynamic reasoning, Einstein made it possible to derive an expression for the spontaneous emission rate of atoms, many years in advance of the theory (quantum electrodynamics) that allows one to calculate it directly.  This is what people mean by the elegance of physics - in a few pages, from proper reasoning on fundamental grounds, Einstein was able to deduce relationships that had to exist between different physical parameters; and these parameters could be measured and tested experimentally.  For more on this, here is a page at MIT that links to a great Physics Today article about the topic, and an English translation of Einstein's 1917 paper.  

Thursday, July 21, 2011

Slackers, coasters, and sherpas, oh my.

This is mostly for my American readers - be forewarned.

I wrote last year about a plan put forward by Rick O'Donnell, a controversial "consultant" hired by the state of Texas (hint: Gov. Rick Perry, apparent 2012 presidential hopeful, wanted this guy) to study the way public universities work in Texas. Specifically, O'Donnell came from a think tank with a very firm preconceived notion about higher education: faculty are overpaid slackers who are ripping off students, and research has no value in the educational environment. O'Donnell has written a report (pdf) about this topic, and he's shocked, shocked to find that he was absolutely right. By his metrics of number of students taught and research dollars brought in, he grouped faculty at UT and Texas A&M into "Dodgers, Coasters, Sherpas, Pioneers, and Stars". Pioneers are the people who bring in big grants and buy out of teaching. Stars are the people who bring in grants and teach large lecture classes. Sherpas are mostly instructors (he doesn't seem to differentiate between instructors and faculty) who lecture to large classes but don't bring in grants. Dodgers teach small classes and don't bring in grant money. Coasters teach small classes and bring in some grant money.

This is the exact incarnation of what I warned about in comments on my old post. This analysis basically declares that all social science and humanities faculty that teach upper division classes are worthless leeches (small classes, no grants) sponging off the university. People in the sciences and engineering who teach upper level classes aren't any better, unless they're bringing in multiple large research grants. Oh, and apparently the only metric for research and scholarship is money.

Nice. Perry, by the way, also appointed Barbara Cargill to run the state board of education. She's a biologist who wants evolution's perceived weaknesses to be emphasized in public schools, and she also was upset because the school board only has "six true conservative Christians" as members. I guess Jews, Muslims, Buddhists, Hindus, and atheists need not apply.  Update:  It looks like Texas has dodged creationism for another couple of years.  Whew.

Wednesday, July 20, 2011

What is so hard about understanding high temperature superconductivity?

As ZZ has pointed out, Nature is running a feature article on the history of high temperature superconductivity over the last 25 years. I remember blogging about this topic five years ago when Nature Physics ran an excellent special issue on the subject. At the time, I wrote a brief summary of the field, and I've touched on this topic a few times in the intervening years. Over that time, it's pretty clear that the most important event was the discovery of the iron-based high temperature superconductors. It showed that there are additional whole families of high temperature superconducting materials that are not all copper oxides.

Now is a reasonable time to ask again, what is so hard about this problem? Why don't we have a general theory of high temperature superconductivity?  Here are my opinions, and I'd be happy for more from the readers.
  • First, be patient.  Low-Tc superconductivity was discovered in 1911, and we didn't have a decent theory until 1957.  By that metric, we shouldn't start getting annoyed until 2032.  I'm not just being flippant here.  The high-Tc materials are generally structurally complicated (with a few exceptions), with large unit cells and lots of disorder associated with chemical doping.  This is very different from the situation in, e.g., lead or niobium.
  • Electron-electron interactions seem to be very important in describing the normal state of these materials.  In the low-Tc superconductors, we really can get quite far by understanding the normal-state starting point.  Aluminum is a classic metal, and you can do a pretty good job getting quantitative accuracy on its properties from the theory side even in single-particle, non-interacting treatments (basic band theory).  In contrast, the normal states of the high-Tc materials are tricky.  Heck, the copper oxide parent compound is a Mott insulator - a system that single-particle band structure tells you should be a metal, but which is in fact insulating because of the electron-electron repulsion!  (A minimal cartoon of this is sketched just after this list.)
  • Spin seems to be important, too.   In the low-Tc systems, spin is unimportant in the normal state, and the electrons pair up so that each electron is paired with one of opposite spin, so that the net spin of the pair is zero, but that's about it.  In high-Tc systems, on the other hand, very often the normal state involves magnetic order of some sort, and spin-spin interactions may well be important.
  • Sample quality has been a persistent challenge (particularly in the early days).
  • The analytical techniques that exist tend to be indirect or invasive, at least compared to the desired thought experiments.  This is a persistent challenge in condensed matter physics.  You can't just go and yank on a particular electron to see what else moves, in an effort to unravel the "glue" that holds pairs together (though the photoemission community might disagree).  While the order parameter (describing the superconducting state) may vary microscopically in magnitude, sign, and phase, you can't just order up a gadget to measure, e.g., phase as a function of position within a sample.  Instead, experimentalists are forced to be more baroque and more clever.
  • Computational methods are good, but not that good.  Exact solutions of systems of large numbers of interacting electrons remain elusive and computationally extremely expensive.  Properly dealing with strong electronic correlations, finite temperature, etc. are all challenges.
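
As promised in the list above, here is the minimal cartoon of Mott physics - the single-band Hubbard model (a generic illustration, not a serious model of the cuprates):

```latex
% Electrons hop between neighboring lattice sites with amplitude t and
% pay an energy cost U whenever two electrons (of opposite spin) share
% a site:
\begin{equation}
  H = -t\sum_{\langle ij\rangle,\sigma}
      \left(c^\dagger_{i\sigma} c_{j\sigma} + \mathrm{h.c.}\right)
      + U\sum_i n_{i\uparrow} n_{i\downarrow}.
\end{equation}
% At one electron per site with U = 0, the band is half filled, and
% band theory predicts a metal. For U >> t, double occupancy is
% prohibitively expensive, each electron locks onto its own site, and
% the system insulates purely because of repulsion: a Mott insulator.
```
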
Still, it's a beguiling problem, and now is an exciting time - because of the iron compounds, there are probably more people working on novel superconductors than at any time since the heady days of the late '80s, and they're working with the benefit of all that experience and hindsight.  Maybe I won't have to write something like this for the 30th high-Tc anniversary in 2016....

Monday, July 18, 2011

Updated look.

I finally bit the bullet and updated the look of the blog.  I'm still keeping it ad-free, though.

Sunday, July 17, 2011

google+

I have a nagging feeling that google+ could somehow be used to significantly increase readership of my blog, if only I were appropriately savvy.  Anyone have any suggestions or thoughts on this?  I don't crave the attention per se, but I'd be fibbing if I said I wasn't jealous of the readership numbers of the folks who blog at, e.g., scienceblogs, discovermagazine.com, or scientificamerican.com.  Larger readership would undoubtedly motivate more writing, too, though that's not necessarily great for my time management....

Saturday, July 16, 2011

It's all at the interface. Again.

Over the last decade, there has been a great deal of exciting work in making electronically interesting systems at atomically sharp interfaces between different oxide materials (oxide heterostructures). Analogous efforts at semiconductor-dielectric interfaces have given us the conventional field-effect transistor, something like 10^9 of which are being used to render this page for you. Likewise, heterointerfaces in compound semiconductor systems (especially the technologically relevant III-V materials like GaAs) have given us two Nobel Prizes and a great deal of quantum electronic fun. Oxides are much trickier beasts from the materials science side, making growth and interfacial control a major challenge. Moreover, with respect to basic science, transition metal oxides can be incredibly rich systems, because in many of them electron-electron interactions lead to competing electronic and magnetic phases, with consequences like the emergence of high temperature superconductivity.

A few years ago, this paper demonstrated that it was possible to get superconductivity at the interface between SrTiO3 and LaAlO3, two oxides that are both insulating if perfectly stoichiometric. Still, SrTiO3 is known to superconduct if highly doped, and therefore this observation, while a great experiment, wasn't hugely shocking, given the existence of a high density electron gas at the STO/LAO interface. More recently, this paper showed that high temperature superconductivity could happen at the interface between a nominally insulating oxide and a metallic (but not superconducting) cuprate related to the high-Tc materials. This past week on the arxiv, a logical successor to these works appeared here. The authors use two nominally insulating oxides (STO again, and CaCuO2). Because of imperfect stoichiometry at the interface (excess oxygen, apparently), there is a conducting layer at the interface, with a superconducting transition around 50 K (in one sample, though the others all show transitions exceeding 25 K). Bearing in mind that this is a preprint (and therefore has not been refereed), it is still very exciting. We are finally approaching the ability to engineer complex materials (not just semiconductors) at the atomic layer level, and this should be an incredible playground for basic science and materials engineering. It'd be great to get plugged into a collaboration working in this area.

Thursday, July 14, 2011

Science and the public

I couldn't help but notice that one of my favorite producers of animated films, Aardman Animation, is coming out with a new movie (trailer here). I find it very interesting that the UK version of the movie is "The Pirates! In an Adventure with Scientists!", while the US version is "The Pirates! Band of Misfits!". The film is based on a book with the former title, by the way. I don't want to overanalyze this, but it's hard to escape the conclusion that some marketing drone decided that "scientist" is box-office poison in the US, and that "misfit" is an acceptable and more marketable substitute. Great. Wonderful. In case you're wondering, Charles Darwin shows up as a character in the book/movie. I imagine the US ads won't be playing that up very much, or there will be protests. Sigh.

(I do have a science post I'll make shortly. I just couldn't let this pass w/o comment. And it's taking enormous self-restraint not to launch into extended political invective about the US, but there are many places where people can read that if they want to.)

Thursday, July 07, 2011

Follow-up, and blogger drop-off

Regarding the story mentioned here, Nature has published both a provocative and interesting article by Eugenie Reich about the larger issues raised, and an editorial. Sorry that these are behind a pay-wall. To summarize in a few sentences: Eugenie Reich points out that the misconduct investigation relevant to this discussion highlights important problems with the US Department of Energy's handling of such cases. To wit: There are issues of independence and chain of authority of the investigators, and lack of proper record keeping, documentation, etc. of investigation reports. The conclusion is that this is a powerful argument for the DOE to establish an Office of Research Integrity, like those in some other agencies. The editorial from Nature chastises the DOE along these lines. Interesting that the Nature editorial makes no mention at all of their own role in not publishing technical comments relevant to this particular matter.

In blogging news, there has been a drop-off in the number of active physical science bloggers. David Bacon's Quantum Pontiff has decohered. The Incoherent Ponderer has gone so far as to apparently delete his entire blog and blogger profile. Other blogs have not been updated in many months. It's likely that this is all part of a natural stabilization of blogging - people run out of things to say, and the novelty of blogging has worn off. It will be interesting to see how this trend resolves. It'll be a shame to have fewer interesting voices to follow, though. (Clearly we should all switch to Twitter, since 140 characters should be more than sufficient to carry out detailed science discussions or popularizations for the lay audience. Ahem.)

Tuesday, July 05, 2011

Crowd-sourcing, video games, and the world's problems

This past weekend, I caught a snippet of a rebroadcast of this NPR story about Jane McGonigal and the thesis of her recent book. In short, she points out that as a species we have spent literally millions of person-years playing World of Warcraft, an online game that involves teamwork and puzzle-solving (as well as all the usual fun silliness of videogames). Her point is that in the game environment, people have demonstrated great creativity as well as a willingness to keep coming back, over and over, to tackle challenging problems (in part because there is recognition by the players that problems are pitched at a level that is tricky but not insurmountable). She wants to harness this kind of intellectual output for good, rather than just have it as a social (or antisocial) outlet. She's not the first person to have this sort of idea, of course (see, e.g., Ender's Game, or the Timothy Zahn short story "The Challenge"), but the WoW numbers are truly eye-popping.

It would be great if there were certain scientific problems to which this could be applied. The overall concept seems easiest to adapt to logistics (e.g., coming up with clever ways of routing shipping containers or disaster relief supplies), since that's a puzzle-solving subdiscipline where the basic problems are at least accessible to lay-people. Trying this with meaty scientific challenges would be much more difficult, unless those challenges could be translated effectively into problems that don't require years and years of foreknowledge. Hmm. Still very thought-provoking.

Friday, July 01, 2011

The tyranny of the buried interface

Time and again, a major impediment to research progress in condensed matter physics, electrical engineering, materials science, and physical chemistry is the need to understand what is happening in some system at a buried interface. For example, in organic photovoltaic devices, it is of great importance to learn more about what is happening at metal/organic semiconductor interfaces (charge transfer, interfacial dipole formation, Fermi level pinning) and organic/organic interfaces (exciton splitting at the interface between electron- and hole-transporting materials). Another example: in lithium ion batteries, after the first couple of charge and discharge cycles, a "solid electrolyte interphase" (SEI) layer forms at the interface between either electrode (cathode or anode) and the electrolyte. The SEI is nanoscale in thickness, stabilizes the electrode surface, establishes the energetic lineup between the electrolyte redox chemistry and the actual electrode surface, strongly affects the kinetics of lithium ion transport, etc.

Unfortunately, probing buried interfaces in situ in functioning systems is extremely hard. There generally is no Star Trek scanner device that can nondestructively reveal atomic-scale details of buried 3d structures. Many of our best characterization approaches are surface-based, or require thinned down samples, and there are always difficult questions about how information gained in such investigations translates to the real situation of interest. This is not a new problem. From the early days of surface science and before, people have been worrying about, e.g., how to connect studies performed in UHV on single crystal surfaces with "real world" situations on polycrystalline surfaces with ambient contaminants. There are some macro-scale interface sensitive approaches (exploiting x-ray standing waves, or interfacial optical effects). Still, the more people working on developing better characterization tools toward this end, the better, even if it doesn't sound terribly exciting to the masses.

Thursday, June 23, 2011

a recurring story

Five years ago, there was a controversy in the pages of Nature regarding this paper from 1993, the first to claim atomic-resolution chemical analysis via scanning transmission electron microscopy.  At issue was whether the data in the paper had been reprocessed (in response to referee concerns) in a legitimate or a misrepresentative way, and whether the authors had been honest and forthcoming with the journal and the reviewers about the procedures they'd followed.  The reason matters came to a head more than 12 years after the original paper was the appearance of a preprint on the arxiv (subsequently submitted to Nature Physics), sharing two of the authors of the original paper, with further questions raised about the handling and analysis of data and images.  This was all discussed clearly and succinctly by ZZ at the time.  Nature allowed the authors to publish a corrigendum - a correction rather than a retraction - regarding the original '93 paper.  This was sufficiently controversial that Nature felt the need to write an editorial explaining their decision.  Oak Ridge did an investigation of the matter, and concluded that there was no fabrication or falsification of data; that report and a response by the authors are linked here.  Judging from the appearance of this on the arxiv last night, it would appear that this isn't quite the end of things.

Wednesday, June 15, 2011

Pitch for a tv show

Summer blogging has been and will continue to be light, as I try to get some professional writing done. In the meantime, though, I have to give my elevator pitch for the awesome new TV show that would be great fun. It's "Chopped" meets "Mythbusters" meets "Scrap Heap Challenge"/"Junkyard Wars". Start off with three teams. Give them a physics- or engineering-related task that they have to accomplish (e.g., write the opening crawl from Star Wars in one mm^2; weigh a single grain of salt), some number of tools that they have to use (e.g., a green laser pointer and an infrared corrected microscope objective), and access to a stocked "pantry" (including a PC, electronics components, etc.). Give them a time limit (4 hours, cleverly edited down to half an hour in broadcast). Points awarded for success at the task, time used, and elegance. I think it could be a hit, particularly if there are explanations (narrated by cool resident experts) delivered in a fun, accessible tone. It'd be fun, even if it did conjure up images of Guy Fleegman in Galaxy Quest.

Monday, June 06, 2011

Soliciting book or review article recommendations

I am interested in reading good books or review articles on two particular topics, and I'm hoping that by "crowd-sourcing" to my readership, I might do better than wandering through the literature.  First, I want to find an authoritative discussion of the physics behind the electrochemical potentials of battery materials - not the lore of decades of electrochemistry, but a real hashing out of the physics.  Second, I would like to find a thorough, authoritative discussion of the physics behind catalysis.  Again, I'm not interested in handwaves and parametrized empirical knowledge, but would prefer a physics-based discussion that explains, e.g., why Pd is good at splitting H2, while Ti is not.  Any help would be greatly appreciated.

Sunday, June 05, 2011

Several items

I returned late last week from Germany, where I spoke at a summer school. One fun part of the trip was a tour of the main experimental facility at the neighboring Max Planck Institute for the Chemical Physics of Solids. The facility was a large high-bay lab space, with 9 (!) dilution refrigerator apparatuses, as well as a 0.3 K scanning tunneling microscope with a 12 T magnet. Very impressive infrastructure, and the place was neat as a pin - the very model of a lab. Note to self: figure out how to instill Germanic ultraprecise lab notebook habits in all incoming grad students....

Other news this week that is interesting: the US National Academies have decided to make many of their books available for pdf download free of charge. I'm a particular fan of one or two of these. For example, with reference to recent discussions about helium as a resource, check this out.

There is also a great deal of attention being paid to a paper in this week's Science by the group of Aephraim Steinberg. The experiment sends single photons one at a time through a two-slit type apparatus. This is one of those experiments meant to blow the minds of undergrad physics majors taking quantum for the first time: you still build up an interference pattern from the slits, even though there's only one photon in there at a time. That means the photon must be interfering with itself(!). In the new work, the group uses optics techniques (that I freely admit I do not fully understand) to correlate, after the fact, the ("weakly" measured) momentum of the photon while in the apparatus with the (strongly measured) final position of the photon on a CCD. This does not violate the uncertainty relation, since it basically finds a quantum mechanical ensemble average of the momentum as a function of final position. Still, very neat, and discussed in some detail here and here.

I've liked Steinberg's work for years. This business about quantum measurement and post-selection is very fun to think about. For example, this comes up when considering the question, "how long does it take a quantum particle to tunnel through a classically forbidden region?". What you're basically asking is, given the successful measurement of a quantum particle at some position beyond the classically forbidden region, when did the particle, in the past, impinge upon that region in the first place? This is a very hard question to answer experimentally.

Friday, May 27, 2011

Recently in the arxiv

As I get ready to head to Germany for my first ever experience lecturing at a Max Planck summer school, I wanted to point out very briefly three of a number of interesting papers that came through the arxiv this week.

arxiv:1105.4055 - Janssen et al., Graphene, universality of the quantum Hall effect and re-definition of the SI
This paper compares the quantization of the Hall resistance in two different two-dimensional electronic systems: a conventional 2d electron gas in a GaAs/AlGaAs structure, and graphene. The authors find that the Hall resistance is quantized in units of h/e^2 identically in the two systems to parts in 10^11. On the one hand, this is really amazing, since you're seeing essentially exact quantization in two different systems, and the whole basis for the quantum Hall effect relies in part on dirt - without disorder, you wouldn't see the quantum Hall physics. And yet, even though the materials differ and dirt plays an important role, you get precise quantization in terms of fundamental constants. This is the kind of emergent, exact phenomenon that shows the profound character of condensed matter physics.
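
For scale, the quantized resistance here is the von Klitzing constant, R_K = h/e^2. A quick numerical check (using CODATA values for the constants):

```python
# The von Klitzing constant R_K = h / e^2, the resistance quantum that
# sets the quantum Hall plateaus.
h = 6.62607015e-34   # Planck's constant, J s
e = 1.602176634e-19  # electron charge, C

print(f"R_K = h/e^2 = {h / e**2:.3f} ohms")  # ~25812.807 ohms
```

Agreement to parts in 10^11 therefore means the plateau resistances in the two very different materials match at the sub-microohm level.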

arxiv:1105.4642 - Barends et al., Loss and decoherence due to stray infrared light in superconducting quantum circuits
As someone who struggled mightily in grad school to avoid the effects of infinitesimal amounts of rf noise leaking into his ultracold sample, I was impressed by this. The authors demonstrate that infrared radiation from the surroundings, even when those surroundings are at 4.2 K, can have a marked, detectable impact on the coherence properties of superconducting quantum bits. They compare results with and without an absorbing radiation shield in the way, and the effects aren't small. Wild. Time to break out those 50 mK shields from our old nuclear demag cryostat....

arxiv:1105.4652 - Paik et al., How coherent are Josephson junctions?
Along these same lines, these authors have been able to demonstrate coherence times in superconducting qubits stretching into the tens-of-microseconds range. They do this via a new kind of cavity, essentially controlling the environmental dissipation. This isn't really my area, but I know enough to be impressed, and also to be surprised at the apparent lack of the usually ubiquitous 1/f noise problems (in the critical current) that often limit coherence in these kinds of devices. As they point out, these numbers are encouragingly close to the thresholds needed for quantum error correction to be realistic.

Friday, May 20, 2011

Nano for batteries

Improved batteries would be of enormous benefit and utility in many sectors of technology.  A factor of 10 improvement in battery capacity (with good charging rate, safety, etc.) would mean electric cars that get 1000 miles per charge, laptops that run for days w/o charging, electrical storage to help with the use of renewable energy, and a host of other changes.  This rate of performance enhancement is completely commonplace in semiconductor electronics and magnetic data storage, yet batteries have lagged far, far behind.

There is real hope that nanostructured materials can help in this area.  Three examples illustrate this well.  Conventional lithium ion batteries have an anode (usually graphitic carbon, into which lithium ions may be intercalated) and a cathode (such as cobalt oxide), with an intervening electrolyte, and a separator barrier to prevent the two sides from shorting together.  A reasonable figure of merit is the capacity of the electrodes, in units of mA-h/g.  The materials described above, anode and cathode, have capacities on the order of 200-300 mA-h/g.  It is known that silicon can take up even more lithium than carbon, with a possible capacity of more than 3000 mA-h/g (!).  Complicating matters, Si swells dramatically when taking in Li, meaning that bulk single-crystal Si cracks and self-pulverizes when taken through a few charge/discharge cycles.  However, Si nanowires have been observed to be much better behaved - they have a large specific surface area, and enough free surface to swell and shrink without destroying themselves - see here.  Very recently, this paper showed spectacular electron micrographs of the swelling of such nanowires.
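
A back-of-the-envelope way to see where those capacity numbers come from (my own sketch, assuming full lithiation to LiC6 for graphite and to Li15Si4 for silicon):

```python
# Theoretical gravimetric capacity from Faraday's law:
#   capacity [mA-h/g] = n * F / (3.6 * M),
# where n is the number of Li per formula unit of the host, F is the
# Faraday constant, and M is the host molar mass in g/mol (the factor
# of 3.6 converts coulombs per gram to mA-h per gram).
F = 96485.0  # C/mol

def capacity_mAh_per_g(n_li, molar_mass):
    return n_li * F / (3.6 * molar_mass)

# Graphite fully lithiated to LiC6: 1 Li per 6 carbons (M = 6 * 12.011)
print(f"{capacity_mAh_per_g(1.0, 6 * 12.011):.0f} mA-h/g")   # ~372

# Silicon fully lithiated to Li15Si4: 3.75 Li per Si (M = 28.086)
print(f"{capacity_mAh_per_g(3.75, 28.086):.0f} mA-h/g")      # ~3578
```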

A second example:  nanostructured cobalt oxide particles, self-assembled using selectively modified virus proteins, have been put forward as high capacity Li ion battery cathodes.  This approach has also been extended to iron phosphate cathode material.

A third example:  dramatically improved charging rates may be possible using nanostructured electrode geometries, such as these inverse-opal shapes.

There is real hope that nanostructured materials may enable true breakthroughs in battery technology, even though batteries have been studied exhaustively for many decades.  The ability to engineer materials at previously inaccessible scales may bear fruit soon.

Monday, May 16, 2011

Rice University clean room manager needed.

Just in case anyone out there has or is a promising candidate, I wanted to point out that Rice University is looking for a new clean room facilities manager.  (This is not a soft money position.)  Here is the text of the advertisement:

Rice University is seeking a technical manager to oversee the operations of its clean room user facility and associated characterization equipment.  This Class 100/1000 facility contains a suite of instruments, including a photolithography mask maker, a contact mask aligner, an e-beam evaporator, an RIE/PECVD system, and a collection of characterization tools.  The manager's responsibilities include oversight of this facility, training of undergraduate and graduate students and other users, and maintenance and upkeep of the equipment.  Applicants must have a BS degree in a science or engineering discipline (PhD preferred but not required) and extensive experience with several of the relevant instruments, or a related technical degree or diploma with an additional 2 years of related experience (for a total of 7 years of related experience working with clean room instruments).  Salary will be commensurate with experience.  The need to fill this position is immediate, and resumes will be examined as they arrive.  Please visit http://cohesion.rice.edu/campusservices/humanresources/riceworks.cfm to apply for this listing.  Rice University is an equal opportunity, affirmative action employer.

Friday, May 13, 2011

A university selling its soul

I'll get back to physics shortly.  These two articles (here and here) explain how, in exchange for $1.5M in donations, the Florida State economics department agreed to give the donors veto power over faculty hiring for the donor-supported positions.  Moreover, the donors can withdraw the positions if they aren't happy with annual performance reviews of the professors.  Wow.  I know times are tight, but FSU has clearly decided that they're up for bid.  I don't care whether the donors are right-wing or left-wing (hint: they're right wing) or centrist - a university that allows donors direct control over faculty hiring and evaluation is out of its mind.  Gee, you think those professors are going to be free to do whatever research they want?  Do you think there's going to be pressure on all of the faculty within the department to toe the line rather than risk angering the donors?  What a mess.  Well, at least it confirms that Texas doesn't have a monopoly on idiocy.

Update:  blogger ate this post, and I had to reconstitute it from the cached version on bing (google blew this one all the way around).  Clearly the Koch brothers are responsible :-)

Thursday, May 05, 2011

Gravity Probe B

Finally, after only 45 years from conception to publication of results, Gravity Probe B has announced (dramatic pause) that Einstein's General Theory of Relativity is consistent with their data.  I had mentioned GPB ("The Project that Ate Stanford") once before.  It was a fascinating, complex, multidisciplinary project that, thanks to its experimental design and extraordinarily long duration, had great impact on a large number of physics, materials science, and engineering careers.  Still, I think they were in a bit of a no-win scenario, particularly once it became clear that there were problems with interpreting the data.  Either they support general relativity, or people just wouldn't trust the results, given how much other evidence there is out there that GR is right, at least in the relatively weak field limit.

Nano for solar

Sorry about the delay in this posting.  Real life has been busy.

Solar energy is an obvious candidate for a long-term solution to many of our energy problems.  The power reaching the surface of the earth is on the order of 350 W/m^2.  Covering an area of around 250 km by 250 km with 10% efficient solar cells would yield average power on the scale of the world's total electricity consumption.  Unfortunately, the total surface area of all photovoltaics ever manufactured is a tiny fraction of that.  (This is why being able to produce photovoltaic cells by printing processes would be great.  Hint: estimate the total area printed by the New York Times in a month.)  There are a number of challenges involved in solar.  Why might "nano", broadly defined, be a big help?  Let me give three examples from the large wealth of ideas out there.
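
The arithmetic behind that land-area estimate, using the round numbers above (and generously treating the insolation figure as a time average):

```python
# Average output of a large square solar farm; all inputs are the
# post's round numbers, not precise values.
insolation = 350.0   # W/m^2 reaching the surface, treated as an average
efficiency = 0.10    # 10% efficient cells
side_m = 250e3       # square array, 250 km on a side

power_TW = side_m**2 * insolation * efficiency / 1e12
print(f"{power_TW:.1f} TW of average output")  # ~2.2 TW
```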

1) Semiconductor nanocrystals as absorbers.  Because of the beauty of quantum confinement, it is possible to make semiconductor nanocrystals out of a single material, and use different sizes to capture different parts of the solar spectrum.  Moreover, there is evidence (after some controversy) that nanocrystals may enhance "multiexciton generation" (e.g., here and here).  In a traditional solar cell, a photon with energy twice as large as the semiconductor band gap will generate a single electron-hole pair (which must be ripped apart somehow), and inelastic processes will lead to the excess (above the band gap) energy being lost as heat.  However, at some rate, you can instead generate two band-gap-energy pairs.  The idea is that the rate of that process can be enhanced in nanocrystals, since conservation of "crystal momentum" can be relaxed in materials that are so surface-dominated.

2) Nanostructured materials for photoelectrochemical cells.  There are a number of proposals for using electrolytes in solar applications, including dye-sensitized solar cells.  In this case, one would like to use a high surface area anode, such as nanostructured TiO2 or some similar nanostructured material.  Moreover, instead of using organic dyes as the absorbers and sources of photoexcited electrons, one could imagine again using semiconductor nanocrystals.

3) Plasmon-enhanced photovoltaics.  One way to try to boost the efficiency of solar cells is to get the light to hang around the absorber material for longer.  One compact way to do so is to use plasmonically active metal nanoparticles or nanostructures as optical antennas.  The local fields near these structures can enhance scattering and local intensity in ways that tend to boost performance, though resistive losses in the metal may limit their effectiveness.  It's worth pointing out that one can also use plasmonic antennas as sources of hot electrons, also interesting from the photovoltaic angle.

There are many more ideas out there - I haven't even mentioned anything about nanotubes or graphene.  While the odds of any individual idea being a truly transformative breakthrough are small, there are probably more clever things being proposed in this area now than at any time before, thanks to our ability to manipulate matter on very small scales.

Wednesday, April 27, 2011

Nano and energy

It might be fun to do a few posts on how nanoscale science can be used to the benefit of our energy concerns.  First, let me specify what I mean when I say that there's an "energy problem".  The fact is, average people enjoying first-world standards of living (e.g., US/Canada/Western Europe/Japan) have an enormous per capita energy consumption compared to, e.g., tribesmen in sub-Saharan Africa, or rural farmers in the hinterland of China.  If the goal is to raise the 5-ish billion people not enjoying the high life up to that standard of living, then we've got a problem: there's no nice way to do so without incurring other enormous costs (e.g., burning enormous quantities of fossil fuels; building GW-scale power plants at very high rates, like several per day for the next 30 years - see the arithmetic below).  Either we're not going to raise the standard of living for those billions of people, or the energy costs of the top economic tier are going to have to fall, or we're headed for major upheaval (or possibly some of all of the above).
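
Here is the arithmetic behind that plant-construction estimate, with my own assumed round numbers (the 15 TW figure, the capacity factor, and the plant size are all illustrative):

```python
# How many GW-scale plants per day would it take to add ~15 TW of
# average power over 30 years?
new_avg_power_TW = 15.0   # extra average power for ~5 billion people
capacity_factor = 0.4     # fleet-wide average output / nameplate rating
plant_GW = 1.0            # nameplate rating of one big plant
years = 30

plants = new_avg_power_TW / capacity_factor * 1e3 / plant_GW
print(f"{plants / (years * 365):.1f} plants per day")  # ~3.4 per day
```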

When I teach my second-semester nano class, I point this out, and if you want interesting quantitative references, check here.  Broadly construed, nanotechnology and nanoscale science (and more broadly, condensed matter physics and materials science) can try to address several aspects of this challenge, though there are certainly no silver bullets.  The areas that come to mind are:  energy generation; energy storage; energy distribution; conservation or improved efficiency; and environmental remediation.  In future posts, I'll try to summarize very briefly a few thoughts on this.   

Saturday, April 23, 2011

Public funding of science, and access to information

On multiple blogs over the last few months, I've read comments from lay-persons (that is, nonscientists) that say, in essence, "As a citizen, I paid for this research, and therefore I should have access to all the data and all the software necessary to analyze that data."  The implications are (1) research funded by the public should be publicly accessible; and (2) the researchers themselves sometimes/often? hold back information or misinterpret the results, perhaps because they are biased and have an agenda to further.  

Now, speaking as a pragmatist, I see a number of issues here.  For example, making available raw columns of tab-delimited numerical data and, e.g., matlab code won't give a nonscientist the technical know-how to do the analysis properly, or to know which models to apply, etc.  Things really get tricky if the "data" consists of physical samples (e.g., soil, or ice cores, or zebrafish)....  Yes, publicly funded scientists have the responsibility to make their research results available to the public, and to explain those results and their analysis.  As a practical matter, though, scientists are not obligated to make every interested citizen into an expert on their research.

While this is an interesting topic, I'd rather discuss a related issue:  How much public funding triggers the need to make something publicly available?  For example, suppose I used NSF funding to buy a coaxial cable for $5 as part of project A.  Then, later on, I use that coax in project B, which is funded at the $100K level by a non-public source.  I don't think any reasonable person would then argue that all of project B's results should become public domain because of 0.005% public support.  When does the obligation kick in?  Just an idle thought on a Saturday morning.

Tuesday, April 19, 2011

Friction, commensurability, and superlubricity

In the limit of clean surfaces, friction has its origins in the microscopic, chemical interactions at the interface between the two objects in question.  One of the more amazing (to me, anyway) consequences of this is the extremely important role played by commensurability between the surfaces.  Let me explain with an example.  Consider a gold crystal terminated at the (111) surface, and another gold crystal also terminated at the (111) surface.  Now, if those two surfaces are brought into contact, with the right orientation so that they match up as if they were two adjacent layers of atoms inside a larger gold crystal, what will happen?  The answer is, in the absence of adsorbed contaminants, the surfaces will stick.  This is called "cold welding".  In contrast, if you bring together two ultraclean surfaces that are incommensurate, they can slide past each other with essentially no friction.  This is called "superlubricity".  Here are two great examples (pdf of first one; pdf of second one) of this.
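
To see why commensurability matters so much, here is a toy numerical illustration (entirely my own cartoon, not taken from the papers above): N atoms of a rigid slider, lattice spacing b, each sitting in a sinusoidal substrate potential of period a. The lateral forces add coherently when b/a is rational, and nearly cancel when it is irrational.

```python
import math

def peak_lateral_force(b_over_a, n_atoms=1000):
    # Scan the slider's overall position x0 and return the largest
    # magnitude of the total lateral force on the rigid slider.
    return max(
        abs(sum(math.sin(2 * math.pi * (n * b_over_a + x0))
                for n in range(n_atoms)))
        for x0 in (i / 200.0 for i in range(200))
    )

print(peak_lateral_force(1.0))           # commensurate: ~1000, it locks
print(peak_lateral_force(math.sqrt(2)))  # incommensurate: ~1, it slides
```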

In this new paper, Liu et al. are able to do some very cute experiments in this regard, looking at the motion of thin graphite flakes (exfoliated from, and sliding on, graphite pedestals).  It's clear from the observations that graphite flakes shifted relative to the underlying graphite substrate can slide essentially frictionlessly over micron scales.  Very neat and elegant, and surprising since there is no rotation at work here to break commensurability.  This is a very firm reminder that our macroscale physical intuition about materials and their interactions can fail badly at the nanoscale.

Tuesday, April 12, 2011

Playing chicken with the global economy

I get it - we need to fix the structural problems associated with the US budget.  However, don't these geniuses realize that threatening to default (let alone actually defaulting) on the US sovereign debt will severely undermine the dollar?  It's like they actually want to have hyperinflation, so that they can claim it was all Obama's fault.  Other countries don't have a  "debt ceiling", you know.  Update:  seems I'm not alone in realizing that even talking about default is dangerous.

Monday, April 11, 2011

Choosing a postdoctoral position

I had a request a while ago for a post about how to choose a postdoctoral position (from the point of view of a finishing-up grad student, I'm assuming).  This is a tricky topic, precisely because it's somewhere between choosing a grad school (lots of good places to go, with guaranteed open positions every year) and getting a faculty job (many fewer open positions per year in a given field, and therefore a much restricted field of play; plus, a critical need to make some hard decisions that could be postponed or avoided in grad school).  Moreover, different disciplines within the physical sciences have very different approaches on postdocs.  In some fields like astronomy, externally funded fellowships sponsored by observatories/facilities/programs are standard practice, while condensed matter physics is much more principal-investigator-driven.  So, I'll try to stick to general points.
  • I strongly suggest going somewhere that is not your graduate institution, unless there are strong extenuating circumstances.  It's just intellectually healthier to get a broad exposure to what is out there, rather than to stay entirely comfortable.
  • This is also one of the relatively few points in your career when you can really shift gears, if you are so motivated.  My doctorate was in ultralow temperature physics, but I decided to become a nano researcher, for example.  More dramatically, this is often the point where many people get into interdisciplinary fields like biophysics.  There are trade-offs, of course.  If you do a postdoc in an area very close to your thesis work, you can often make rapid progress.  On the other hand, most people who go on in research (industrial or academic) do not end up working on their thesis topic for the lion's share of their career, and this is a chance to broaden your skill set and knowledge base.
  • Word of mouth and self-motivation are essential to getting a good postdoc position, beyond posted ads.  If you're finishing up in grad school, you are enough of a professional that you should be able to email or otherwise contact people whose work you find interesting and exciting, and ask whether they have any postdoctoral openings.  You should make sure that these emails are reasonably detailed and that it's clear they're personalized - not a form letter being spammed to several hundred generic faculty members simultaneously.  Your hit rate won't be high, but it's better than nothing.
  • Don't discount industry, though it's a narrowing field.  There are still industrial postdoc positions, and if you've got an interest in industry more so than academia, then you should look at these possibilities.  This includes places like Bell Labs (yes, they still exist), IBM, Intel, HP Labs, etc.  It is a tragedy that there aren't more opportunities like this out there now.
  • You need to think about how a particular postdoc position is structured.  Are you going to be acting as middle-management, helping to mentor a team of grad and undergrad students?  Are you going to be leading a research project yourself?  Is there a lot of lab-building or lab-moving?  How long is the position, and how does it match up w/ the seasonal nature of academic hiring, if academia is what you want to do?  Where have previous postdocs in that lab or group ended up?
  • How set are you on academia?  If you are set on academia, what kind of academic position would make you happy?  Go into the academic track with your eyes open!  If you're looking beyond academia, what do you need out of a postdoc position (besides a paycheck)?  Are there particular skills you want to learn?
None of this is particularly insightful, but it doesn't hurt to have this written down in one place.  Suggestions for further things to consider are invited in the comments....

Tuesday, April 05, 2011

Designing a lab

Designing a lab is not trivial, particularly if you have no prior experience doing it.  My new lab (day 2 of the move....) came about under close to the ideal circumstances: a new building under construction, with a very free hand in determining the layout, the facilities, and so forth.  Even so, in any realistic process you never get everything you want (e.g., this building does not have a building-wide deionized water system; I can't have unlimited space; there are restrictions based on cost and feasibility).  The challenge is to end up with functional space - laid out intelligently, so that work flows well and you don't find yourself fighting with the building or yourselves.  Sometimes this is not simple.  In my original lab space, for example, the floor of the building was never designed with vibration-sensitive work in mind.  The need to position certain pieces of equipment on the vibrationally quiet parts of the floor strongly influenced the lab layout, rather than basic experimental logic.

Lab design ranges from the Big Picture (e.g., I have a couple of optics tables, so I should probably have a separate area with independently controlled lighting; I want isolation transformers to keep my sensitive measurement electronics off the power lines used for my big pumps.) to a zillion little details (e.g., where should every single electrical outlet and ethernet port be positioned?  What about emergency power?  Gas lines?  What fittings are going to be on the chilled water lines?).  Nothing is ever perfect, and there are always minor glitches (e.g., mislabeled circuit breakers).  You also want to design for the future.  If you think you're eventually going to need a gizmo that requires chilled water or a certain amount of 480V current, it's better to plan ahead, cost permitting....  The situation is definitely more constrained if you're moving into pre-existing space, particularly in an older building.  Like many aspects of being a professor, this is something that no one ever sits down and teaches you.  Rather, you're left to figure it out, hopefully with the help of a professional.

Monday, April 04, 2011

Moving the lab

Today's the beginning of moving my lab into the new Brockman Hall for Physics here at Rice.  As the week goes on, if I have time I'll write a bit about the process of lab design and the joys of moving equipment.  It's exciting, but there's no question that I wish we could skip over the actual transition.

Sunday, March 27, 2011

Blogger spam + McEuen novel

Two unrelated topics.  First, blogger needs to get their act together regarding comment spam.  They have some attempt at automatic spam detection, but it's clear that in the last two or three weeks people have figured out how to evade their blocking algorithm.  The spam comments quote some fragment of the original blog post or a previous comment, and then have a clickable username that links to some shady vendor website.  Very annoying.  I'd really rather not shift to a moderated comments approach, but I may have to if this keeps up.

Second, I was very surprised last weekend when, reading the Wall Street Journal, I came upon an article about Paul McEuen (author link, physicist link), who has written an apparently very successful novel.  As one of my friends exclaimed when I told her the news at the APS: come on, Paul - you're again making the rest of us look like lazy underachievers!  I'm going to have to get this on kindle....

Thursday, March 24, 2011

March Meeting, further thoughts

I had to cut my March Meeting a bit short this year, to get back to Rice in time for the dedication of our new Brockman Hall for Physics.  Still, a few more thoughts from the APS meeting:
  • Michelle Simmons gave a terrific talk summarizing more than a decade of work by her group at the University of New South Wales on their progress toward the eventual goal of building a quantum computer based on P donors in Si (the Kane approach).  I knew of the work, but I'd never seen it all laid out like that, and it was impressive.  There are very few people out there in the CM community with the fortitude to plan out and steadily pursue a coherent, goal-directed research program over a dozen years.
  • I also saw something I hadn't observed in a number of years:  a speaker completely blowing off the 10 minute time limit on a contributed talk.  When the yellow warning light clicked on, it was clear that the speaker was nowhere near the end.  When the red light clicked on and started to blink, still no conclusion.  The session chair stood up and loomed intimidatingly.  No dice.  Finally the speaker ended after a total of about 16 minutes.  That takes nerve (and a lack of consideration for the others in the session....).
  • I chaired two sessions this year (note to self:  only chair one session....), and in contrast to the previous point, really didn't have any bad talks at all in there.  Very pleasant, generally.  Most interestingly, the session on the metal-insulator transition in vanadium oxide was 100% experimental talks!  Perhaps the theorists have given up?  (kidding.)

Tuesday, March 22, 2011

2011 APS March Meeting, first thoughts

A few brief thoughts at the APS March Meeting (more later....) in Dallas:
  • First time I've ever been at a convention center with a graveyard adjacent to the building.  Quite a time saver if there are really bad talks, I suppose.
  • Frank Wilczek still gives a terrific talk about the connection between superconductivity and high energy physics.  Very droll, too.  He clearly has a strong aesthetic desire for supersymmetry, but just as clearly acknowledges that all of this could go up in smoke, depending on what the LHC finds.
  • The APS's attempt at a mobile app (for iPad, iPhone, etc.) is so painfully slow and incomplete (the lack of scheduling ability I can almost understand, but how can you not list the room numbers for the sessions?) that it's better to use wireless internet access to visit the APS meeting website instead.
  • Roland Wiesendanger also presents an outstanding talk.  His group's accumulated work on spin-polarized STM is very impressive, and definitely made me feel an intense bout of "imaging envy" (in the sense that my group's work usually does not have beautiful 3d renders of data sets that grace the cover of glossy journals).
  • Lots of discussions with people about looming budget concerns, and separately the decline of science journalism.  On some level, these topics are related....

Sunday, March 13, 2011

Advice on choosing a graduate school

This is my 500th post (!), and I realized, after spending a big part of the last two days talking with prospective graduate students, that I had never written down my generic unsolicited advice about picking a graduate school. 
  • Always go someplace where there is more than one faculty member with whom you might want to work.  Even if you are 100% certain that you want to work with Prof. Smith, and that the feeling is mutual, you never know what could happen, in terms of money, circumstances, etc.  Moreover, in grad school you will learn a lot from your fellow students and other faculty.  An institution with many interesting things happening will be a more stimulating intellectual environment, and that's not a small issue.
  • It's ok at the applicant stage not to know exactly what you want to do.  While some prospective grad students are completely sure of their interests, that's more the exception than the rule.
  • If you get the opportunity to visit a school, you should go.  A visit gives you a chance to see a place, get a subconscious sense of the environment (a "gut" reaction), and most importantly, an opportunity to talk to current graduate students.  Always talk to current graduate students if you get the chance - they're the ones who really know the score.  A professor should always be able to make their work sound interesting, but grad students can tell you what a place is really like.
  • I know that picking an advisor and thesis area are major decisions, but it's important to realize that those decisions do not define you for the whole rest of your career.  I would guess (and if someone has real numbers on this, please post a comment) that the very large majority of science and engineering PhDs end up spending most of their careers working on topics and problems distinct from their theses.  Your eventual employer is most likely going to be paying for your ability to think critically, to structure big problems into manageable smaller ones, and to do research, rather than for the particular detailed technical knowledge from your doctoral thesis.  A personal anecdote:  I did my graduate work on the ultralow temperature properties of amorphous insulators.  I no longer work at ultralow temperatures, and I don't study glasses either; nonetheless, I learned a huge amount in grad school about the process of research that I apply all the time.
  • You should not go to grad school because you're not sure what else to do with yourself.  You should not go into research if you will only be satisfied by a Nobel Prize.  In both of those cases, you are likely to be unhappy during grad school.  
  • I know grad student stipends are low, believe me.  However, it's a bad idea to make a grad school decision based on a financial difference of a few hundred or a thousand dollars a year.  Different places have vastly different costs of living.  Pick a place for the right reasons.
  • Likewise, while everyone wants a pleasant environment, picking a grad school largely based on the weather is silly.
  • Pursue external fellowships if given the opportunity.  It's always nice to have your own money and not be tied strongly to the funding constraints of the faculty, if possible.
  • Be mindful of how departments and programs are run.  Is the program well organized?  What is a reasonable timetable for progress?  How are advisors selected, and when does that happen?  Who sets the stipends?  What are TA duties and expectations like?  Are there qualifying exams?  Know what you're getting into!
  • It's fine to try to communicate with professors at all stages of the process.  We'd much rather have you ask questions than the alternative.  If you don't get a quick response to an email, it's almost certainly due to busy-ness, and not a deeply meaningful decision by the faculty member.
There is no question that far more information is now available to would-be graduate students than at any time in the past.  Use it!  Look at departmental web pages, look at individual faculty member web pages.  Make an informed decision.  Good luck!

Wednesday, March 09, 2011

Blogging scarcity - tidbits.

My blogging has been sparse of late because of several colliding deadlines and constraints (NSF report due; review article due; APS meeting coming up; impending travel; visits of prospective graduate students; the ever-present book; teaching; impending move of my whole lab to the new Brockman Hall for Physics).  This doesn't mean that there aren't interesting things going on out there in condensed matter physics (and physics in general) - just that I've been extraordinarily busy.

To tide you over, here are a handful of interesting links.

This is an amazing video made entirely from shots of Saturn and its moons taken by the Cassini spacecraft.  It looks like something out of Hollywood, but is a zillion times more fascinating because it's real - no CGI here.

This older preprint (I'll revise the link when the paper comes out in PRL next week) puts forward the argument that the pseudospin degree of freedom of electrons in graphene does actually correspond to a real half-integer angular momentum.  Surprising - I need to think about this more.

This experiment is extremely slick.  The authors are able to use the magnetic field gradient from a sharp magnetic scanned probe tip to interact w/ individual nitrogen vacancy centers in diamond (which have an unpaired electron spin).  This is basically magnetic resonance imaging of single electron spins.

This paper shows a clear implementation of an idea that is increasingly popular:  using the plasmon properties of metal nanostructures to enhance solar energy harvesting.  Essentially the evanescent optical fields from the metal nanoparticles trap the light near the interface where, in this case, the photochemistry is happening.

Saturday, February 26, 2011

Of gaps and pseudogaps

ZapperZ's recent post about new work on the pseudogap in high temperature superconductors has made me think about how to try to explain something like this to scientifically literate nonspecialists. Here's an attempt, starting from almost a high school chemistry angle. Chemists (and spectroscopists) like energy level diagrams. You know - like this one - where a horizontal line at a certain height indicates a particular (electronic) energy level of the system. The higher up the line, the higher the energy. In extended solid state systems, there are usually many, many levels. That means that an energy level diagram would have zillions of horizontal lines. These tend to group into bands, regions of energy with many energy levels, separated by gaps, regions of energy with no levels.

Let's take the simplest situation first, where the energies of those levels don't depend on how many electrons we actually have. This is equivalent to turning off the electron-electron interaction. The arrangement of atoms gives us some distribution of levels, and we just start filling it up (from the bottom up, if we care about the lowest energy states of the system; remember, electrons can be spin-up or spin-down, meaning that each (spatial state) level can in principle hold two electrons). There's some highest occupied level, and some lowest unoccupied level. We care about whether the highest occupied level is right up against an energy gap, because that drastically affects many things we can measure. If our filled-up system is gapped - that is, if the highest occupied level sits right below a gap - then the energetically cheapest (electronic) excitation of the system is the gap energy. Having gaps also restricts what processes can happen, since any quantum mechanical process has to take the system from some initial state to some final state. If there's no final state available that satisfies energy conservation, for example, the process can't happen. This means we can map out the gaps in the system by various spectroscopy experiments (e.g., photoemission; tunneling).
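
To make this concrete, here's a minimal sketch (in Python, with made-up level spacings rather than numbers from any real material) of filling a fixed set of levels from the bottom up and asking what the cheapest excitation costs:

```python
# Minimal sketch, made-up numbers: two "bands" of single-particle levels
# separated by a gap.  Fill from the bottom, two electrons per spatial
# level, and find the cost of promoting an electron to the lowest empty level.
import numpy as np

levels = np.concatenate([np.linspace(0.0, 1.0, 50),   # lower band
                         np.linspace(2.0, 3.0, 50)])  # upper band

def cheapest_excitation(levels, n_electrons):
    """Energy from the highest occupied level to the lowest empty one."""
    levels = np.sort(levels)
    n_filled = -(-n_electrons // 2)  # occupied levels, rounding up
    return levels[n_filled] - levels[n_filled - 1]

print(cheapest_excitation(levels, 100))  # lower band exactly full: ~1.0, the gap
print(cheapest_excitation(levels, 50))   # lower band half full: ~0.02, tiny
```

A full band sitting right below the gap behaves like an insulator; a partially filled band is a metal, since excitations cost essentially nothing.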

So, what happens in systems where the electron-electron interaction does matter a lot? In that case, you should think of the energy levels as rearranging and redistributing themselves depending on how many electrons are in the system. This all has to happen self-consistently. One particularly famous example of what can happen is the Mott insulating state. (Strictly speaking, I'm going to describe a version of this related to the Hubbard model.) Suppose there are N real-space sites, and N electrons to place in there. In the noninteracting case, the highest occupied level would not be near a gap - it would be in the middle of a band. Because the electrons can shuffle around in space without any particular cost to doubly occupying a site, the system would be a metal. However, suppose it costs an energy U to park two electrons on any site. The lowest energy state of the whole system would be each of the N sites occupied by one electron, with an energy gap of U separating that ground state from the first excited state. So, in the presence of strong interactions, at exactly "half-filling", you can end up with a gap. Even without this lattice site picture, in the presence of disorder, it's possible to see signs of the formation of a gap near the highest occupied level (for experts, in the weak disorder limit, this is the Altshuler-Aronov reduction in the density of states; in the strong disorder limit, it's the Efros-Shklovskii Coulomb gap).
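
For the Mott/Hubbard picture above, the same point can be made with an equally small sketch in the zero-hopping ("atomic") limit, where the only energy in the problem is U per doubly occupied site; the function names and numbers here are just illustrative:

```python
# Atomic (zero-hopping) limit of the Hubbard model: the ground state
# spreads electrons out, so only electrons beyond one-per-site are
# forced into double occupancy, at a cost of U each.
def ground_energy(n_electrons, n_sites, U):
    return U * max(0, n_electrons - n_sites)

def charge_gap(n_sites, U):
    """E(N+1) + E(N-1) - 2E(N) at half filling, N = n_sites."""
    N = n_sites
    return (ground_energy(N + 1, N, U) + ground_energy(N - 1, N, U)
            - 2 * ground_energy(N, N, U))

print(charge_gap(n_sites=10, U=4.0))  # prints 4.0: the gap is exactly U
```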

Another kind of gap exists in the superconducting state. There is an energy gap between the superconducting ground state and the low lying excitations. In the high temperature superconductors, that gap is a bit weird, since there actually are low-lying excitations that correspond to electrons with very specific amounts of momentum ("nodal quasiparticles").

A pseudogap is more subtle. There isn't a "hard" gap, with zero states in it. Instead, the number of states near the highest occupied level is depressed relative to noninteracting expectations. That reduction and how it varies as a function of energy can tell you a lot about the underlying physics. One complicated aspect of high temperature superconductors is the existence of such a pseudogap well above the superconducting transition temperature. In conventional superconductors (e.g., lead), this doesn't exist. So the question has lingered for 25 years now: is the pseudogap a sign of incipient superconductivity (i.e., electrons are already pairing up, but they lack the special coherence required for actual superconductivity), or is it a sign of something else, perhaps something competing with superconductivity? That's still a huge question out there, complicated by the fact that doping the high-Tc materials to be superconductors adds disorder to the problem.
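
If it helps to see the distinction numerically, here's a cartoon (the functional forms and parameters are invented for illustration, not fits to any data) of a density of states with a hard gap versus one with a pseudogap, with the highest occupied level at E = 0:

```python
# Cartoon, made-up parameters: a hard gap has strictly zero states for
# |E| < Delta; a pseudogap only depresses the state count near E = 0.
import numpy as np

Delta = 1.0
E = np.linspace(-3.0, 3.0, 7)

hard = np.where(np.abs(E) > Delta,
                np.abs(E) / np.sqrt(np.abs(E**2 - Delta**2) + 1e-12),
                0.0)                                   # zero inside the gap
pseudo = 0.3 + 0.7 * np.abs(E) / (np.abs(E) + Delta)   # depressed, never zero

for e, h, p in zip(E, hard, pseudo):
    print(f"E = {e:+.1f}   hard: {h:4.2f}   pseudo: {p:4.2f}")
```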

Monday, February 21, 2011

This is why micro/nanofab with new material systems is hard.

Whenever I read a super-enthusiastic news story about how devices based on new material XYZ are the greatest thing ever and are going to be an eventual replacement for silicon-based electronics, I immediately think that the latter clause is likely not true. People have gotten very spoiled by silicon (and to a lesser degree, III-V compound semiconductors like GaAs), and no wonder: it's at the heart of modern technology, and it seems like we are always coaxing new tricks out of it. Of course, that's because there have been millions of person-years worth of research on Si. Any new material system (be it graphene, metal oxide heterostructures, or whatever) starts out behind the eight ball by comparison. This paper on the arxiv this evening is an example of why this business is hard. It's about Bi2Se3, one of the materials classified as "topological insulators". These materials are meant to be bulk insulators (well, at low enough temperature; this one is actually a fairly small band gap semiconductor), with special "topologically protected" surface states. One problem is that the material very often ends up doped via defects, making the bulk relatively conductive. Another problem, as studied in this paper, is that exposure to air, even for a very brief time, dopes the material further and creates a surface oxide layer that seems to hurt the surface states. This sort of problem crops up with many materials. It's truly impressive that we've learned how to deal with these issues in Si (where oxygen is not a dopant, but does lead to a surface oxide layer very quickly). This kind of work is very important and absolutely needs to be done well....

Tuesday, February 15, 2011

You could, but would you want to?

Texas governor Rick Perry has proposed (as a deliberately provocative target) that the state's (public) universities should be set up so that a student can get a bachelor's degree for $10,000 total (including the cost of books).  Hey, I'm all for moon shot-type challenges, but there is something to be said for thinking hard about what you're suggesting.  This plan (which would set costs per student cheaper than nearly all community colleges, by the way) is not well thought out at all, which is completely unsurprising.  The handwaving argument is that, to do this, professors should maximize online content for distance learning, and papers could be graded by graduate students or (apparently very cheaply hired) instructors.  Even then, it's not clear that you could pull this off.  Let me put it this way:  I can argue that the world would benefit greatly from a solar electric car that costs $1,000, but that doesn't mean that one you'd want to own can actually be produced in an economically sustainable way at that price.  This is classic Perry, though.

Sunday, February 13, 2011

Battle hymn of the Tiger Professor

Like Amy Chua, I'm choosing to be deliberately provocative in what I write below, though unlike her I don't have a book to sell. I recently heard a talk where a well-reputed science educator (not naming names) argued that those of us teaching undergraduates need to adapt to the learning habits of "millennials". That is, these are a group of people who have literally grown up with Google (a thought that makes me feel very old, since I went to grad school w/ Sergey Brin) - they are used to having knowledge (in the form of facts) at their fingertips in a fraction of a second. They are used to nearly continuous social networking, instantaneous communication, and constant multitasking (or, as a more stodgy person might put it, complete distraction, attention-deficit behavior, and a chronic inability to concentrate). This academic argued that we need to make science education mimic real research if we want to produce researchers and get students jazzed about science. Moreover, this academic argued that making students listen to lectures and do problem sets was (a) ineffective, since that's not how they were geared to learn, and (b) somewhere between useless and abusive, being slavishly ruled by a culture of "covering material" without actually educating. Somehow we should be more in tune with how millennials learn, and appeal to that, rather than being stodgy fogies who force dull, repetitious "exercises at the end of the chapter" work.

While appealing to students' learning modalities has its place, I contend that this concept simply will not work well in some introductory, foundational classes in the sciences, math, and engineering. Physical science (chemistry, physics) and math are inherently hierarchical. You simply cannot learn more advanced material without mastery of the underpinnings. Moreover, in the case of physics (with which I am most familiar), we're not just teaching facts (which can indeed be looked up easily on the internet); we're supposedly teaching analytical skills - how to think like a physicist; how to take a physical situation and translate it into math that enables us to solve for what we care about in terms of what we know. Getting good at this simply requires practice. To take the Amy Chua analogy, hard work is necessary and playdates are not. There literally is no substitute for doing problems and getting used to thinking this way. While open-ended reasoning exercises can be fun and useful (and could be a great addition to the standard curriculum, or perhaps a way to run a lab class to be more like real research), at some point students actually do need to become proficient in basic problem-solving skills. I really don't like the underlying assumption that this educator was making: that the twitter/facebook/short-attention-span approach is unavoidable and possibly superior to focused hard work. Hey, I'm part of the distractible culture as much as anyone in the 21st century, but you'll have to work hard to convince me that it's the right way to teach foundational knowledge in physics, math, and chemistry.

Wednesday, February 09, 2011

Science and the nation

(The US, that is.) More people need to read this.

Sunday, February 06, 2011

Triboelectricity and enduring mysteries of physics

This past week I hosted Seth Putterman for a physics colloquium here at Rice, and one of the things he talked about is some of his group's work related to triboelectricity, or the generation of charge separation by friction/rubbing.  When you think about it, it's quite amazing that we have no first-principles explanation of a phenomenon we're all shown literally as children (rub a balloon on your hair and it builds up enough "static" charge that it will stick to a plaster wall, unless you live in a very humid place like Houston).  The amount of charge that may be moved is on the order of 10^12 electrons per square cm, and the resulting potential differences can measure in the tens of kilovolts (!), leading to remarkable observations like the generation of x-rays from peeling tape, or UV and x-ray emission from a mercury meniscus moving along a glass surface.  In fact, there's still some disagreement about whether the charge moving in some triboelectric experiments is electrons or ions!  Wild stuff.
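
As a sanity check on those numbers (my own back-of-envelope estimate, not anything from the talk), treating the separated charge as a parallel-plate capacitor gets you to tens of kilovolts with very modest assumptions:

```python
# Back-of-envelope: surface charge density of ~10^12 electrons/cm^2,
# parallel-plate geometry, assumed 0.1 mm separation.  All numbers are
# illustrative.
e = 1.602e-19      # electron charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m

sigma = 1e12 * e * 1e4   # surface charge density, C/m^2 (1e4 cm^2 per m^2)
E_field = sigma / eps0   # field between the surfaces, V/m
d = 1e-4                 # assumed separation, m
print(f"E ~ {E_field:.1e} V/m, V ~ {E_field * d / 1e3:.0f} kV")  # ~1.8e8 V/m, ~18 kV
```

That field is also far beyond the ~3 MV/m breakdown strength of air, which is why all that separated charge so readily announces itself as sparks.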