Thursday, July 14, 2011

Science and the public

I couldn't help but notice that one of my favorite producers of animated films, Aardman Animation, is coming out with a new movie (trailer here). I find it very interesting that the UK version of the movie is "The Pirates! In an Adventure with Scientists!", while the US version is "The Pirates! Band of Misfits!". The film is based on a book with the former title, by the way. I don't want to overanalyze this, but it's hard to escape the conclusion that some marketing drone decided that "scientist" is box-office poison, and that "misfit" is an acceptable and more marketable substitute in the US. Great. Wonderful. In case you're wondering, Charles Darwin shows up as a character in the book/movie. I imagine that the US ads won't be playing that up very much, or there will be protests. Sigh.

(I do have a science post I'll make shortly. I just couldn't let this pass w/o comment. And it's taking enormous self-restraint not to launch into extended political invective about the US, but there are many places where people can read that if they want to.)

Thursday, July 07, 2011

Follow-up, and blogger drop-off

Regarding the story mentioned here, Nature has published both a provocative and interesting article by Eugenie Reich about the larger issues raised, and an editorial. Sorry that these are behind a pay-wall. To summarize in a few sentences: Reich points out that the misconduct investigation relevant to this discussion highlights important problems with the US Department of Energy's handling of such cases. To wit: there are issues with the independence and chain of authority of the investigators, and with the lack of proper record keeping and documentation of investigation reports. The conclusion is that this is a powerful argument for the DOE to establish an Office of Research Integrity, like those in some other agencies. The editorial from Nature chastises the DOE along these lines. Interestingly, the Nature editorial makes no mention at all of the journal's own role in declining to publish technical comments relevant to this particular matter.

In blogging news, there has been a drop-off in the number of active physical science bloggers. David Bacon's Quantum Pontiff has decohered. The Incoherent Ponderer has gone so far as to apparently delete his entire blog and blogger profile. Other blogs have not been updated in many months. It's likely that this is all part of a natural stabilization of blogging - people run out of things to say, and the novelty of blogging has worn off. It will be interesting to see how this trend resolves. It'll be a shame to have fewer interesting voices to follow, though. (Clearly we should all switch to Twitter, since 140 characters should be more than sufficient to carry out detailed science discussions or popularizations for the lay audience. Ahem.)

Tuesday, July 05, 2011

Crowd-sourcing, video games, and the world's problems

This past weekend, I caught a snippet of a rebroadcast of this NPR story about Jane McGonigal and the thesis of her recent book. In short, she points out that as a species we have spent literally millions of person-years playing World of Warcraft, an online game that involves teamwork and puzzle-solving (as well as all the usual fun silliness of videogames). Her point is that in the game environment, people have demonstrated great creativity as well as a willingness to keep coming back, over and over, to tackle challenging problems (in part because players recognize that problems are pitched at a level that is tricky but not insurmountable). She wants to harness this kind of intellectual output for good, rather than just have it as a social (or antisocial) outlet. She's not the first person to have this sort of idea, of course (see, e.g., Ender's Game, or the Timothy Zahn short story "The Challenge"), but the WoW numbers are truly eye-popping.

It would be great if there were certain scientific problems to which this could be applied. The overall concept seems easiest to adapt to logistics (e.g., coming up with clever ways of routing shipping containers or disaster relief supplies), since that's a puzzle-solving subdiscipline where the basic problems are at least accessible to lay-people. Trying this with meaty scientific challenges would be much more difficult, unless those challenges could be translated effectively into problems that don't require years and years of background knowledge. Hmm. Still very thought-provoking.

Friday, July 01, 2011

The tyranny of the buried interface

Time and again, a major impediment to research progress in condensed matter physics, electrical engineering, materials science, and physical chemistry is the need to understand what is happening in some system at a buried interface. For example, in organic photovoltaic devices, it is of great importance to learn more about what is happening at metal/organic semiconductor interfaces (charge transfer, interfacial dipole formation, Fermi level pinning) and organic/organic interfaces (exciton splitting at the interface between electron- and hole-transporting materials). Another example: in lithium ion batteries, after the first couple of charge and discharge cycles, the "solid electrolyte interphase" (SEI) layer forms at the interface between each electrode and the electrolyte. The SEI is nanoscale in thickness, stabilizes the electrode surface, establishes the energetic lineup between the electrolyte redox chemistry and the actual electrode surface, strongly affects the kinetics of the lithium ion transport, etc.

Unfortunately, probing buried interfaces in situ in functioning systems is extremely hard. There generally is no Star Trek scanner device that can nondestructively reveal atomic-scale details of buried 3d structures. Many of our best characterization approaches are surface-based, or require thinned down samples, and there are always difficult questions about how information gained in such investigations translates to the real situation of interest. This is not a new problem. From the early days of surface science and before, people have been worrying about, e.g., how to connect studies performed in UHV on single crystal surfaces with "real world" situations on polycrystalline surfaces with ambient contaminants. There are some macro-scale interface sensitive approaches (exploiting x-ray standing waves, or interfacial optical effects). Still, the more people working on developing better characterization tools toward this end, the better, even if it doesn't sound terribly exciting to the masses.

Thursday, June 23, 2011

a recurring story

Five years ago, there was a controversy in the pages of Nature regarding this paper from 1993, the first to claim atomic-resolution chemical analysis via scanning transmission electron microscopy.  At issue was whether the data in the paper had been reprocessed (in response to referee concerns) in a legitimate or misrepresentative way, and whether the authors had been honest and forthcoming with the journal and the reviewers about the procedures they'd followed.  The reason that matters came to a head more than 12 years after the original paper was the appearance on the arxiv of a preprint (subsequently submitted to Nature Physics), sharing two authors with the original paper, that raised further questions about the handling and analysis of data and images.  This was all discussed clearly and succinctly by ZZ at the time.  Nature allowed the authors to publish a corrigendum - a correction rather than a retraction - regarding the original '93 paper.  This was sufficiently controversial that Nature felt the need to write an editorial explaining their decision.  Oak Ridge did an investigation of the matter, and concluded that there was no fabrication or falsification of data; that report and a response by the authors are linked here.  Judging from the appearance of this on the arxiv last night, it would appear that this isn't quite the end of things.

Wednesday, June 15, 2011

Pitch for a tv show

Summer blogging has been and will continue to be light, as I try to get some professional writing done. In the meantime, though, I have to give my elevator pitch for a new TV show that would be great fun. It's "Chopped" meets "Mythbusters" meets "Scrap Heap Challenge"/"Junkyard Wars". Start off with three teams. Give them a physics- or engineering-related task that they have to accomplish (e.g., write the opening crawl from Star Wars in one mm^2; weigh a single grain of salt), some number of tools that they have to use (e.g., a green laser pointer and an infrared-corrected microscope objective), and access to a stocked "pantry" (including a PC, electronics components, etc.). Give them a time limit (4 hours, cleverly edited down to half an hour for broadcast). Points are awarded for success at the task, time used, and elegance. I think it could be a hit, particularly if there are explanations (narrated by cool resident experts) delivered in a fun, accessible tone. It'd be fun, even if it did conjure up images of Guy Fleegman in Galaxy Quest.

Monday, June 06, 2011

Soliciting book or review article recommendations

I am interested in reading good books or review articles on two particular topics, and I'm hoping that by "crowd-sourcing" to my readership, I might do better than wandering through the literature.  First, I want to find an authoritative discussion of the physics behind the electrochemical potentials of battery materials - not the lore of decades of electrochemistry, but a real hashing out of the physics.  Second, I would like to find a thorough, authoritative discussion of the physics behind catalysis.  Again, I'm not interested in handwaves and parametrized empirical knowledge, but would prefer a physics-based discussion that explains, e.g., why Pd is good at splitting H2, while Ti is not.  Any help would be greatly appreciated.

Sunday, June 05, 2011

Several items

I returned late last week from Germany, where I spoke at a summer school. One fun part of the trip was a tour of the main experimental facility at the neighboring Max Planck Institute for the Chemical Physics of Solids. The facility was a large high-bay lab space, with 9 (!) dilution refrigerator apparatuses, as well as a 0.3 K scanning tunneling microscope with a 12 T magnet. Very impressive infrastructure, and the place was neat as a pin - the very model of a lab. Note to self: figure out how to instill Germanic ultraprecise lab notebook habits in all incoming grad students....

Other news this week that is interesting: the US National Academies have decided to make many of their books available for pdf download free of charge. I'm a particular fan of one or two of these. For example, with reference to recent discussions about helium as a resource, check this out.

There is also a great deal of attention being paid to a paper from this week's Science by the group of Aephraim Steinberg. The experiment sends single photons one at a time through a two-slit type apparatus. This is one of those experiments meant to blow the minds of undergrad physics majors taking quantum for the first time: you still build up an interference pattern from the slits, even though there's only one photon in there at a time. That means the photon must be interfering with itself(!). In the new work, the group uses optics techniques (that I freely admit I do not fully understand) to correlate, after the fact, the ("weakly" measured) momentum of the photon while in the apparatus with the (strongly measured) final position of the photon on a CCD. This does not violate the uncertainty relation, since it basically finds a quantum mechanical ensemble average of the momentum as a function of final position. Still, very neat, and discussed in some detail here and here.

I've liked Steinberg's work for years. This business about quantum measurement and post-selection is very fun to think about. For example, this comes up when considering the question, "how long does it take a quantum particle to tunnel through a classically forbidden region?". What you're basically asking is, given the successful measurement of a quantum particle at some position beyond the classically forbidden region, when did the particle, in the past, impinge upon that region in the first place? This is a very hard question to answer experimentally.

Friday, May 27, 2011

Recently in the arxiv

As I get ready to head to Germany for my first ever experience lecturing at a Max Planck summer school, I wanted to point out very briefly three of a number of interesting papers that came through the arxiv this week.

arxiv:1105.4055 - Janssen et al., Graphene, universality of the quantum Hall effect and re-definition of the SI
This paper compares the quantization of the Hall resistance in two different two-dimensional electronic systems: a conventional 2d electron gas in a GaAs/AlGaAs structure, and graphene. The authors find that the Hall resistance is quantized in units of h/e^2 identically in the two systems to parts in 10^11. On the one hand, this is really amazing, since you're seeing essentially exact quantization in two different systems, and the whole basis for the quantum Hall effect relies in part on dirt - without disorder, you wouldn't see the quantum Hall physics. And yet, even though the materials differ and dirt plays an important role, you get precise quantization in terms of fundamental constants. This is the kind of emergent, exact phenomenon that shows the profound character of condensed matter physics.
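
To put "parts in 10^11" in perspective: the quantized resistance itself is the von Klitzing constant h/e^2, about 25.8 kilohms. A trivial sketch, using CODATA values for h and e (purely illustrative):

    # Quantized Hall resistance ("von Klitzing constant") R_K = h/e^2.
    h = 6.62607015e-34   # Planck constant, J*s
    e = 1.602176634e-19  # elementary charge, C

    R_K = h / e**2
    print(f"R_K = h/e^2 = {R_K:.6f} ohms")             # ~25812.807 ohms
    # Agreement at the parts-in-10^11 level means the GaAs and graphene
    # plateaus differ by less than a microohm out of ~26 kilohms:
    print(f"1 part in 10^11 of R_K = {R_K * 1e-11:.1e} ohms")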

arxiv:1105.4642 - Barends et al., Loss and decoherence due to stray infrared light in superconducting quantum circuits
As someone who struggled mightily in grad school to keep infinitesimal amounts of rf noise from leaking into his ultracold sample, I was impressed by this. The authors demonstrate that infrared radiation from the surroundings, even when those surroundings are at 4.2 K, can have a marked, detectable impact on the coherence properties of superconducting quantum bits. They compare results with and without an absorbing radiation shield in the way, and the effects aren't small. Wild. Time to break out those 50 mK shields from our old nuclear demag cryostat....

arxiv:1105.4652 - Paik et al., How coherent are Josephson junctions?
Along these same lines, these authors have been able to demonstrate coherence times in superconducting qubits that stretch into the tens of microseconds. They do this via a new kind of cavity, essentially controlling the environmental dissipation. This isn't really my area, but I know enough to be impressed, and also to be surprised at the apparent lack of the usually ubiquitous 1/f noise problems (in the critical current) that often limit coherence in these kinds of devices. As they point out, these numbers are encouragingly close to the thresholds needed for quantum error correction to be realistic.

Friday, May 20, 2011

Nano for batteries

Improved batteries would be of enormous benefit and utility in many sectors of technology.  A factor of 10 improvement in battery capacity (with good charging rate, safety, etc.) would mean electric cars that get 1000 miles per charge, laptops that run for days w/o charging, electrical storage to help with the use of renewable energy, and a host of other changes.  Performance improvements on this scale are completely commonplace in semiconductor electronics and magnetic data storage, yet batteries have lagged far, far behind.

There is real hope that nanostructured materials can help in this area.  Three examples illustrate this well.  Conventional lithium ion batteries have an anode (usually graphitic carbon, into which lithium ions may be intercalated) and a cathode (such as cobalt oxide), with an intervening electrolyte, and a separator barrier to prevent the two sides from shorting together.  A reasonable figure of merit is the capacity of the electrodes, in units of mA-h/g.  The materials described above, anode and cathode, have capacities on the order of 200-300 mA-h/g.  It is known that silicon can take up even more lithium than carbon, with a possible capacity of more than 3000 mA-h/g (!).  Complicating matters, Si swells dramatically when taking in Li, meaning that bulk single-crystal Si cracks and self-pulverizes when taken through a few charge/discharge cycles.  However, Si nanowires have been observed to be much better behaved - they have large specific surface area, and have enough free surface to swell and shrink without destroying themselves - see here.  Very recently, this paper has spectacular electron micrographs of the swelling of such nanowires.
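
As a sanity check on those capacity numbers, the theoretical gravimetric capacity follows directly from Faraday's constant.  A minimal sketch, assuming the standard textbook lithiation limits (LiC6 for graphite, roughly Li3.75Si for silicon at room temperature - those stoichiometries are assumptions, not taken from the papers above):

    # Theoretical capacity (mA-h/g) = (Li ions per formula unit) * F / M / 3.6,
    # where F is Faraday's constant, M is the host molar mass, and 1 mA-h = 3.6 C.
    F = 96485.0  # C/mol

    def capacity_mAh_per_g(x_li, molar_mass_g_per_mol):
        return x_li * F / molar_mass_g_per_mol / 3.6

    print(capacity_mAh_per_g(1.0, 6 * 12.011))  # graphite, LiC6: ~372 mA-h/g
    print(capacity_mAh_per_g(3.75, 28.09))      # silicon, Li3.75Si: ~3580 mA-h/g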

A second example:  nanostructured cobalt oxide particles, self-assembled using selectively modified virus proteins, have been put forward as high capacity Li ion battery cathodes.  This approach has also been extended to iron phosphate cathode material.

A third example:  dramatically improved charging rates may be possible using nanostructured electrode geometries, such as these inverse-opal shapes.

In short, nanostructured materials may enable true breakthroughs in battery technology, even though batteries have been studied exhaustively for many decades.  The ability to engineer materials at previously inaccessible scales may bear fruit soon.

Monday, May 16, 2011

Rice University clean room manager needed.

Just in case anyone out there has or is a promising candidate, I wanted to point out that Rice University is looking for a new clean room facilities manager.  (This is not a soft money position.)  Here is the text of the advertisement:

Rice University is seeking a technical manager to oversee the operations of its clean room user facility and associated characterization equipment.  This Class 100/1000 facility contains a suite of instruments, including a photolithography mask maker, a contact mask aligner, an e-beam evaporator, an RIE/PECVD system, and a collection of characterization tools.  The manager’s responsibilities include oversight of this facility, training of undergraduate and graduate students and other users, and maintenance and upkeep of the equipment.  Applicants must have a BS degree in a science or engineering discipline (PhD preferred but not required) and extensive experience with several of the relevant instruments, or a related technical degree or diploma with an additional 2 years of related experience (for a total of 7 years of related experience working with clean room instruments).  Salary will be commensurate with experience.  The need to fill this position is immediate, and resumes will be examined as they arrive.  Please visit http://cohesion.rice.edu/campusservices/humanresources/riceworks.cfm to apply for this listing.  Rice University is an equal opportunity, affirmative action employer.

Friday, May 13, 2011

A university selling its soul

I'll get back to physics shortly.  These two articles (here and here) explain how, in exchange for $1.5M in donations, the Florida State economics department agreed to give the donors veto power over faculty hiring for the donor-supported positions.  Moreover, the donors can withdraw the positions if they aren't happy with annual performance reviews of the professors.  Wow.  I know times are tight, but FSU has clearly decided that they're up for bid.  I don't care whether the donors are right-wing or left-wing (hint:  they're right wing) or centrist - a university that allows donors direct control over faculty hiring and evaluation is out of its mind.  Gee, you think those professors are going to be free to do whatever research they want?  Do you think there's going to be pressure on all of the faculty within the department to toe the line rather than risk angering the donors?  What a mess.  Well, at least it confirms that Texas doesn't have a monopoly on idiocy.

Update:  blogger ate this post, and I had to reconstitute it from the cached version on bing (google blew this one all the way around).  Clearly the Koch brothers are responsible :-)

Thursday, May 05, 2011

Gravity Probe B

Finally, after only 45 years from conception to publication of results, Gravity Probe B has announced (dramatic pause) that Einstein's General Theory of Relativity is consistent with their data.  I had mentioned GPB ("The Project that Ate Stanford") once before.  It was a fascinating, complex, multidisciplinary project that, thanks to its experimental design and extraordinarily long duration, had great impact on a large number of physics, materials science, and engineering careers.  Still, I think they were in a bit of a no-win scenario, particularly once it became clear that there were problems with interpreting the data.  Either the data would support general relativity, or people just wouldn't trust the results, given how much other evidence there is out there that GR is right, at least in the relatively weak-field limit.

Nano for solar

Sorry about the delay in this posting.  Real life has been busy.

Solar energy is an obvious candidate for a long-term solution to many of our energy problems.  The amount of power reaching the surface of the earth is on the order of 350 W/m^2.  We could meet the world's projected energy needs in 2030 by covering around 250 km by 250 km with 10% efficient solar cells.  Unfortunately, the total surface area of all photovoltaics ever manufactured is less than 0.1% of that.  (This is why being able to produce photovoltaic cells by printing processes would be great.  Hint:  estimate the total area printed by the New York Times in a month.)  There are a number of challenges involved in solar.  Why might "nano" broadly defined be a big help?  Let me give three examples from the large wealth of ideas out there.
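
(Before those examples, a quick aside: here is the printing hint worked as a Fermi estimate.  Every input below is a rough guess, purely for orders of magnitude.)

    # Fermi estimate: total area printed by a large daily newspaper in a month.
    copies_per_day = 1e6          # assumed daily circulation
    pages_per_copy = 100          # assumed broadsheet page count
    page_area_m2 = 0.38 * 0.58    # one broadsheet page, ~0.22 m^2

    area_km2_per_month = copies_per_day * pages_per_copy * page_area_m2 * 30 / 1e6
    print(f"~{area_km2_per_month:.0f} km^2 per month")  # several hundred km^2
    # Compare: 250 km x 250 km of solar cells is 62,500 km^2, i.e., on the order
    # of a decade of one newspaper's printed output.  Printing is fast.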

1) Semiconductor nanocrystals as absorbers.  Because of the beauty of quantum confinement, it is possible to make semiconductor nanocrystals out of a single material, and use different sizes to capture different parts of the solar spectrum.  Moreover, there is evidence (after some controversy) that nanocrystals may enhance "multiexciton generation" (e.g., here and here).  In a traditional solar cell, a photon with energy twice as large as the semiconductor band gap will generate an electron-hole pair (which must be ripped apart somehow), and inelastic processes will lead to the excess (above the band gap) energy being lost as heat.  However, at some rate, you can instead generate two band-gap-energy pairs.  The idea is that the rate of that process can be enhanced in nanocrystals, since conservation of "crystal momentum" can be relaxed in materials that are so surface-dominated.

2) Nanostructured materials for photoelectrochemical cells.  There are a number of proposals for using electrolytes in solar applications, including dye-sensitized solar cells.  In this case, one would like to use a high surface area anode, such as nanostructured TiO2 or some similar nanostructured material.  Moreover, instead of using organic dyes as the absorbers and sources of photoexcited electrons, one could imagine again using semiconductor nanocrystals.

3) Plasmon-enhanced photovoltaics.  One way to try to boost the efficiency of solar cells is to get the light to hang around the absorber material for longer.  One compact way to do so is to use plasmonically active metal nanoparticles or nanostructures as optical antennas.  The local fields near these structures can enhance scattering and local intensity in ways that tend to boost performance, though resistive losses in the metal may limit their effectiveness.  It's worth pointing out that one can also use plasmonic antennas as sources of hot electrons, also interesting from the photovoltaic angle.

There are many more ideas out there - I haven't even mentioned anything about nanotubes or graphene.  While the odds of any individual idea being a truly transformative breakthrough are small, there are probably more clever things being proposed in this area now than at any time before, thanks to our ability to manipulate matter on very small scales.

Wednesday, April 27, 2011

Nano and energy

It might be fun to do a few posts on how nanoscale science can be used to the benefit of our energy concerns.  First, let me specify what I mean when I say that there's an "energy problem".  The fact is, average people enjoying first-world standards of living (e.g., US/Canada/Western Europe/Japan) have an enormous per capita energy consumption compared to, e.g., tribesmen in sub-Saharan Africa, or rural farmers in the hinterland of China.  If the goal is to raise the 5-ish billion people not currently enjoying the high life up to a high standard of living, then we've got a problem:  there's no nice way to do so without incurring other enormous costs (e.g., burning enormous quantities of fossil fuels; building GW-scale power plants at very high rates, like several per day for the next 30 years).  Either we're not going to raise that standard of living for those billions of people, or the energy costs for the top economic tier are going to have to fall, or we're headed for major upheaval (or possibly some of all of the above).

When I teach my second-semester nano class, I point this out, and if you want interesting quantitative references, check here.  Broadly construed, nanotechnology and nanoscale science (and more broadly, condensed matter physics and materials science) can try to address several aspects of this challenge, though there are certainly no silver bullets.  The areas that come to mind are:  energy generation; energy storage; energy distribution; conservation or improved efficiency; and environmental remediation.  In future posts, I'll try to summarize very briefly a few thoughts on this.   

Saturday, April 23, 2011

Public funding of science, and access to information

On multiple blogs over the last few months, I've read comments from lay-persons (that is, nonscientists) that say, in essence, "As a citizen, I paid for this research, and therefore I should have access to all the data and all the software necessary to analyze that data."  The implications are (1) research funded by the public should be publicly accessible; and (2) the researchers themselves sometimes/often? hold back information or misinterpret the results, perhaps because they are biased and have an agenda to further.  

Now, as a pragmatist, I see a number of issues here.  For example, making available raw columns of tab-delimited numerical data and, e.g., matlab code, won't give a nonscientist the technical know-how to do analysis properly, or to know what models to apply, etc.  Things really get tricky if the "data" consists of physical samples (e.g., soil, or ice cores, or zebrafish)....  Yes, scientists who are publicly funded have the responsibility to make their research results available to the public, and to explain those results and their analysis.  But as a practical matter, scientists are not obligated to make every interested citizen into an expert on their research.

While this is an interesting topic, I'd rather discuss a related issue:  How much public funding triggers the need to make something publicly available?  For example, suppose I used NSF funding to buy a coaxial cable for $5 as part of project A.  Then, later on, I use that coax in project B, which is funded at the $100K level by a non-public source.  I don't think any reasonable person would then argue that all of project B's results should become public domain because of 0.005% public support.  When does the obligation kick in?  Just an idle thought on a Saturday morning.

Tuesday, April 19, 2011

Friction, commensurability, and superlubricity

In the limit of clean surfaces, friction has its origins in the microscopic, chemical interactions at the interface between the two objects in question.  One of the more amazing (to me, anyway) consequences of this is the extremely important role played by commensurability between the surfaces.  Let me explain with an example.  Consider a gold crystal terminated at the (111) surface, and another gold crystal also terminated at the (111) surface.  Now, if those two surfaces are brought into contact, with the right orientation so that they match up as if they were two adjacent layers of atoms inside a larger gold crystal, what will happen?  The answer is, in the absence of adsorbed contaminants, the surfaces will stick.  This is called "cold welding".  In contrast, if you bring together two ultraclean surfaces that are incommensurate, they can slide past each other with essentially no friction.  This is called "superlubricity".  Here are two great examples (pdf of first one; pdf of second one) of this.
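
A rigid-slider toy model makes the commensurability point quantitative: give each atom of the slider a sinusoidal substrate potential and ask how the total corrugation (the energy barrier to sliding) scales with the number of atoms.  A deliberately crude sketch, in arbitrary units (nothing gold-specific here):

    import numpy as np

    # Rigid chain of N atoms with spacing a on a substrate potential of period b:
    #   U(x) = sum_n cos(2*pi*(x + n*a)/b),  x = chain position in units of b.
    # The sliding barrier is the peak-to-peak amplitude of U(x).
    def corrugation(a_over_b, N):
        x = np.linspace(0.0, 1.0, 2000)
        n = np.arange(N)
        U = np.cos(2 * np.pi * (x[:, None] + n * a_over_b)).sum(axis=1)
        return U.max() - U.min()

    for N in [10, 100, 1000]:
        commensurate = corrugation(1.0, N)                      # a = b
        incommensurate = corrugation((1 + np.sqrt(5)) / 2, N)   # golden-ratio a/b
        print(N, commensurate, incommensurate)

    # Commensurate: the barrier grows linearly with N (sticking/cold welding).
    # Incommensurate: the per-atom forces nearly cancel and the barrier stays O(1),
    # so the friction per atom vanishes as N grows - superlubricity.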

In this new paper, Liu et al. are able to do some very cute experiments in this regard, looking at the motion of thin graphite flakes exfoliated from, and sliding on, graphite pedestals.  It's clear from the observations that graphite flakes shifted relative to the underlying graphite substrate can slide essentially frictionlessly over micron scales.  Very neat and elegant, and surprising since there isn't any rotation at work here to break commensurability.  This is a very firm reminder that our macroscale physical intuition about materials and their interactions can fail badly at the nanoscale.

Tuesday, April 12, 2011

Playing chicken with the global economy

I get it - we need to fix the structural problems associated with the US budget.  However, don't these geniuses realize that threatening to default (let alone actually defaulting) on the US sovereign debt will severely undermine the dollar?  It's like they actually want to have hyperinflation, so that they can claim it was all Obama's fault.  Other countries don't have a  "debt ceiling", you know.  Update:  seems I'm not alone in realizing that even talking about default is dangerous.

Monday, April 11, 2011

Choosing a postdoctoral position

I had a request a while ago for a post about how to choose a postdoctoral position (from the point of view of a finishing-up grad student, I'm assuming).  This is a tricky topic, precisely because it's somewhere between choosing a grad school (lots of good places to go, with guaranteed open positions every year) and getting a faculty job (many fewer open positions per year in a given field, and therefore a much more restricted field of play; plus, a critical need to make some hard decisions that could be postponed or avoided in grad school).  Moreover, different disciplines within the physical sciences have very different approaches to postdocs.  In some fields like astronomy, externally funded fellowships sponsored by observatories/facilities/programs are standard practice, while condensed matter physics is much more principal-investigator-driven.  So, I'll try to stick to general points.
  • I strongly suggest going somewhere that is not your graduate institution, unless there are strong extenuating circumstances.  It's just intellectually healthier to get a broad exposure to what is out there, rather than to stay entirely comfortable.
  • This is also one of the relatively few points in your career when you can really shift gears, if you are so motivated.  My doctorate was in ultralow temperature physics, but I decided to become a nano researcher, for example.  More dramatically, this is often the point where many people get into interdisciplinary fields like biophysics.  There are trade-offs, of course.  If you do a postdoc in an area very close to your thesis work, you can often make rapid progress.  On the other hand, most people who go on in research (industrial or academic) do not end up working on their thesis topic for the lion's share of their career, and this is a chance to broaden your skill set and knowledge base.
  • Word of mouth and self-motivation are essential to getting a good postdoc position, beyond posted ads.  If you're finishing up in grad school, you are enough of a professional that you should be able to email or otherwise contact people whose work you find interesting and exciting, and ask whether they have any postdoctoral openings.  You should make sure that these emails are reasonably detailed and that it's clear they're personalized - not a form letter being spammed to several hundred generic faculty members simultaneously.  Your hit rate won't be high, but it's better than nothing.
  • Don't discount industry, though it's a narrowing field.  There are still industrial postdoc positions, and if you've got an interest in industry more so than academia, then you should look at these possibilities.  This includes places like Bell Labs (yes, they still exist), IBM, Intel, HP Labs, etc.  It is a tragedy that there aren't more opportunities like this out there now.
  • You need to think about how a particular postdoc position is structured.  Are you going to be acting as middle-management, helping to mentor a team of grad and undergrad students?  Are you going to be leading a research project yourself?  Is there a lot of lab-building or lab-moving?  How long is the position, and how does it match up w/ the seasonal nature of academic hiring, if academia is what you want to do?  Where have previous postdocs in that lab or group ended up?
  • How set are you on academia?  If you are set on academia, what kind of academic position would make you happy?  Go into the academic track with your eyes open!  If you're looking beyond academia, what do you need out of a postdoc position (besides a paycheck)?  Are there particular skills you want to learn?
None of this is particularly insightful, but it doesn't hurt to have this written down in one place.  Suggestions for further things to consider are invited in the comments....

Tuesday, April 05, 2011

Designing a lab

Designing a lab is not trivial, particularly if you have no prior experience doing it.  My new lab (day 2 of the move....) came about under perhaps the ideal circumstances:  a new building was being constructed, and I had a very free hand in determining the layout, the facilities, and so forth.  Even so, in any realistic process you never get everything you want (e.g., this building does not have a building-wide deionized water system; I can't have unlimited space; there are restrictions based on cost and feasibility).  The challenge is to end up with functional space - laid out intelligently, so that work flows well and you don't find yourself fighting with the building or yourselves.  Sometimes this is not simple.  In my original lab space, for example, that floor of the building was never designed with vibration-sensitive work in mind.  The need to position certain pieces of equipment on the vibrationally quiet parts of the floor strongly influenced lab layout, rather than basic experimental logic.

Lab design ranges from the Big Picture (e.g., I have a couple of optics tables, so I should probably have a separate area with independently controlled lighting; I want isolation transformers to keep my sensitive measurement electronics off the power lines used for my big pumps.) to a zillion little details (e.g., where should every single electrical outlet and ethernet port be positioned?  What about emergency power?  Gas lines?  What fittings are going to be on the chilled water lines?).  Nothing is ever perfect, and there are always minor glitches (e.g., mislabeled circuit breakers).  You also want to design for the future.  If you think you're eventually going to need a gizmo that requires chilled water or a certain amount of 480V current, it's better to plan ahead, cost permitting....  The situation is definitely more constrained if you're moving into pre-existing space, particularly in an older building.  Like many aspects of being a professor, this is something that no one ever sits down and teaches you.  Rather, you're left to figure it out, hopefully with the help of a professional.

Monday, April 04, 2011

Moving the lab

Today's the beginning of moving my lab into the new Brockman Hall for Physics here at Rice.  As the week goes on, if I have time I'll write a bit about the process of lab design and the joys of moving equipment.  It's exciting, but there's no question that I wish we could skip over the actual transition.

Sunday, March 27, 2011

Blogger spam + McEuen novel

Two unrelated topics.  First, blogger needs to get their act together regarding comment spam.  They have some attempt at automatic spam detection, but it's clear that in the last two or three weeks people have figured out how to evade their blocking algorithm.  The spam comments quote some fragment of the original blog post or a previous comment, and then have a clickable username that links to some shady vendor website.  Very annoying.  I'd really rather not shift to a moderated comments approach, but I may have to if this keeps up.

Second, I was very surprised last weekend when reading the Wall Street Journal and coming upon an article about Paul McEuen (author link, physicist link), who has apparently written a very successful novel.  As one of my friends exclaimed when I told her the news at the APS meeting:  come on, Paul - you're again making the rest of us look like lazy underachievers!  I'm going to have to get this on kindle....

Thursday, March 24, 2011

March Meeting, further thoughts

I had to cut my March Meeting a bit short this year, to get back to Rice in time for the dedication of our new Brockman Hall for Physics.  Still, a few more thoughts from the APS meeting:
  • Michelle Simmons gave a terrific talk summarizing the work, over more than a decade, of her group at the University of New South Wales on their progress toward their eventual goal of building a quantum computer based on P donors in Si (the Kane approach).  I knew of the work, but I'd never seen it all laid out like that, and it was impressive.  There are very few people out there in the CM community with the fortitude to plan out and pursue steadily a coherent, goal-directed research program over a dozen years.
  • I also saw something I hadn't observed in a number of years:  a speaker completely blowing off the 10 minute time limit on a contributed talk.  When the yellow warning light clicked on, it was clear that the speaker was nowhere near the end.  When the red light clicked on and started to blink, still no conclusion.  The session chair stood up and loomed intimidatingly.  No dice.  Finally the speaker ended after a total of about 16 minutes.  That takes nerve (and a lack of consideration for the others in the session....).
  • I chaired two sessions this year (note to self:  only chair one session....), and in contrast to the previous point, really didn't have any bad talks at all in there.  Very pleasant, generally.  Most interestingly, the session on the metal-insulator transition in vanadium oxide was 100% experimental talks!  Perhaps theorists have given up?  (kidding.)

Tuesday, March 22, 2011

2011 APS March Meeting, first thoughts

A few brief thoughts at the APS March Meeting (more later....) in Dallas:
  • First time I've ever been at a convention center with a graveyard adjacent to the building.  Quite a time saver if there are really bad talks, I suppose.
  • Frank Wilczek still gives a terrific talk about the connection between superconductivity and high energy physics.  Very droll, too.  He clearly has a strong aesthetic desire for supersymmetry, but just as clearly acknowledges that all of this could go up in smoke, depending on what the LHC finds.
  • The APS's attempt at a mobile app (for iPad, iPhone, etc.) is so painfully slow and incomplete (the lack of scheduling features I can almost understand, but how can you not list the room numbers for the sessions?) that it's better to use wireless internet access to visit the APS meeting website instead.
  • Roland Wiesendanger also presents an outstanding talk.  His group's accumulated work on spin-polarized STM is very impressive, and definitely made me feel an intense bout of "imaging envy" (in the sense that my group's work usually does not have beautiful 3d renders of data sets that grace the cover of glossy journals).
  • Lots of discussions with people about looming budget concerns, and separately the decline of science journalism.  On some level, these topics are related....

Sunday, March 13, 2011

Advice on choosing a graduate school

This is my 500th post (!), and I realized, after spending a big part of the last two days talking with prospective graduate students, that I had never written down my generic unsolicited advice about picking a graduate school. 
  • Always go someplace where there is more than one faculty member with whom you might want to work.  Even if you are 100% certain that you want to work with Prof. Smith, and that the feeling is mutual, you never know what could happen, in terms of money, circumstances, etc.  Moreover, in grad school you will learn a lot from your fellow students and other faculty.  An institution with many interesting things happening will be a more stimulating intellectual environment, and that's not a small issue.
  • It's ok at the applicant stage not to know exactly what you want to do.  While some prospective grad students are completely sure of their interests, that's more the exception than the rule.
  • If you get the opportunity to visit a school, you should go.  A visit gives you a chance to see a place, get a subconscious sense of the environment (a "gut" reaction), and most importantly, an opportunity to talk to current graduate students.  Always talk to current graduate students if you get the chance - they're the ones who really know the score.  A professor should always be able to make their work sound interesting, but grad students can tell you what a place is really like.
  • I know that picking an advisor and thesis area are major decisions, but it's important to realize that those decisions do not define you for the whole rest of your career.  I would guess (and if someone has real numbers on this, please post a comment) that the very large majority of science and engineering PhDs end up spending most of their careers working on topics and problems distinct from their theses.  Your eventual employer is most likely going to be paying for your ability to think critically, structure big problems into manageable smaller ones, and do research, rather than for the particular detailed technical knowledge from your doctoral thesis.  A personal anecdote:  I did my graduate work on the ultralow temperature properties of amorphous insulators.  I no longer work at ultralow temperatures, and I don't study glasses either; nonetheless, I learned a huge amount in grad school about the process of research that I apply all the time.
  • You should not go to grad school because you're not sure what else to do with yourself.  You should not go into research if you will only be satisfied by a Nobel Prize.  In both of those cases, you are likely to be unhappy during grad school.  
  • I know grad student stipends are low, believe me.  However, it's a bad idea to make a grad school decision based on a financial difference of a few hundred or a thousand dollars a year.  Different places have vastly different costs of living.  Pick a place for the right reasons.
  • Likewise, while everyone wants a pleasant environment, picking a grad school largely based on the weather is silly.
  • Pursue external fellowships if given the opportunity.  It's always nice to have your own money and not be tied strongly to the funding constraints of the faculty, if possible.
  • Be mindful of how departments and programs are run.  Is the program well organized?  What is a reasonable timetable for progress?  How are advisors selected, and when does that happen?  Who sets the stipends?  What are TA duties and expectations like?  Are there qualifying exams?  Know what you're getting into!
  • It's fine to try to communicate with professors at all stages of the process.  We'd much rather have you ask questions than the alternative.  If you don't get a quick response to an email, it's almost certainly due to busy-ness, and not a deeply meaningful decision by the faculty member.
There is no question that far more information is now available to would-be graduate students than at any time in the past.  Use it!  Look at departmental web pages, look at individual faculty member web pages.  Make an informed decision.  Good luck!

Wednesday, March 09, 2011

Blogging scarcity - tidbits.

My blogging has been sparse of late because of several colliding deadlines and constraints (NSF report due; review article due; APS meeting coming up; impending travel; visits of prospective graduate students; the ever-present book; teaching; the move of my whole lab to the new Brockman Hall for Physics).  This doesn't mean that there aren't interesting things going on out there in condensed matter physics (and physics in general) - just that I've been extraordinarily busy.

To tide you over, here are a handful of interesting links.

This is an amazing video made entirely from shots of Saturn and its moons taken by the Cassini spacecraft.  It looks like something out of Hollywood, but is a zillion times more fascinating because it's real - no cgi here.

This older preprint (I'll revise the link when the paper comes out in PRL next week) puts forward the argument that the pseudospin degree of freedom of electrons in graphene does actually correspond to a real half-integer angular momentum.  Surprising - I need to think about this more.

This experiment is extremely slick.  The authors are able to use the magnetic field gradient from a sharp magnetic scanned probe tip to interact w/ individual nitrogen vacancy centers in diamond (which have an unpaired electron spin).  This is basically magnetic resonance imaging of single electron spins.

This paper shows a clear implementation of an idea that is increasingly popular:  using the plasmon properties of metal nanostructures to enhance solar energy harvesting.  Essentially the evanescent optical fields from the metal nanoparticles trap the light near the interface where, in this case, the photochemistry is happening.

Saturday, February 26, 2011

Of gaps and pseudogaps

ZapperZ's recent post about new work on the pseudogap in high temperature superconductors has made me think about how to try to explain something like this to scientifically literate nonspecialists. Here's an attempt, starting from almost a high school chemistry angle. Chemists (and spectroscopists) like energy level diagrams. You know - like this one - where a horizontal line at a certain height indicates the existence of a particular (electronic) energy level for a system at some energy. The higher up the line, the higher the energy. In extended solid state systems, there are usually many, many levels. That means that an energy level diagram would have zillions of horizontal lines. These tend to group into bands, regions of energy with many energy levels, separated by gaps, regions of energy with no levels.

Let's take the simplest situation first, where the energies of those levels don't depend on how many electrons we actually have. This is equivalent to turning off the electron-electron interaction. The arrangement of atoms gives us some distribution of levels, and we just start filling it up (from the bottom up, if we care about the lowest energy states of the system; remember, electrons can be spin-up or spin-down, meaning that each (spatial state) level can in principle hold two electrons). There's some highest occupied level, and some lowest unoccupied level. We care about whether the highest occupied level is right up against an energy gap, because that drastically affects many things we can measure. If our filled up system is gapped, that means that the energetically cheapest (electronic) excitation of that system is the gap energy. Having gaps also restricts what processes can happen, since any quantum mechanical process has to take the system from some initial state to some final state. If there's no final state available that satisfies energy conservation, for example, the process can't happen. This means we can map out the gaps in the system by various spectroscopy experiments (e.g., photoemission; tunneling).

So, what happens in systems where the electron-electron interaction does matter a lot? In that case, you should think of the energy levels as rearranging and redistributing themselves depending on how many electrons are in the system. This all has to happen self-consistently. One particularly famous example of what can happen is the Mott insulating state. (Strictly speaking, I'm going to describe a version of this related to the Hubbard model.) Suppose there are N real-space sites, and N electrons to place in there. In the noninteracting case, the highest occupied level would not be near a gap - it would be in the middle of a band. Because the electrons could shuffle around in space without any energy cost for doubly occupying a site, the system would be a metal. However, suppose it costs an energy U to park two electrons on any site. The lowest energy state of the whole system would then have each of the N sites occupied by one electron, with an energy gap of roughly U separating that ground state from excitations that put an electron onto an already-occupied site. So, in the presence of strong interactions, at exactly "half-filling", you can end up with a gap. Even without this lattice site picture, in the presence of disorder, it's possible to see signs of the formation of a gap near the highest occupied level (for experts, in the weak disorder limit, this is the Altshuler-Aronov reduction in the density of states; in the strong disorder limit, it's the Efros-Shklovskii Coulomb gap).
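
To make the Mott gap concrete, here is a minimal exact-diagonalization sketch of the two-site Hubbard model (a toy, with illustrative parameters): compute the ground-state energy E(N) in each electron-number sector and form the charge gap E(3) + E(1) - 2E(2) at half filling. A two-site system always has a finite-size gap of order the hopping t, but the way the gap grows toward U for U >> t is exactly the physics described above.

    import numpy as np

    # Spin-orbitals: 0 = (site 0, up), 1 = (site 0, down),
    #                2 = (site 1, up), 3 = (site 1, down).
    NORB = 4

    def apply_cdag_c(i, j, state):
        """Return (sign, new_state) for c^dag_i c_j |state>, or None if it vanishes."""
        if not (state >> j) & 1:
            return None
        sign = (-1) ** bin(state & ((1 << j) - 1)).count("1")   # fermion sign for c_j
        state &= ~(1 << j)
        if (state >> i) & 1:
            return None
        sign *= (-1) ** bin(state & ((1 << i) - 1)).count("1")  # sign for c^dag_i
        return sign, state | (1 << i)

    def hamiltonian(t, U):
        H = np.zeros((2**NORB, 2**NORB))
        for s in range(2**NORB):
            # spin-conserving hopping between the two sites, both directions
            for i, j in [(0, 2), (2, 0), (1, 3), (3, 1)]:
                res = apply_cdag_c(i, j, s)
                if res is not None:
                    sign, s2 = res
                    H[s2, s] += -t * sign
            # on-site repulsion U n_up n_down on each site
            for up, dn in [(0, 1), (2, 3)]:
                if (s >> up) & 1 and (s >> dn) & 1:
                    H[s, s] += U
        return H

    def ground_energy(H, n_electrons):
        idx = [s for s in range(2**NORB) if bin(s).count("1") == n_electrons]
        return np.linalg.eigvalsh(H[np.ix_(idx, idx)]).min()

    t = 1.0
    for U in [0.0, 4.0, 8.0, 16.0]:
        H = hamiltonian(t, U)
        gap = ground_energy(H, 3) + ground_energy(H, 1) - 2 * ground_energy(H, 2)
        print(U, gap)  # analytic: sqrt(U^2 + 16 t^2) - 2 t, approaching U for U >> t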

Another kind of gap exists in the superconducting state. There is an energy gap between the superconducting ground state and the low lying excitations. In the high temperature superconductors, that gap is a bit weird, since there actually are low-lying excitations that correspond to electrons with very specific amounts of momentum ("nodal quasiparticles").

A pseudogap is more subtle.  There isn't a "hard" gap, with zero states in it.  Instead, the number of states near the highest occupied level is depressed relative to noninteracting expectations.  That reduction and how it varies as a function of energy can tell you a lot about the underlying physics.  One complicated aspect of high temperature superconductors is the existence of such a pseudogap well above the superconducting transition temperature.  In conventional superconductors (e.g., lead), this doesn't exist.  So, a question has lingered for 25 years now:  is the pseudogap the sign of incipient superconductivity (i.e., electrons are already pairing up, but they lack the special coherence required for actual superconductivity), or is it a sign of something else, perhaps something competing with superconductivity?  That remains a huge open question, complicated by the fact that doping the high-Tc materials to be superconductors adds disorder to the problem.

Monday, February 21, 2011

This is why micro/nanofab with new material systems is hard.

Whenever I read a super-enthusiastic news story about how devices based on new material XYZ are the greatest thing ever and are going to be an eventual replacement for silicon-based electronics, I immediately think that the latter clause is likely not true. People have gotten very spoiled by silicon (and to a lesser degree, III-V compound semiconductors like GaAs), and no wonder: it's at the heart of modern technology, and it seems like we are always coaxing new tricks out of it. Of course, that's because there have been millions of person-years worth of research on Si. Any new material system (be it graphene, metal oxide heterostructures, or whatever) starts out behind the eight ball by comparison. This paper on the arxiv this evening is an example of why this business is hard. It's about Bi2Se3, one of the materials classified as "topological insulators". These materials are meant to be bulk insulators (well, at low enough temperature; this one is actually a fairly small band gap semiconductor), with special "topologically protected" surface states. One problem is that the material very often ends up doped via defects, making the bulk relatively conductive. Another problem, as studied in this paper, is that exposure to air, even for a very brief time, dopes the material further, and creates a surface oxide layer that seems to hurt the surface states. This sort of problem crops up with many materials. It's truly impressive that we've learned how to deal with these issues in Si (where oxygen is not a dopant, but does lead to a surface oxide layer very quickly). This kind of work is very important and absolutely needs to be done well....

Tuesday, February 15, 2011

You could, but would you want to?

Texas governor Rick Perry has proposed (as a deliberately provocative target) that the state's (public) universities should be set up so that a student can get a bachelor's degree for $10,000 total (including the cost of books).  Hey, I'm all for moon shot-type challenges, but there is something to be said for thinking hard about what you're suggesting.  This plan (which would set costs per student cheaper than nearly all community colleges, by the way) is not well thought-out at all, which is completely unsurprising.  To do this, the handwave argument is that professors should maximize online content for distance learning, and papers could be graded by graduate students or (apparently very cheaply hired) instructors.  Even then, it's not clear that you could pull this off.  Let me put it this way:  I can argue that the world would benefit greatly from a solar electric car that costs $1,000, but that doesn't mean that one you'd want to own can actually be produced in an economically sustainable way at that price.  This is classic Perry, though. 

Sunday, February 13, 2011

Battle hymn of the Tiger Professor

Like Amy Chua, I'm choosing to be deliberately provocative in what I write below, though unlike her I don't have a book to sell. I recently heard a talk where a well-reputed science educator (not naming names) argued that those of us teaching undergraduates need to adapt to the learning habits of "millennials". That is, these are a group of people who have literally grown up with google (a thought that makes me feel very old, since I went to grad school w/ Sergey Brin) - they are used to having knowledge (in the form of facts) at their fingertips in a fraction of a second. They are used to nearly continuous social networking, instantaneous communication, and constant multitasking (or, as a more stodgy person might put it, complete distraction, attention deficit behavior, and a chronic inability to concentrate). This academic argued that we need to make science education mimic real research if we want to produce researchers and get students jazzed about science. Moreover, this academic argued that making students listen to lectures and do problem sets was (a) ineffective, since that's not how they were geared to learn, and (b) somewhere between useless and abusive, being slavishly ruled by a culture of "covering material" without actually educating. Somehow we should be more in tune with how Millennials learn, and appeal to that, rather than being stodgy fogies who force dull, repetitious "exercises at the end of the chapter" work.

While appealing to students' learning modalities has its place, I contend that this concept simply will not work well in some introductory, foundational classes in the sciences, math, and engineering. Physical science (chemistry, physics) and math are inherently hierarchical. You simply cannot learn more advanced material without mastery of the underpinnings. Moreover, in the case of physics (with which I am most familiar), we're not just teaching facts (which can indeed be looked up easily on the internet); we're supposedly teaching analytical skills - how to think like a physicist; how to take a physical situation and translate it into math that enables us to solve for what we care about in terms of what we know. Getting good at this simply requires practice. To take the Amy Chua analogy, hard work is necessary and playdates are not. There literally is no substitute for doing problems and getting used to thinking this way. While open-ended reasoning exercises can be fun and useful (and could be a great addition to the standard curriculum, or perhaps a way to run a lab class to be more like real research), at some point students actually do need to become proficient in basic problem-solving skills. I really don't like the underlying assumption that this educator was making: that the twitter/facebook/short-attention-span approach is unavoidable and possibly superior to focused hard work. Hey, I'm part of the distractible culture as much as anyone in the 21st century, but you'll have to work hard to convince me that it's the right way to teach foundational knowledge in physics, math, and chemistry.

Wednesday, February 09, 2011

Science and the nation

(The US, that is.) More people need to read this.

Sunday, February 06, 2011

Triboelectricity and enduring mysteries of physics

This past week I hosted Seth Putterman for a physics colloquium here at Rice, and one of the things he talked about is some of his group's work related to triboelectricity, or the generation of charge separation by friction/rubbing.  When you think about it, it's quite amazing that we have no first-principles explanation of a phenomenon we're all shown literally as children (rub a balloon on your hair and it builds up enough "static" charge that it will stick to a plaster wall, unless you live in a very humid place like Houston).  The amount of charge that may be moved is on the order of 10^12 electrons per square cm, and the resulting potential differences can measure in the tens of kilovolts (!), leading to remarkable observations like the generation of x-rays from peeling tape, or UV and x-ray emission from a mercury meniscus moving along a glass surface.  In fact, there's still some disagreement about whether the charge moving in some triboelectric experiments is electrons or ions!  Wild stuff.
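For a sense of scale, here's a back-of-the-envelope estimate (the numbers below are my own illustrative choices, treating the charged patch as an infinite sheet and assuming a 1 mm gap - nothing here is from Putterman's talk):

```python
# Rough scale of tribocharging: field and voltage from ~1e12 electrons/cm^2.
# Sheet-charge approximation; the gap distance is an assumed illustrative value.
e = 1.602e-19      # electron charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m

sigma = 1e12 * e / 1e-4    # surface charge density, C/m^2 (1 cm^2 = 1e-4 m^2)
E = sigma / (2 * eps0)     # field just outside a charged sheet, V/m
gap = 1e-3                 # assumed 1 mm gap, m
V = E * gap                # potential difference across that gap, V

print(f"sigma ~ {sigma:.1e} C/m^2")
print(f"E ~ {E:.1e} V/m (air breaks down near 3e6 V/m)")
print(f"V across 1 mm ~ {V/1e3:.0f} kV")
```

That works out to roughly 90 MV/m and tens of kilovolts across a millimeter - far beyond the breakdown strength of air, which is why you get sparks, and x-rays when the discharge happens in vacuum (as with the peeling tape).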

Sunday, January 30, 2011

Now that's an impressive capability.

The Bad Astronomer periodically makes posts that show just how cool some astro phenomenon or observational capability can be. In keeping with this idea, I find this paper to be just damned impressive. (Apologies for the subscription-only link.) The investigators at Oxford University have one of the best and fanciest transmission electron microscopes (TEM) in the world. In TEM, a highly focused (on the atomic scale!) beam of electrons is fired through a very thin (under 100 nm thick) sample, and the transmitted electrons are analyzed as the beam is scanned over the sample surface. By using very clever electron optics techniques (aberration correction) and the right choice of samples, the investigators have been able to watch the motion of single atoms and few-atom clusters (of praseodymium, which has a big atomic number and therefore interacts strongly with the electron beam) within a carbon nanotube. They can study the formation of 1D crystals this way. Very impressive imaging tool. I want one :-)

Saturday, January 29, 2011

Not even wrong.

No, I'm not talking about Peter Woit's website or Wolfgang Pauli.  Instead, I mean this article, which shows that Allstate Insurance apparently thinks that it's meaningful to look at car accident risk as a function of the astrological sign of the driver.  Astrology?  A major company using astrology?  We're supposed to believe that there is a statistically meaningful correlation between the time of the year you're born and your driving ability?  This is why there is a crying need for math and science literacy.
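Just to illustrate how easy it is to fool yourself with this kind of binned data, here's a toy simulation (all numbers invented by me): assign accidents uniformly at random across the twelve signs and see how big a spread between "best" and "worst" signs chance alone produces.

```python
# Toy illustration: apparent "risky" and "safe" zodiac signs from pure chance.
# Sample size is invented; by construction there is no real effect in this data.
import random

random.seed(0)
n_accidents = 120_000
counts = [0] * 12               # one bin per zodiac sign
for _ in range(n_accidents):
    counts[random.randrange(12)] += 1

mean = n_accidents / 12
spread = 100 * (max(counts) - min(counts)) / mean
print(f"expected per sign: {mean:.0f}")
print(f"'worst' sign: {max(counts)}, 'best' sign: {min(counts)}")
print(f"spread from chance alone: {spread:.1f}%")
```

A spread of a few percent between the "safest" and "riskiest" signs shows up even though there is no effect at all. Without error bars and a proper significance test, a table of accident rates by sign is just numerology with extra steps.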

Saturday, January 22, 2011

Cold fusion (err, low energy nuclear reactions) yet again.

I'm starting to know how Phil Plait must feel every time he has to write yet another article about how Betelgeuse is not about to explode.  (Though my readership is about 0.01% of the Bad Astronomer's.)

Once again, there is a claim receiving attention from various media sources (here, here, here) that someone has demonstrated some gadget that produces so much "excess heat" that the conjectured source of the energy is some kind of nuclear reaction taking place in a condensed matter environment.  This time, it's two Italian researchers, and they have demonstrated (in some very restricted way, more on this below) a device that they say uses a reaction involving nickel and ordinary hydrogen.  The claim is that for a steady-state input power of 400 watts, they can produce around 12 kW of steady-state power in the form of heat.  The device when running supposedly takes in room-temperature water at some rate and outputs dry steam, and doing the enthalpy balance with the water flow rate is how one gets the 12 kW figure.  Crucially, the claim is that this whole process consumes only a tiny amount of hydrogen (far too little for some kind of chemical combustion to be the source of all the heat).  The conjectured nuclear reaction is some pathway from ^62Ni + p -> ^63Cu.  No big radiation produced, though of course the demo doesn't really allow proper measurements.   Don't even bother reading the would-be theoretical "explanation" - it's ridiculously bad physics, and completely beside the point.  What's really of interest is the experimental question.
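The arithmetic behind the 12 kW number is at least easy to check. Here's a sketch of the enthalpy balance using textbook water properties (the flow rate is back-calculated from their claimed output, which I have no way to verify):

```python
# Enthalpy balance: heat required to take liquid water at room T to dry steam.
# Water properties are textbook values; the 12 kW claim is the proponents', not mine.
c_p = 4186.0       # specific heat of liquid water, J/(kg K)
L_vap = 2.26e6     # latent heat of vaporization at 100 C, J/kg
dT = 100.0 - 25.0  # heating from ~25 C to boiling, K

h = c_p * dT + L_vap    # energy per kg of water, J/kg (~2.57 MJ/kg)
flow = 12e3 / h         # mass flow implied by 12 kW, kg/s
print(f"{h/1e6:.2f} MJ/kg -> 12 kW implies {flow*1e3:.1f} g/s")
print(f"that's about {flow*3600:.0f} kg of water per hour")
```

Note the catch, though: nearly all of that 2.57 MJ/kg is the latent heat term. If the output is not actually dry steam - if liquid droplets are entrained - the inferred output power drops enormously, which is exactly why independent measurement of the steam quality matters so much.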

As always in these cases, there are HUGE problems with all of this.  The would-be paper is "published" in an online journal run by one of the claimants.  The claimants won't let independent people examine the apparatus.  They also don't do the completely obvious demonstration - setting up a version that runs in closed cycle (that is, take some of that 12 kW worth of steam flow, and generate the 400 W of electrical power needed to keep the apparatus running, and just let the system run continuously).  If the process really is nuclear in origin, and the hydrogen accounting is correct, it should be possible to run such a system continuously for months or longer.  The claimants say that they've been using a 10 kW version of such a unit to heat a factory in Italy for the past year, but they conveniently don't show that to anyone.

The burden of proof is on these people - if they've really done this, the world will beat a path to their door, and that would be great.  I'm not buying my nickel futures yet, however.  Once again there will be people out there who claim that evil scientists are suppressing these unorthodox geniuses; this is such a ridiculous mischaracterization of science that it still ticks me off every time I read it.  Of course I wish this were a genuine discovery - it would be world-changing and would reveal enormous new physics.  However, so far no version of this kind of low energy nuclear reaction business has passed the bar of reasonable reproducibility in controlled circumstances.  (See here for a past discussion concerning the palladium variety and its reproducibility.   Read the comments there before posting angrily below that I don't understand the situation, or that I haven't looked at this, or that I'm otherwise hugely ignorant on the subject.)  That's not the establishment being oppressive; it's the way good science works.  Extraordinary claims require extraordinary evidence.  The self-sustaining demo I described above, with independent verification and measurements, would go a long way.  I'm not holding my breath.

Tuesday, January 18, 2011

Various and sundry

Here are a number of links that may be of interest:

Back in December, Steven Blau at Physics Today wrote an interesting blog post about the arrogance of physicists. For some reason I just came across this today. Prof. Stone's comment on the post is, I think, right on the mark, and reminds me of this xkcd comic.

Here is a series of four blog posts (one, two, three, four) from Mike Mayberry at Intel, to give you a sense of some of the research directions they're pursuing as we near the possible end of scaling for conventional Si-based FETs. Very interesting stuff on the challenges of integrating other materials (like III-V compound semiconductors) with Si.

Veering into humor, here is a video made by Adam Ruben, whom I know through the alumni network of the Princeton Band. It's called "The Grad Student Rap", and it's part of the promotion for his book, Surviving Your Stupid, Stupid Decision to Go to Grad School.

Thursday, January 13, 2011

This just in: a Nobel in medicine does not imply knowledge of basic physics.

Having read something about this online, I had to see for myself.  Take a look at this paper.  One of the 2008 Nobel laureates for medicine is the lead author, and he claims that simply having certain kinds of DNA in water (1) creates electromagnetic waves at very low frequencies, like 7 Hz; (2) those waves are sufficiently strong that a simple pickup coil of copper wire can be used to detect them inductively; and (3) somehow those waves continue to self-propagate in a weird way, so that repeated dilution of the solution preserves the "imprint" of those waves.  Wow.  The science here is so unbelievably bad, it's hard to imagine that this is serious.  A pickup coil?!  No serious discussion of the magnitude of the effect, or of whether it's even remotely credible that detectable inductive signals could be produced?  Silly numerology demonstrating a complete lack of understanding of quantum mechanics?  Impressive.  Can we make a deal?  Medicine laureates won't make crazy, misinformed claims about physics (which then naturally get picked up by the media, who love to report "the controversy", as if there were no such thing as a right or wrong answer to a scientific question), and physics laureates won't make crazy, misinformed claims about biology.  Please?
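Just to put a number on point (2) above, here's the kind of estimate the paper never does (the coil parameters and field amplitude are my guesses, chosen generously in the authors' favor):

```python
# Estimate of the EMF induced in a pickup coil by a sinusoidal field at 7 Hz.
# All parameters below are assumptions of mine, deliberately generous.
import math

f = 7.0     # claimed signal frequency, Hz
N = 300     # number of turns in the coil (assumed)
A = 1e-4    # coil cross-sectional area, m^2 (1 cm^2, assumed)
B = 1e-9    # field amplitude, T (1 nT - already wildly generous for dilute DNA)

emf = 2 * math.pi * f * N * A * B   # peak EMF from Faraday's law
print(f"peak EMF ~ {emf:.1e} V")    # ~1e-9 V: a nanovolt
```

A nanovolt, at 7 Hz, to be dug out of 1/f noise, power-line pickup, and ambient fields (the Schumann resonances of the Earth-ionosphere cavity sit right around 8 Hz, by the way) - and that's after granting a nanotesla signal from dilute DNA in water, for which no mechanism is even proposed.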

Blast from the past

Yesterday I received a very nice and welcome email from a faculty member who had been one of my best classroom instructors in graduate school. This email was, effectively, a reply to an email that I had sent him regarding Stanford's graduate physics curriculum. The amusing bit is that I had sent him that email 14 years ago, when I was a senior grad student representative to Stanford's physics graduate committee. At the time, there had been ongoing discussions about what topics should be in the first-year graduate curriculum, particularly the "mechanics" sequence, and my opinion had been asked for. It's interesting to look back now, as a faculty member, at what I'd suggested at the time. Here are the bullet-point topics from my email. Remember that Stanford is on the quarter system, meaning that there are three ten-week quarters during the regular academic year.

For "Mechanics of Particles" (basically graduate mechanics and dynamics), I'd said:
- Brief review of variational calculus
- Lagrangians and Hamiltonians, action principle
- Canonical transformations, phase space
- Symmetries and conservation laws (Noether's thm?)
- Normal modes, harmonic oscillator review
- Rigid body motion (numerical work?)
- Orbital mechanics review
- Classical perturbation theory (w/ orbits, rigid body dynamics, anharmonic oscillator)
- Action-angle variables
- Poisson brackets, symplectic structure (*definitions of 1-forms, tangent spaces, tangent bundles?)
- Chaos, nonlinear dynamics, ergodicity
- Brief review of Einstein summation convention
- Special relativity w/ Einstein summation convention, space-time diagrams

For "Continuum mechanics" (fairly unique, I now realize - many departments offer no such course), my suggestions reflected my undergrad engineering background to some degree. I now realize that what I list below is considerably too much for a 10 week course:
- Mechanics of solids:
+ Continuum mechanics version of Hooke's law; stress, strain, tension, compression, shear, bulk modulus, a few numbers about strength of materials, Young's modulus, shear modulus
+ Lagrangian/Hamiltonian densities, more variational calculus
+ *Flexure of beams, bending moments, areal moments of inertia (why I-beams are stiffer than rods of the same cross-sectional area)
+ *Torsion of members, polar "moments of inertia"
+ *Dynamics of beams: the wave equation, longitudinal and transverse sound, natural frequencies of cantilevers
+ Acoustics, idea of acoustic impedance and mismatch
- Fluid statics
+ Hydrostatics, Archimedes' principle, buoyancy
+ *Surface tension, capillary action, wetting
- Fluid mechanics
+ Euler and Lagrange pictures
+ "Convective derivatives", transport of momentum and energy
+ The energy equation, the momentum equation, the continuity equation, the Navier-Stokes equation
+ Inviscid, incompressible flow:
- Bernoulli's Eqn.
- Potential theory
- *Vorticity, circulation, the Magnus effect, "lift"
+ Viscous, incompressible flow:
- Definition of viscosity, comparison w/ shear modulus, definition of Newtonian fluid
- Stokes' law
- Intro to dimensional analysis, Reynolds' number
- Laminar flow, parabolic velocity profile in a round pipe
- Turbulent flow, mention engineering approach to these problems (Moody chart, friction factor, Bernoulli w/ losses)
- Froude number, hydraulic jumps (example of a "shock" discontinuity that you can demonstrate in a sink)
+ Compressible flow
- Mention of shockwaves, scaling

For "Statistical Mechanics", the main challenge was dealing with the divergent backgrounds of incoming students - some people had very strong undergrad preparation in statistical and thermal physics, others much less so. This is an issue in graduate quantum mechanics to an even greater degree. Now that I've taught undergrad stat mech several times, I think what I listed below could use some additional advanced topics:
- Definition of entropy, why it's a log
- The equal prob. postulate/ergodic thm.
- The Boltzmann factor and the partition fn., the Fermi-Dirac and Bose-Einstein distributions
- *Mention of Feynman diagram methods, saddle-point integration to get Z in complicated systems
- The canonical and grand canonical ensembles, the chemical potential
- "Natural" variables, Legendre transforms, thermodynamic potentials, *the idea of a constrained maximization of S, the Maxwell relations, the "thermodynamic square"
- Gases
+ Ideal classical
+ Van der Waals, virial coefficients
+ Fermi gas at zero and finite T
+ Ideal Bose gas, BEC, phonons & photons *(incl. laser discussion!)
- Liquids - diagrammatic methods of treating interactions?
- Solids
+ Phonons
+ Concept of long-range order
- *Correlation functions, *connection w/ susceptibilities
- *Correlations and fluctuations, *how they're measured!
- Theories of phase transitions
+ Concept of order parameter
+ Ginzburg-Landau theory, diff. betw. 1st and 2nd order, extensions to include fluctuations
+ 1st order: Van der Waals reprise, Clausius-Clapeyron
+ Mean-field theory, example of magnetism
+ Ising model in 1-d (see the numerical sketch after this list)
+ Renormalization group to solve Ising model, critical behavior, correlation length ideas
- *Transport
+ *Boltzmann equation
+ *Noise in transport: fluctuation/dissipation thm
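
As an aside, the Ising model in 1-d from the list above is a nice place for students to check the formalism numerically. Here's a toy sketch of mine (free boundary conditions, zero field), comparing brute-force enumeration of the partition function with the exact answer:

```python
# 1-d Ising chain, zero field, free ends: brute-force Z vs. the exact result.
# Toy pedagogical sketch; N is kept small so the 2^N enumeration is cheap.
import itertools, math

J, beta, N = 1.0, 0.7, 10   # coupling, inverse temperature, number of spins

# Brute force: sum exp(-beta * H) over all 2^N spin configurations.
Z_brute = 0.0
for s in itertools.product((-1, 1), repeat=N):
    H = -J * sum(s[i] * s[i + 1] for i in range(N - 1))
    Z_brute += math.exp(-beta * H)

# Exact: with free ends and zero field, Z = 2 * (2 cosh(beta J))^(N-1).
Z_exact = 2 * (2 * math.cosh(beta * J)) ** (N - 1)

print(f"brute force: {Z_brute:.6e}")
print(f"exact:       {Z_exact:.6e}")
```

The two agree to machine precision - a reassuring sanity check before moving on to, say, the renormalization group treatment.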
It was definitely interesting to me to see how my thinking on this stuff has evolved now that I have to teach it.

Sunday, January 09, 2011

Friction - sometimes electrons matter!

While I don't do any research on the subject myself, over the last few years I've become more interested in the origins of friction, a subject on which almost no physics progress was made from around 1650 to 1950. Since the development of the tools of surface science (ultrahigh vacuum, for example) and scanned probe microscopy, however, people have learned much about where friction comes from.

We all have an intuitive grasp of what friction is, and in freshman physics (or even high school), we learn that we can model friction as a (shear) force between two surfaces as they slide (or attempt to slide) relative to one another. That force is modeled as proportional to the normal force between the surfaces, with the surface-dependent friction coefficient as the proportionality constant. The force is further traditionally modeled as being independent of the contact area between the two surfaces, and independent of the relative speed of the surfaces (except for the distinction between static friction - with no relative motion - and kinetic or sliding friction). That approach does a very good job of describing many, many experiments on friction between macroscopic objects.
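In symbols, that entire freshman-physics model is just

\[
F_{\mathrm{static}} \leq \mu_s N, \qquad F_{\mathrm{kinetic}} = \mu_k N ,
\]

where \(N\) is the normal force and \(\mu_s\), \(\mu_k\) are the static and kinetic friction coefficients - with no dependence on apparent contact area or sliding speed anywhere.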

The problem is, as many famous scientists (e.g., Coulomb) discovered, it's very difficult to come up with a microscopic model of the interaction between surfaces that has these properties. One of the essential difficulties is rather deep: friction has to result in real dissipation. Energy has to be transferred from macroscopic degrees of freedom (the motion of a hockey puck relative to the ice) into microscopic degrees of freedom (the relative vibrational motions of the atoms in the hockey puck, and similar motions of the atoms in the ice - heat, in short). That transfer of energy from macroscopic coordinates to microscopic motions or coordinates is irreversible in the same sense that the motion of water in a pond is irreversible after a stone is tossed in. (Yes, it's physically conceivable from the point of view of Newton's laws that all the little bits of water at the edge of the pond could jiggle just right so as to send coordinated ripples inward toward the center of the pond, spitting the stone back out. However, that's incredibly unlikely, given all of the possible microscopic states of the water, so from the standpoint of macroscopic thermodynamics, the water-rippling process is irreversible.)

There has been some beautiful work on friction at the nanoscale, and much of it has focused on chemical interactions between surfaces, as well as vibrations (phonons), as the relevant microscopic degrees of freedom. However, in the case of metals, there are other excitations where the energy could end up: electrons! That's one defining characteristic of a metal: the existence of electronic excitations of (almost) arbitrarily low energy. How can you tell if the energy is ending up in the electrons? Ideally, you'd do an experiment where none of the vibrational properties change, but which lets you compare the with-electrons and without-electrons cases. Amazingly, it is possible to do something close to that by working with a metal that is superconducting! Above the superconducting transition temperature, Tc, the metal has plenty of low energy electronic excitations. Below Tc, however, in the superconducting state, electronic excitations are forbidden below some threshold energy (this "gap" in the excitation spectrum is one key reason why superconductors have no electrical resistance). In this new paper (sorry about not having an arxiv version to link), the investigators have demonstrated that the (noncontact) friction between a metal tip and a niobium film drops dramatically once the niobium becomes superconducting. This argues that electronic dissipation is responsible for much of the friction in this case (in the normal state). I should point out that previous work with lead films had hinted at similar physics.  The new experiment is very clear and benefits from technique developments in the meantime.
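To put a scale on that threshold energy, here's the textbook weak-coupling BCS estimate for niobium (my arithmetic, not a number from the paper):

```python
# Weak-coupling BCS estimate of the superconducting gap in niobium.
k_B = 8.617e-5   # Boltzmann constant, eV/K
T_c = 9.2        # superconducting transition temperature of Nb, K

delta = 1.76 * k_B * T_c   # BCS relation: Delta(0) ~ 1.76 k_B T_c
print(f"Delta ~ {delta * 1e3:.1f} meV")   # ~1.4 meV
```

Deep in the superconducting state, creating electronic excitations costs a couple of meV - enormous compared with the energy scales of a slowly oscillating tip - so the electronic channel for dissipation freezes out, and whatever friction remains must come from somewhere else.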

Thursday, January 06, 2011

Spin-orbit coupling

Happy new year! I want to write a little about what physicists call spin-orbit interactions. It turns out that there is a deep connection between electric and magnetic fields that can be made somewhat obvious by considering a thought experiment. (For a great discussion of this, see the textbook by Purcell.) Imagine a line of stationary positive charges. From our perspective (at rest relative to the line of charges), there is no current, so one should see an electric field pointed radially outward from the line of charges, and a positive charge placed next to the line of charges should respond accordingly, being pushed radially outward. Now consider viewing this from a reference frame moving parallel to the line of charges. From our point of view in that frame, we see a current, and therefore there should be a magnetic field associated with that current (as well as an electric field from the net positive charge). In special relativity, one can figure out how electric and magnetic fields transform into and out of each other when changing reference frames.
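For reference, to first order in \(v/c\) the transformed fields in a frame moving with velocity \(\mathbf{v}\) are (see Purcell for the exact expressions):

\[
\mathbf{E}' \approx \mathbf{E} + \mathbf{v} \times \mathbf{B}, \qquad
\mathbf{B}' \approx \mathbf{B} - \frac{1}{c^2}\,\mathbf{v} \times \mathbf{E} .
\]

So the moving observer in the thought experiment sees a magnetic field of order \(vE/c^2\), even though there was none in the rest frame of the charges.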

This shift of point of view is the way that spin-orbit coupling is usually explained in undergrad quantum mechanics. Consider a hydrogen atom. The electron zipping around the proton has a spin degree of freedom, and a corresponding magnetic moment. From the point of view of the (classically) moving electron, the proton is essentially a current producing a magnetic field, which will tend to align the electron's magnetic moment with that field. This couples the spin of the electron to its orbital motion - hence the name "spin-orbit coupling" - and it is technically a relativistic effect, one that tends to be bigger in heavier atoms.
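Carried through carefully (including the famous factor of 1/2 from Thomas precession, which the naive moving-frame argument misses), this gives the standard spin-orbit term quoted in undergrad quantum texts:

\[
H_{\mathrm{SO}} = \frac{1}{2 m_e^2 c^2}\,\frac{1}{r}\frac{dV}{dr}\;\mathbf{L}\cdot\mathbf{S},
\]

where \(V(r)\) is the electron's potential energy. The \(1/c^2\) out front is the relativistic fingerprint, and the steepness of \(dV/dr\) near a high-\(Z\) nucleus is why the effect grows in heavy atoms.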

Why should you care? Well, spin-orbit coupling can be important in solids, too, since one can think of their electronic states as being built out of atomic orbitals. As ZapperZ points out, a recent paper shows that these kinds of relativistic corrections are not necessarily tiny in ordinary, everyday solids.  In fact, it appears that it is essential to worry about such relativistic effects in order to understand why the electrochemical redox potentials of an ordinary car battery are what they are!