
Sunday, August 19, 2012

And this guy sits on the House Science Committee.

Today Congressman Todd Akin from Missouri, also the Republican nominee for the US Senate seat currently held by Sen. Claire McCaskill, said that women have a biological mechanism that makes it very difficult for them to get pregnant in the case of "legitimate rape" (whatever that is).  Specifically, he said "If it’s a legitimate rape, the female body has ways to try to shut that whole thing down."  Yes, he said that, and there's video.  Regardless of your politics or your views on abortion, isn't it incredibly embarrassing that a member of the House Science Committee would say something so staggeringly ignorant?   

Update:  Once again, The Onion gets it right.

Wednesday, August 15, 2012

Intro physics - soliciting opinions

For the third year in a row, I'm going to be teaching Rice's honors intro mechanics course (PHYS 111).  I use the outstanding but mathematically challenging (for most first-year undergrads) book by Kleppner and Kolenkow.  It seems pretty clear (though I have done no rigorous study of this) that the students who perform best in the course are those who are most comfortable with real calculus (both differential and integral), and not necessarily those with the best high school physics background.  Teaching first-year undergrads in this class is generally great fun, though quite a bit of work.  Since these are a self-selected bunch who really want to be there, and since Rice undergrads are generally very bright, they are a good audience.

I do confess, though, that (like all professors who really care about educating students) I go back and forth about whether I've structured the class properly.  It's definitely set up like a traditional lecture course, and while I try to be interactive with the students, it is a far cry from some of the modern education research approaches.  I don't use clickers (though I've thought seriously about it), and I don't use lots of peer instruction or discovery-based interactions.  The inherent tradeoffs are tricky:  we don't really have the properly configured space or personnel resources to do some of the very time-intensive discussion/discovery-based approaches.  Likewise, while those approaches undoubtedly teach some of the audience better than traditional methods, perhaps with greater retention, it's not clear whether the gains outweigh the fact that nearly all of those methods trade subject content for time.  That is, in order to teach, e.g., angular momentum really well, they dispense with other topics.  It's also not clear to me that these methods are well-suited to the Kleppner-Kolenkow level of material.

As unscientific as a blog posting is, I'd like to solicit input from readers.  Anyone out there have particularly favorite approaches to teaching intro physics at this level?  Evidence, anecdotal or otherwise, that particular teaching methods really lead to improved instruction, at the level of an advanced intro class (as opposed to general calc-based physics)?

Wednesday, August 08, 2012

Another sad loss

It was disheartening to hear of another sad loss in the community of condensed matter physics, with the passing of Zlatko Tesanovic. I had met Zlatko when I visited Johns Hopkins way back when I was a postdoc, and he was a very fun person. My condolences to his family, friends, and colleagues.



Saturday, August 04, 2012

Confirmation bias - Matt Ridley may have some.

Matt Ridley is a columnist who writes generally insightful material for the Wall Street Journal about science and the culture of scientists.  For the last three weeks, he has published a three-part series about confirmation bias, the tendency of people to overly weight evidence that agrees with their preconceived notions and downgrade the importance of evidence that disagrees with their preconceived notions.  Confirmation bias is absolutely real and part of the human condition.   Climate change skeptics have loudly accused climate scientists of confirmation bias in their interpretation of both data and modeling results.  The skeptics claim that people like James Hansen will twist facts unrelentingly to support their emotion-based conclusion that climate change is real and caused by humans.

Generally Mr. Ridley writes well.  However, in his concluding column today, Ridley says something that makes it hard to take him seriously as an unbiased observer in these matters.  He says:  "[A] team led by physicist Richard Muller of the University of California, Berkeley, concluded 'the carbon dioxide curve gives a better match than anything else we've tried' for the (modest) 0.8 Celsius-degree rise....  He may be right, but such curve-fitting reasoning is an example of confirmation bias." 

Climate science debate aside, that last statement is just flat-out wrong.  First, Muller was a skeptic - if anything, Muller's alarm at the result of his study shows that the conclusion goes directly against his bias.  Second, and more importantly, "curve-fitting reasoning" in the sense of "best fit" is at the very heart of physical modeling.  To put things in Bayesian language, a scientist wants to test the consistency of observed data with several candidate models or quantitative hypotheses.  The scientist assigns some prior probabilities to the models - the likelihood going in that the scientist thinks the models are correct.  An often used approach is "flat priors", where the initial assumption is that each of the models is equally likely to be correct.  Then the scientist does a quantitative comparison of the data with the models, essentially asking the statistical question, "Given model A, how likely is it that we would see this data set?"  Doing this right is tricky.  Whether a fit is "good" depends on how many "knobs" or adjustable parameters there are in the model and the size of the data set - if you have 20 free parameters and 15 data points, a good curve fit essentially tells you nothing.  Anyway, after doing this analysis correctly among different models, in Bayesian language the scientist comes up with posterior probabilities that the models are correct.   (In this case, Muller may have assigned the "anthropogenic contributions to global warming are significant" hypothesis a low prior probability, since he was a skeptic.)
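To make the "compare data with models, penalizing extra knobs" idea concrete, here is a minimal sketch in Python.  Everything in it is synthetic and illustrative (fake data, toy models) - it is emphatically not Muller's analysis - but it shows the mechanics of quantitatively preferring one model over another, with a penalty for extra free parameters via the Bayesian information criterion:

```python
# Minimal sketch: compare two candidate models for a noisy trend via
# least squares and the Bayesian information criterion (BIC).
# All data here are synthetic.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = 0.08 * t + rng.normal(0, 0.1, t.size)   # fake "trend" data

def linear(t, a, b):
    return a * t + b

def constant(t, c):
    return np.full_like(t, c)

def bic(model, t, y, p0):
    popt, _ = curve_fit(model, t, y, p0=p0)
    resid = y - model(t, *popt)
    n, k = y.size, len(popt)
    # Gaussian log-likelihood up to a constant, plus the parameter penalty
    return n * np.log(np.mean(resid**2)) + k * np.log(n)

print("BIC linear:  ", bic(linear, t, y, [1, 0]))
print("BIC constant:", bic(constant, t, y, [0]))
# Lower BIC -> model better supported by the data. With 20 free
# parameters and 15 points, the penalty term alone tells you the
# "good fit" is meaningless.
```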

The bottom line:  when done correctly, "curve fitting reasoning" is exactly the way that scientists distinguish the relative likelihoods that competing models are "correct".  Saying that "best fit among alternative models" is confirmation bias is just false, if the selection of models considered is fair and the analysis is quantitatively correct.  





Tuesday, July 31, 2012

A new big prize

So there's another big scientific prize around now, the Milner Prize for Fundamental Physics.  Interesting.  Clearly wealthy Russian multimillionaires can do what they like with their money, whether that means physics prizes or miniature giraffes.  However, I am not terribly thrilled with the idea that "fundamental physics" includes (to a significant degree so far) theoretical ideas that simply have not been tested experimentally.  It would be very unfortunate if this ends up as a high profile media splash that boosts the erroneous public perceptions of fundamental physics as (1) virtually entirely high energy theory; and (2) exotic ideas unconnected to experiment (e.g., the string multiverse).

Monday, July 30, 2012

A few items

This editorial from yesterday's NY Times is remarkable for just how off-base it is.  The author, a retired social science professor, argues that we should stop teaching algebra to so many people.  He actually thinks that teaching algebra to everyone is bad for society:  "Making mathematics mandatory prevents us from discovering and developing young talent."  Basically, his reasoning is that (1) it's hard for many non-math-inclined people, so it takes up a lot of time that could be spent on other things; and (2) it's really not useful for most people, so it's doubly a waste of time.  How anyone can argue publicly that what society really needs is less math literacy is completely beyond me.  Like all of these sorts of things, there is a grain of reason in his argument:  most people do not need to solve cubic equations with professional-grade mathematical rigor.  However, algebra and algebraic ideas are absolutely essential to understanding many, many things, from financial literacy to probability and statistics.  Moreover, algebra teaches real quantitative reasoning, rather than just arithmetic.  The fact that this even got printed in the Times is another example of the anti-science/math/engineering bias in our society.  If I tried to get an editorial into the Washington Post advocating that we stop teaching history and literature to everyone because it takes away time from other things and writing is hard for many people, I would rightly be decried as an idiot.

This editorial from today's NY Times is also remarkable.  The author had historically been a huge skeptic of the case for anthropogenic global warming.  Funded by the oil-magnate (and totally unbiased about this issue, I'm sure) Koch brothers, he did a study based much more on statistics and data gathering than on particular models of climate forecasting.  Bottom line:  He's now convinced that global warming is real, and that human activities in terms of CO2 are very significant drivers.  Funding to be cut off and his name to be publicly excoriated on Fox News in 5...4...3....  See?  Quantitative reasoning is important.

Rumor has it that Bill Nye the Science Guy is considering making new episodes of his show.  Bill, I know you won't read this, but seven years ago you visited Rice and posed for a picture with my research group.  Please make this happen!  If there is anything I can do to increase the likelihood that this takes place, let me know.

Finally, from the arxiv tonight, this paper is very interesting.   These folks grew a single layer of FeSe on a strontium titanate substrate, and by annealing it under different conditions they affect its structure (as studied by angle-resolved photoemission).  The important point here is that they find conditions where this layer superconducts with a transition temperature of 65 K.  That may not sound so impressive, but if it holds up, it beats the nearest Fe-based material by a good 10 K, and beats bulk FeSe by more like a factor of 1.5 in transition temperature.  Stay tuned.  Any upward trend in Tc is worth watching.





Thursday, July 26, 2012

Memristor or not - discussion in Wired.

Earlier in the month, Wired reported that HP is planning to bring their TiO2-based resistive memory to market in 2014.  Resistive memory is composed of a bunch of two-terminal devices that function as bits.  Each device has a resistance that is determined by the past history of the voltage (equivalently current) applied to the device, and can be toggled between a high resistance state and a low resistance state.   In HP's case, their devices are based on the oxidation and reduction of TiO2 and the diffusion of oxygen vacancies.

This announcement and reporting apparently raised some hackles.  Wired has finally picked up on the fact that HP's use of the term "memristor" to describe their devices is more of a marketing move than a rigorous scientific claim.  As I pointed out almost two years ago, memristors are (in my view) not really fundamental circuit elements in the same way as resistors, capacitors, and inductors; and just because some widget has a history-dependent resistance, that does not make it a memristor in the sense of the original definition.

Thursday, July 19, 2012

Tragic news.

This was terrible news to get.  Words fail.  Further information here.

Tuesday, July 17, 2012

Replacement for Virtual Journals?

I notice that the APS Virtual Journals are going away. I was a big fan of the nano virtual journal, since the people who ran it generally did a really nice job of aggregating articles from a large number of journals (APS, AIP, IOP, Nature, Science, PNAS, etc.) on a weekly basis. True, they never did manage to work out a deal to coordinate with the ACS, and they'd made a conscious decision to avoid journals dedicated to nano (e.g., Nature Nano). Still, it will be missed.

Now I will show my age. Is there a nice web 2.0 way to replace the virtual journal? In the announcement linked above, they justify the end of these virtual journals by saying that there are new and better tools available for gathering this sort of information. What I'd like to do, I think, is look at the RSS feeds of the tables of contents of a bunch of journals, filter on certain keywords, and aggregate together links to all the articles. Is there a nice way to do this without muss and fuss? Suggestions would be appreciated.
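For concreteness, here is roughly what I have in mind, sketched in Python with the feedparser library. The feed URLs and keywords below are placeholders, and a real version would want de-duplication and some polish:

```python
# Sketch of a keyword filter over journal table-of-contents RSS feeds.
import feedparser

FEEDS = [
    "https://example.com/journal1/rss",   # placeholder ToC feed URLs
    "https://example.com/journal2/rss",
]
KEYWORDS = ["nano", "graphene", "plasmon"]

for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(kw in text for kw in KEYWORDS):
            print(entry.get("title", "?"), "->", entry.get("link", "?"))
```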

 

Thursday, July 12, 2012

Exotic (quasi)particles, and why experimental physics is challenging

There was a very large amount of press, some of it rather breathless, earlier in the year about the reported observation of (effective) Majorana fermions in condensed matter systems.  Originally hypothesized in the context of particle physics, Majorana fermions are particles with rather weird properties.  Majorana looked hard at the Dirac equation (which is complex), and considered particles "built out of" linear combinations of components of solutions to the Dirac equation.  These hypothesized particles would obey a real (not complex) wave equation, and would have the rather odd property that they are their own antiparticles (!)  In the language of quantum field theory, if the operator \( \gamma^{\dagger} \) creates a Majorana particle, then \( \gamma^{\dagger} \gamma^{\dagger}\) creates and destroys one, leaving behind nothing.  In the context of condensed matter, it has been theorized (here and here, for example) that it's possible to take a superconductor and a semiconductor wire with strong spin-orbit coupling, and end up with a composite system that has low energy excitations (quasiparticles) that have properties like those of Majorana fermions.
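That operator algebra is easy to check numerically for a single fermionic mode.  Here is a toy sketch (my own illustration, nothing to do with the actual semiconductor/superconductor calculations) using 2x2 matrices on the occupation basis:

```python
# Toy check: for one fermion mode, gamma = c + c^dagger is self-adjoint
# (its own "antiparticle") and squares to the identity.
import numpy as np

c = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation operator on {|0>, |1>}
cdag = c.conj().T                        # creation operator

gamma = c + cdag                         # a Majorana combination
print(np.allclose(gamma, gamma.conj().T))  # True: gamma^dagger = gamma
print(gamma @ gamma)                       # identity matrix: applying it twice
                                           # creates and destroys the particle,
                                           # leaving behind nothing
```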

So, if you had these funky quasiparticles in your system, how could you tell?  What experiment could you do that would give you the relevant information?  What knob could you turn and what could you measure that would confirm or deny their presence?  That's the challenge (and the fun and the frustration) of experimental physics.  There are only so many properties that can be measured in the lab, and only so many control parameters that can be tuned.  Is it possible to be clever and find an experimental configuration and a measurement that give an unambiguous result, one that can only be explained in this case by Majorana modes? 

In the particular experiment that received the lion's share of attention, the experimental signature was a "zero-bias peak" in the electrical conductance of these structures.  The (differential) conductance is the slope of the \(I-V\) curve of an electrical device - at any given voltage (colloquially called "bias"), the (differential) conductance tells you how much more current you would get if you increased the voltage by a tiny amount.  In this case, the experimentalists found a peak in the conductance near \( V  = 0 \), and that peak stayed put at \(V = 0\) even when a magnetic field was varied quite a bit, and a gate voltage was used to tune the amount of charge in the semiconductor.  This agreed well with predictions for the situation when there is a Majorana-like quasiparticle bound to the semiconductor/superconductor interface. 
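As an aside on the mechanics: given a measured \(I-V\) sweep, the differential conductance is just a numerical derivative.  A quick sketch with synthetic data (the erf bump below is a stand-in for a zero-bias anomaly, not anyone's real data):

```python
# Extracting dI/dV from an I-V sweep by numerical differentiation.
import numpy as np
from scipy.special import erf

V = np.linspace(-2e-3, 2e-3, 401)        # bias voltage (V)
I = 5e-4 * V + 1e-9 * erf(V / 2e-4)      # ohmic background + zero-bias feature

dIdV = np.gradient(I, V)                 # differential conductance (siemens)
print("G(V=0) =", dIdV[V.size // 2], "S")  # dI/dV peaks at zero bias here
```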

The question, though, is whether that, by itself, is sufficient to prove the existence of Majorana-like quasiparticles experimentally.  According to this new paper, perhaps not.  It looks like it's theoretically possible to have other (boring, conventional) quasiparticles that can form bound states at that interface that also give a zero-bias peak in the conductance.  Hmm.  Looks like it may well be necessary to look at other measurable quantities besides just the conductance to try to settle this once and for all.  This is an important point that gets too little appreciation in popular treatments of physics.  It's rare that you can measure the really interesting property or system directly.  Instead, you have to use the tools at your disposal to test the implications of the various possibilities. 


Sunday, July 08, 2012

What is effective mass?

Given my previous post and the Higgs excitement, it's worth thinking a bit about what we mean by "effective mass" for charge carriers in solids.  At the root of the concept is the idea that it is meaningful to describe the low energy (compared with the bandwidth, which turns out to be on the order of electron-volts) electronic excitations of (the many electrons in) solids as electron-like quasiparticles - quantum objects that are well described as occupying particular states having definite energy and momentum (for the experts, these states are approximate eigenstates of energy and momentum).  One can look at those allowed states, and ask how energy \(E \) varies as a function of momentum \(\mathbf{p} \).  If the leading variation is quadratic, then we can define an effective mass \(m^{*}\) by \(E \approx p^{2}/(2m^{*}) \).  Note that this doesn't have to be the situation.  Near the "Dirac point" in graphene, where the occupied \(\pi \) electron band has its maximum energy and the unoccupied \(\pi \) band has its minimum energy, the energy of the quasiparticles goes linearly in momentum, analogous to what one would expect for ultrarelativistic particles in free space.
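As a concrete illustration (my own toy example, not from the links above), here is how one would extract \(m^{*}\) from the curvature of a model band, using the textbook 1d tight-binding dispersion \(E(k) = -2t \cos(ka)\):

```python
# Pull an effective mass out of a model band by fitting the curvature
# near the band bottom. Parameters are illustrative.
import numpy as np

hbar = 1.054571817e-34          # J s
me   = 9.1093837015e-31         # electron mass (kg)
t    = 1.0 * 1.602176634e-19    # hopping energy: 1 eV in J
a    = 3e-10                    # lattice constant: 0.3 nm

k = np.linspace(-0.1, 0.1, 101) / a      # small k around the band bottom
E = -2 * t * np.cos(k * a)

curvature = 2 * np.polyfit(k, E, 2)[0]   # d^2E/dk^2 from a quadratic fit
m_star = hbar**2 / curvature
print("m*/m_e =", m_star / me)           # hbar^2/(2 t a^2 m_e), ~0.4 here
```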

The actual situation is richer than this.  In real space, we believe the universe to be invariant under continuous translational symmetry - that is, the properties of the universe don't depend on where we are.  Translating ourselves a little to the right doesn't change the laws of nature.  That invariance is what actually implies strict conservation of momentum.  In the case of a periodic solid, we have a lower symmetry situation, with discrete translational symmetry - move over one lattice spacing, and you get back to the same physics.  In that case, while true momentum is still conserved (the universe is what it is), the parameter that acts like momentum in describing the electronic excitations in the solid is only conserved if one allows the solid as a whole the chance to pick up momentum in certain amounts (proportional to the inverse of the lattice spacing). 

More complicated still, when electron-electron interactions are important, say between the mobile electrons and others localized to the lattice, the spectrum of low energy states can be modified quite a bit.  This can lead to the appearance of "heavy fermions", with effective masses hundreds of times larger than the free electron mass.   Note that this doesn't mean that the real electrons are actually more massive.  Pull one out of the solid and it's like any other electron.  Rather, it means that the relationship between the energy of the electronic states and their momentum in the solid differs quite a bit from what you'd see in a free electron. 

So, knowing this, how fundamental is mass?  Could there be some underlying degrees of freedom of the universe, such that our standard model of particle physics is really an effective low-energy theory, and what we think of as mass really comes from the energy and momentum spectrum of that theory?  In a sense that's one aspect of what something like string theory is supposed to do.

On a more nano-related note, this discussion highlights why certain abuses of the term effective mass annoy me.  For example, it doesn't really make sense to talk about the effective mass of an electron tunneling through a dodecane molecule - 12 carbons do not make an infinite periodic system.  You can use models where effective mass is a parameter in this sort of problem, but you shouldn't attach deep physical meaning to the number at the end of the day.

Friday, July 06, 2012

What is mass?

The (most likely) Higgs boson discovery brings up a distinction in my mind that seems to be getting overlooked in most of the popular press discussions of the CERN work.  What do we mean as physicists when we talk about "mass"?  In classical mechanics, there are in some sense two types of mass.  There is gravitational mass - in Newtonian gravity, the gravitational force between two particles of masses \( m \) and \( M \) has the magnitude \(G m M/r^{2} \), where \( G \) is the gravitational constant and \(r \) is the distance between the particles.  The force is attractive and acts along the line between the two particles.  The Higgs boson has no (direct) connection to this at all.

There is also inertial mass, and this can be described in a couple of ways.  The way we usually teach beginning students is that a total force of magnitude \(F \) exerted on an object (in an inertial frame of reference, but that's a detail) produces an acceleration \(a \equiv d^{2}r/dt^{2}\) that is linearly proportional to \(F \).  Exert twice as much force and get twice as much acceleration.  The constant of proportionality is the inertial mass \( m \), and we write all this in one form of Newton's Second Law of Motion, \( \mathbf{F} = m \mathbf{a} \).  The more (inertially) massive something is, the smaller the acceleration for a given amount of force.

A more subtle way to define this would be to say that there is this thing called momentum, \(\mathbf{p} \), which we believe to be a conserved quantity in the universe.  Empirically, momentum is connected with velocity.  At low speeds (compared with \(c \), the speed of light), momentum is directly proportional to velocity, and the constant that connects them is the mass:  \( \mathbf{p} = m \mathbf{v} \).  (The full relativistic expression is \( \mathbf{p} = m \mathbf{v}/ \sqrt{1-v^{2}/c^{2}} \).)  The more massive something is, for a given speed, the more momentum it has (and the more it's going to pack a wallop when it hits you).
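A quick numerical illustration of where the low-speed approximation gives out:

```python
# Compare p = m*v with the full relativistic momentum at various speeds.
import numpy as np

c = 2.998e8                     # speed of light (m/s)
m = 9.109e-31                   # electron mass (kg)

for beta in [0.01, 0.1, 0.5, 0.9, 0.99]:
    v = beta * c
    p_classical = m * v
    p_relativistic = m * v / np.sqrt(1 - beta**2)
    print(f"v = {beta:4.2f} c: p_rel / p_classical = "
          f"{p_relativistic / p_classical:.3f}")
# The ratio is ~1.005 at 0.1 c, but ~7 at 0.99 c.
```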

The coupling of elementary particles to the Higgs field is supposed to determine this mass, the relationship between momentum and velocity (or equivalently, between momentum and energy).  As far as we know, the inertial mass and the gravitational mass appear to be identical - this is the Equivalence Principle, and it's supported by a wealth of experiment (though there are always ideas out there for new tests and new limits).

Thursday, July 05, 2012

congratulations to my high energy colleagues

I'll post something else later now that I'm back from travel, but in the meantime, congratulations to my high energy colleagues on the discovery of what is most likely some form of the Higgs boson.  A monumental effort.  Let the Nobel speculation (about who should share it w/ Higgs, and whether it should be this year or wait for further confirmation) begin!  Oh, and see the image at right (not an original) for my feelings about Leon Lederman's term for this particle.  (If you don't get it, and don't mind bad language, see here at around the 4:50 mark.)

Tuesday, June 26, 2012

At least they're being explicit about it.

The Texas Republican Party platform for 2012 is out (pdf).  I've complained about previous incarnations before.  This time they include this gem: 
We oppose the teaching of Higher Order Thinking Skills (HOTS) (values clarification), critical thinking skills and similar programs that are simply a relabeling of Outcome-Based Education (OBE) (mastery learning) which focus on behavior modification and have the purpose of challenging the student’s fixed beliefs and undermining parental authority.  [Emphasis mine - DN]
Wow.   They explicitly oppose teaching students to think critically, because that might be a threat to their fixed beliefs.  Wow.  And somehow these people keep winning statewide office.  Boggles the mind.  As a bonus, they also say:
We support objective teaching and equal treatment of all sides of scientific theories. We believe theories such as life origins and environmental change should be taught as challengeable scientific theories subject to change as new data is produced. Teachers and students should be able to discuss the strengths and weaknesses of these theories openly and without fear of retribution or discrimination of any kind.
So, they like the idea of challenging scientific theories with new data, but they don't like critical thinking.  Right.


Precision engineering.

Here's an experimentalist complaint for which I do not think there is an analogous theorist problem.  In my lab we have a piece of equipment of European manufacture that is very good and beautifully engineered.  The one problem is, it's so precisely made that it's impossible to service.  For example, after years of repeated thermal cycling, an electrical connector has failed and needs to be replaced.  The problem is, the way the system was put together, there is essentially no slack in the relevant cabling.  They strung the cable through during the original assembly, cut it precisely to length, and then attached connectors that make it topologically impossible to take the assembly apart without removing them.  One can't replace the connector without either cutting the cabling and inserting more connections, or resorting to other approaches with similar levels of inconvenience.  This is the lab equivalent of having to remove half of the guts of a car in order to get to the oil pan.  Ahh well.  Let this be a lesson to mechanical designers:  it's never a bad idea to design a complex system with the possibility that it may need to be taken apart nondestructively someday.

Sunday, June 24, 2012

Grants and ethics

I recently came across this story.  I'd heard about it at an NSF panel but hadn't gotten all the details.  This person (who pleaded guilty and, as far as I can tell, has not yet been sentenced) did at least two bad things.  First, and obviously illegal, he had an NIH award in which there was supposed to be a significant subcontract to another institution, and instead he spent that money on something else (possibly even on personal stuff).  It's actually amazing to me that he didn't get caught earlier on that by his institution's research office.  As chronically understaffed and overworked as ours is, they are zealous about making sure that subcontracts and reporting are properly handled, so I don't see how something like not passing along $500K could happen.

Second, and potentially trickier, after getting an award from the NSF, he applied for a grant from the DOE's ARPA-E for basically the same work, without telling either the DOE or NSF about the overlap.  That's also illegal, though I suspect it's more common simply because there are shades of grey possible here.  Research projects can have overlap - particularly if a PI has a particular technique or tool that they've developed and want to push in many directions - the question is, how much commonality is too much?  This particular case was egregious.  Still, after reviewing some grants recently for a few places, I want to encourage my junior colleagues to take these issues seriously.  When you review a proposal and realize that you've actually already reviewed something with lots of overlap before from the same PI, and it was funded, yet the PI claims there is no overlap in the programs, it's not a good situation for anyone.

Friday, June 22, 2012

Classical elasticity is surprisingly robust.

This paper was just published in Nano Letters.  The authors use suspended, single-layer graphene as a template for the growth (via atomic layer deposition) of aluminum oxide, Al2O3.  Then they use an oxygen plasma to etch away the graphene, leaving a suspended alumina membrane 1 nm thick.  This is very cute, but what I find truly remarkable is how well the elastic properties of that membrane are modeled by simple, continuum elasticity.  The authors can apply a pressure gradient across the membrane and measure the deformed shape of the membrane as the pressure difference causes it to bulge.  That shape agrees extremely well with a formula from continuum mechanics that just assumes some average density and elastic modulus for the material.  That's the point of continuum mechanics and elasticity:  you don't have to worry about the fact that the material is really made out of atoms; instead you assume it's smooth and continuous on arbitrary scales.  Still, it's impressive to me that this works so well even when the total thickness of the material is only a few atoms!
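For the curious, the continuum analysis being invoked here is essentially the standard "bulge test."  Below is a hedged sketch of how such a fit works, using one commonly quoted form of the circular-membrane pressure-deflection relation and entirely made-up numbers (not the paper's values or geometry):

```python
# Sketch of a bulge-test fit: pressure P vs. center deflection h for a
# circular membrane of radius a and thickness t_m, using the commonly
# quoted relation P = (4*sigma0*t/a^2)*h + (8*E*t/(3*a^4*(1-nu)))*h^3.
# All numbers below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

a, t_m, nu = 50e-6, 1e-9, 0.24     # radius, thickness, Poisson ratio (assumed)

def bulge(h, sigma0, E):
    return (4 * sigma0 * t_m / a**2) * h \
         + (8 * E * t_m / (3 * a**4 * (1 - nu))) * h**3

rng = np.random.default_rng(0)
h = np.linspace(0.1e-6, 2e-6, 30)                      # deflection (m)
P = bulge(h, 0.1e9, 150e9) * (1 + 0.02 * rng.standard_normal(h.size))

(sigma0_fit, E_fit), _ = curve_fit(bulge, h, P, p0=[1e8, 1e11])
print(f"prestress ~ {sigma0_fit / 1e6:.0f} MPa, modulus ~ {E_fit / 1e9:.0f} GPa")
```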

Thursday, June 21, 2012

The Higgs and the media

There are a variety of blog discussions going on right now concerning rumors of the Higgs boson.  Peter Woit's post about Higgs rumors sparked a back-and-forth about whether blog discussions of rumors are actually harmful to science, to the scientific process, and to the public perception of the science.  I agree completely with Chad Orzel's take on this:  Given that CERN's press office and many high energy physicists have continuously hyped this experiment for years, no one should be surprised that there is interest in its status.  Complaining about this is absurd. 

Assuming that the CERN collaborations do announce the discovery of a particle with Higgs-like properties at around 125 GeV, I would be willing to wager the following things:
1) Some fraction of high energy physics theorists will become completely insufferable.
2) Some fraction of high energy physics theorists will be quoted in poorly written popular media articles that imply the result favors (a) string theory; (b) the multiverse; (c) supersymmetry.  These articles will also imply that high energy physics is pretty much all of physics.
3) The phrase "so-called 'God Particle'" will shoot up in google's rankings.
4) There will be articles talking about the need for the next big accelerator.

Wednesday, June 13, 2012

Handy numbers to know

My thesis advisor has a mastery of an impressive library of handy physics tidbits, the kinds of things that have proven very useful to him and his group over the years.  These are facts that it's better to know from memory so you can hash problems out at a whiteboard without having to run to reference books.  Here are a few of his:
  • One liquid liter of helium becomes about 700 gas liters at STP.
  • One liquid liter of nitrogen becomes about 500 gas liters at STP.
  • The latent heat of liquid helium is such that one Watt of heating will boil off one liquid liter per hour.
  • For thermal conduction through metals, when the temperature difference between \( T_{\mathrm{hot}} \) and \( T_{\mathrm{cold}} \) is not small, the rate of heat flow is given by \( (T_{\mathrm{hot}}^{2} - T_{\mathrm{cold}}^{2})/(2 R_{\mathrm{th}}T) \), where \( R_{\mathrm{th}} \) is the thermal resistance quoted at temperature \( T \).
  • The Wiedemann-Franz rule for heat conduction through metals means that an electrical resistance of 150 n\(\Omega\) corresponds to \(R_{\mathrm{th}} T = 6 \) K\(^{2}\)/W.
  • 20 GHz is equivalent to 1 K in terms of energy.
  • 1 meV is about 12 K in terms of energy.
Here are a few that I've adopted over the years in working with nanoscale physics:
  • The conductance quantum, \( G_{0} \equiv 2 e^{2}/h\), corresponds to a resistance \( 1/G_{0} \approx 12.9\) k\(\Omega\).
  • A typical elastic mean free path for electrons in a polycrystalline good metal is 10-20 nm.
  • Tunneling of electrons from a metal through vacuum drops off by about a factor of 7.2 for every additional Angstrom of distance.
  • The density of states for gold at the Fermi energy is about \(\nu = 10^{47}~\mathrm{J}^{-1}\mathrm{m}^{-3}\).
  • The Fermi velocity for gold is \(v_{\mathrm{F}} = 1.4 \times 10^{6}\) m/s.
  • You can go back and forth between the resistivity and the mean free path in a metal using the Einstein relation:  \( (1/\rho) = e^{2} \nu D \), where \(e\) is the electronic charge, \(\nu\) is the density of states at the Fermi energy, and \(D\) is the diffusion constant.  In 3d, \(D = (1/3)v_{\mathrm{F}}\ell\).
  • In goofy energy units, 8000 cm\(^{-1}\) is about 1 eV.
  • \(\hbar c \approx 200\) eV·nm.
 There are others of varying degrees of obscurity.  Please feel free to add others in the comments.  To use math in the comments, you need to preface your LaTeX math with a \ and a (, and end your math expression with a \ and a ). 
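For the skeptical, several of the rules of thumb above can be sanity-checked in a few lines of Python:

```python
# Sanity-check a few of the rules of thumb against standard constants.
import scipy.constants as sc

print("1 K in GHz:      ", sc.k / sc.h / 1e9)            # ~20.8 GHz
print("1 meV in K:      ", 1e-3 * sc.e / sc.k)           # ~11.6 K
print("1/G0 in kOhm:    ", sc.h / (2 * sc.e**2) / 1e3)   # ~12.9 kOhm
print("1 eV in cm^-1:   ", sc.e / (sc.h * sc.c * 100))   # ~8066
print("hbar*c in eV nm: ", sc.hbar * sc.c / sc.e * 1e9)  # ~197
```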

MathJax = outstanding.

I've just found MathJax, which is a javascript-based rendering plug-in for either LaTeX or MathML formatted equations. It took me a few minutes to get the syntax working right in blogger, but it seems pretty excellent. If you have scripts turned off, then LaTeX code should show up as LaTeX source. However, if you have scripts turned on, then equations can be rendered very nicely, and can either be in-line, like this: \( -\frac{\hbar^2}{2 m}\nabla^{2} \Psi = E \Psi \), or as display equations, like this: \[ \nabla \cdot \mathbf{B} = 0. \] I'm going to have to donate money to these people - they've done a really nice job.

Tuesday, June 05, 2012

Swamped.

Just pointing out that real life has been very busy of late.  Hopefully I'll have more blogging time shortly.  In the meantime, definitely check out this post by Ash Jogalekar.  It's a topic I've written about more than once, and I've been thinking hard about what to do to address this.  Things like TedEd are intriguing.  It should be possible to do something similar about the remarkable aspects of condensed matter.  Heck, you could do a great one about Pauli Exclusion....

Tuesday, May 29, 2012

Buying out of teaching - opinions?

This is a topic that comes up at many research universities, and I'd be curious for your opinions.  Some institutions formally allow researchers to "buy" out of teaching responsibilities.  Some places actively encourage this practice, as a way to try to boost research output and standing.  Does this work overall?  Faculty who spend all their time on research should generally be more research-productive, though it would be interesting to see quantitatively how much so.  Of course, undergraduate and graduate classroom education is also an essential part of university life, and often (though certainly not always) productive researchers are among the better teachers.  It's a fair question to ask whether teaching buyout is a net good for the university as a whole.  What do you think?

Sunday, May 27, 2012

Work functions - a challenge of molecular-scale electronics

This past week I was fortunate enough to attend this workshop at Trinity College, Dublin, all about the physics of atomic- and molecular-scale electronics.  It was a great meeting, and I feel like I really learned several new things (some of which I may elaborate upon in future posts).  One topic that comes up persistently when looking at this subject is the concept of the work function, defined typically as the minimum amount of energy it takes to kick an electron completely out of a material (so that it can go "all the way to infinity", rather than being bound to the material somehow).  As Einstein and others pointed out when trying to understand the photoelectric effect, each material has an intrinsic work function that can be measured, in principle, using photoemission.  You can hit a material surface with ultraviolet light and measure the energy of the electrons that get kicked out (for example, by slowing them down with an electric field and seeing how long it takes them to arrive at a detector).  Alternately, with a fancy tunable light source like a synchrotron, you can dial around the energy of the incident light and see when electrons start getting kicked out.   As you might imagine, if you are trying to understand electronic transport, where an electron has to leave one electrode, traverse through a system such as a molecule, and end up back in another electrode, the work function is important to know.

One problem with work functions is, they are extremely sensitive to the atomic-scale details of a surface.  For example, different crystallographic faces of even the same material (e.g., gold) can have work functions that differ by a couple of hundred millielectronvolts (meV).  Remember, the thermal energy scale at room temperature is 25 meV or so, so these are not small differences.  Moreover, anything that messes with the electronic cloud that spills a little out of the surface of a material at the atomic scale can alter the work function.  Adsorbed impurities on metal surfaces can change the effective work function by more than 1 eV (!).  To see how tricky this gets, imagine chemically assembling a layer of covalently bound molecules on a metal surface.  There is some charge transfer where the molecule chemically bonds to the metal, leading to an electric dipole moment and a corresponding change in work function.  The molecule itself can also polarize or be inherently polar based on its structure.  In the end, ordinary photoemission measures just the total of all of these effects.  Finally, ponder what happens if the other ends of the molecules are also tethered chemically to a second piece of metal.  How big are all the dipole shifts?  What is the actual energy landscape "seen" by an electron going from one metal to the other, and is there any way to measure it experimentally, let alone compute it reliably from quantum chemistry methods?  Really understanding the details is difficult yet ultimately essential for progress here.
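To get a feel for the magnitudes, here is a back-of-envelope sketch using the simple parallel-plate (Helmholtz) estimate for the potential step across a dipole sheet, \( \Delta V = N \mu_{\perp}/\epsilon_{0} \).  The coverage and dipole moment below are illustrative guesses for a dense monolayer, and this ignores depolarization effects entirely:

```python
# Helmholtz estimate of the work function shift from a molecular dipole layer.
import scipy.constants as sc

N = 4e18                      # molecules per m^2 (assumed dense monolayer)
mu = 1.0 * 3.336e-30          # 1 debye per molecule, converted to C*m

delta_V = N * mu / sc.epsilon_0   # potential step across the dipole layer (V)
print(f"work function shift ~ {delta_V:.1f} eV")   # ~1.5 eV for these numbers
```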

Monday, May 21, 2012

Catalysis seems like magic.

In our most recent paper, we found that we could dope a particularly interesting material, vanadium dioxide, with atomic hydrogen, via "catalytic spillover". By getting hydrogen in there in interstitial sites, we could dramatically alter the electrical properties of the material, allowing us to stabilize its unusual metallic state down to low temperatures. The funkiest part of this to me is the catalysis part. The metal electrodes that we use for electronic measurements have enough catalytic activity that they can split hydrogen molecules into atomic hydrogen at an appreciable rate even under very modest conditions (e.g., not much warmer than the boiling point of water). This paper (sorry it is subscription only) shows an elegant experimental demonstration of this, where gold is exposed to H2 and D2 gas and HD molecules are then detected. I would love to understand the physics at work here better. Any recommendations for a physics-based discussion would be appreciated - I know there is enormous empirical and phenomenological knowledge about this stuff, but something closer to an underlying physics description would be excellent.

 

Wednesday, May 16, 2012

Vanity journals: you've got to be kidding me.

I just received the following email:
Dear Pro. ,
Considering your research in related areas, we cordially invite you to submit a paper to Modern Internet of Things (MIOT).

The Journal of Modern Internet of Things (MIOT) is published in English, and is a peer reviewed free-access journal which provides rapid publications and a forum for researchers, research results, and knowledge on Internet of Things. It serves the objective of international academic exchange.
Wow!  I feel so honored, given my vast research experience connected to "Internet of Things". 

The publisher should be shamed over this.  This is absurd, and while amusing, shows that there is something deeply sick about some parts of academic publishing.

Monday, May 14, 2012

The unreasonable clarity of E. M. Purcell

Edward Purcell was one of the great physicists of the 20th century.  He won the Nobel Prize in physics for his (independent) discovery of nuclear magnetic resonance, and was justifiably known for the extraordinary clarity of his writing.  He went on to author the incredibly good second volume of the Berkeley Physics Course (soon to be re-issued in updated form by Cambridge University Press), and late in life became interested in biophysics, writing the evocative "Life at Low Reynolds Number" (pdf).   

Purcell is also known for the Purcell Factor, a really neat bit of physics.  As I mentioned previously, Einstein showed through a brilliant thermodynamic argument that it's possible to infer the spontaneous transition rate for an emitter in an excited state dropping down to the ground state and spitting out a photon.  The spontaneous emission rate is related to the stimulated rate and the absorption rate.  Both of the latter two may be calculated using "Fermi's Golden Rule", which explains (with some specific caveats that I won't list here) that the rate of a quantum mechanical radiative transition for electrons (for example) is proportional to (among other things) the density of states (number of states per unit energy per unit volume) of the electrons and the density of states of the photons.  The density of states for photons in 3d can be calculated readily, and is quadratic in frequency.  

Purcell had the insight that in a cavity, the number of states available for photons is not quadratic in frequency anymore.  Instead, a cavity on resonance has a photon density of states that is proportional to the "quality factor", Q,  of the cavity, and inversely proportional to the size of the cavity.  The better the cavity and the smaller the cavity, the higher the density of states at the cavity resonance frequency, and off-resonance the photon density of states approaches zero.  This means that the spontaneous emission rate of atoms, a property that seems like it should be fundamental, can actually be tuned by the local environment of the radiating system.  The Purcell factor is the ratio of the spontaneous emission rate with the cavity to that in free space.
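In numbers, the standard expression for the Purcell factor on resonance is \( F = (3/4\pi^{2})(\lambda/n)^{3}(Q/V) \), with \(V\) the mode volume.  A quick illustrative calculation (cavity parameters invented for the example):

```python
# Purcell factor for a made-up optical microcavity.
import numpy as np

lam = 900e-9           # free-space emission wavelength (m)
n = 3.5                # refractive index of the cavity material
Q = 1e4                # cavity quality factor
V = 2 * (lam / n)**3   # mode volume: a couple of cubic wavelengths

F = (3 / (4 * np.pi**2)) * (lam / n)**3 * (Q / V)
print(f"Purcell factor ~ {F:.0f}")   # ~380: a big speedup over free space
```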

While I was doing some writing today, I decided to look up the original citation for this idea.  Remarkably, the "paper" turned out to be just an abstract!  See here, page 681, abstract B10.  That one paragraph explains the essential result better than most textbooks, and it's been cited a couple of thousand times.  This takes over as my new favorite piece of clear, brief physics writing by a famous scientist, displacing my long-time favorite, Nyquist's derivation of thermal noise.  Anyone who can be both an outstanding scientist and a clear writer gets bonus points in my view.

Saturday, May 05, 2012

Models and how physics works

Thanks to ZapperZ for bringing this to my attention. This paper is about to appear in Phys Rev Letters, and argues that the Lorentz force law (as written to apply to magnetic materials, not isolated point charges) is incompatible with Special Relativity. The argument includes a simple thought experiment. In one reference frame, you have a point charge and a little piece of magnetic material. Because the magnet is neutral (and for now we ignore any dielectric polarization of the magnet), there is no net force on the charge or the magnet, and no net torque on the magnet either. Now consider the situation when viewed from a frame moving along a line perpendicular to the line between the magnet and the charge. In the moving frame, the charge seems to be moving, so that produces a current. However (and this is the essential bit!), in first year physics, we model permanent magnetization as a collection of current loops. If we then consider what those current loops look like in the moving frame, the result involves an electric dipole moment, meaning that the charge should now exert a net torque on the magnet when all is said and done. Since observers in the two frames of reference disagree on whether a torque exists, there is a problem! Now, the author points out that there is a way to fix this, and it involves modifying the Lorentz force law in terms of how it treats magnetization, M (and electric polarization, P). This modification was already suggested by Einstein and a coauthor back in 1908.

I think (and invite comments one way or the other) that the real issue here is that our traditional way to model magnetization is unphysical at the semiclassical level. You really shouldn't be able to have a current loop that persists, classically. A charge moving in a loop is accelerating all the time, and should therefore radiate. By postulating no radiation and permanent current loops, we are already inserting something fishy in terms of our treatment of energy and momentum in electrodynamics right at the beginning. The argument by the author of the paper seems right to me, though I do wonder (as did a commenter in ZZ's post) whether this all would have been much clearer if it had been written out in four-vector/covariant notation rather than conventional 3-vectors.

This raises a valuable point about models in physics, though. Our model of M as resulting from current loops is extremely useful for many situations, even though it is a wee bit unphysical. We only run into trouble when we push the model beyond where it should ever have been expected to be valid. The general public doesn't always understand this distinction - that something can be a little wrong in some sense yet still be useful. Science journalists and scientists trying to reach the public need to keep this in mind. Simplistically declaring something to be wrong, period, is often neither accurate nor helpful.

 

Wednesday, April 25, 2012

Heat flow at the mesoscale

When we teach about thermal physics at the macroscopic scale, we talk in terms of the thermal conductivity, \( k \).  For the 1d problem of a homogeneous rod of cross-sectional area \( A \) and length \( L \), the rate that energy flows from one end of the rod to the other is given by \( (kA/L)(T_{\mathrm{h}} - T_{\mathrm{c}}) \), where \( T_{\mathrm{h}} \) and \( T_{\mathrm{c}} \) are the temperatures of the hot and cold ends of the rod, respectively.  Built into this approach is the tacit assumption that the phonons, the quantized vibrational modes of the lattice that carry what we consider to be the thermal energy of the atoms in the solid, move in a diffusive way.  That is, if a phonon is launched, it bounces many times in a random walk sort of motion before it traverses our region of interest.  Phonons can scatter off disorder in the lattice, or mobile charge carriers (or even each other, if the vibrations aren't perfectly harmonic).  
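In numbers, for a macroscopic example (illustrative values, silicon near room temperature):

```python
# Heat flow through a rod via the macroscopic formula (k*A/L)*(Th - Tc).
k = 150.0        # W/(m K), roughly silicon near room temperature
A = 1e-6         # m^2 cross-section (1 mm x 1 mm)
L = 1e-2         # m length (1 cm)
Th, Tc = 310.0, 300.0
print((k * A / L) * (Th - Tc), "W")   # 0.15 W
```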

However, phonon motion doesn't have to be diffusive!   If phonons don't scatter while propagating over a certain length scale, their motion is said to be "ballistic".  In this paper, the authors have done a very clever experiment to look at whether there is a significant contribution of ballistic phonons to heat transport in silicon at room temperature on scales considerably longer than the "textbook" mean free path for phonon scattering under those conditions, about 40 nm.  The authors use the interference pattern between two "pump" lasers to produce a \( \sin^{2} \) intensity pattern (and thus, because of absorption and the electron-lattice coupling, a \( \sin^{2} \) pattern of elevated temperature) in a suspended Si membrane.  The change in local temperature leads to a small change in local index of refraction.  A low intensity "probe" laser can diffract off the grating pattern set up by the temperature variation.  Depending on how long one waits between pump and probe, the temperature pattern can wash itself out due to phonon transport.  So, by varying the delay between pump and probe and looking at the strength of the diffracted probe signal, they can monitor the time evolution of the temperature profile.  By changing the pitch of the initial interferogram, they can look at thermal transport over different length scales.   They find that there are significant deviations from the expectations of diffusive phonon transport (originally worked out by Klaus Fuchs, among others) up to micron scales, which is pretty darn cool, and important for understanding heat flow in, e.g., computer chips.   Very elegantly done.

Thursday, April 19, 2012

Persistent currents and an impressive experiment

A long while ago, I brought up the topic of persistent currents in normal metal rings.  Please click the link to get the context.  The point is, even in a normal metal (as opposed to a superconductor), if you consider a metal ring small enough that the electrons remain quantum mechanically coherent in going around the ring, the electronic wavefunction must remain single-valued.  That means that the quantum mechanical phase accumulated by an electron diffusing around the ring back to its starting point (to speak in a semiclassical way) has to add up to an integer multiple of \( 2\pi \).  Since magnetic flux through the ring tweaks the accumulated phase (via the Aharonov-Bohm effect), a persistent current develops in the ring to make sure that the total phase (that from the electron motion and that from the resulting Aharonov-Bohm contribution) adds up to a multiple of \( 2\pi \).  As I'd discussed before, these currents and the magnetic fields they produce tend to be quite small and difficult to detect.
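To get a sense of scale, the relevant flux quantum for these normal-metal effects is \( h/e \approx 4.1 \times 10^{-15} \) Wb.  A quick sketch with an assumed (illustrative) ring size:

```python
# Aharonov-Bohm phase for a small normal-metal ring in a modest field.
import numpy as np
import scipy.constants as sc

r = 0.5e-6                          # ring radius (m), assumed 1 micron diameter
B = 1e-3                            # applied field (T)
Phi = B * np.pi * r**2              # flux through the ring
Phi0 = sc.h / sc.e                  # normal-metal flux quantum, ~4.14e-15 Wb
print("Phi / Phi0 =", Phi / Phi0)   # the persistent current is periodic in this
```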

To make matters worse, when an electron scatters off static disorder in a solid, it acquires a phase shift that depends on that particular scattering site.  What this really means is, if you consider an ensemble of nominally identical metal rings, you'll actually get some distribution of persistent currents, because each ring has its own particular configuration of disorder.  Now Jack Harris' group at Yale has done a beautiful measurement, looking at many individual rings and examining the statistics of these persistent currents in the ensemble.  They place each ring at the end of a floppy cantilever.  In the presence of a magnetic field, the magnetic dipole moment from the persistent current exerts a torque on the cantilever, and the results can be detected optically via interferometry.  The experiment requires low temperatures, precision fabrication, and very clean technique.  Very nice.

Tuesday, April 17, 2012

Academic science researchers and economics

This article in the NY Times is rather provocative in several ways. First, it raises the question of whether there is a dramatic rise taking place in the number of journal article retractions (spread across all disciplines). The answer is, it's really not clear, given the enormous increase in the number of published articles. Moreover, it's certainly much easier for people to find, read, and compare articles than ever before. Google Scholar, for example, can see through most pay-walls enough to search for words and phrases, making it far easier than ever before to test for plagiarism. Moving on, the article then looks at whether the culture of academic science research is, for lack of a better word, ailing. There are some choice quotes:
[L]abs continue to have an incentive to take on lots of graduate students to produce more research. “I refer to it as a pyramid scheme,” said Paula Stephan, a Georgia State University economist and author of “How Economics Shapes Science,” published in January by Harvard University Press.

In such an environment, a high-profile paper can mean the difference between a career in science or leaving the field. “It’s becoming the price of admission,” Dr. Fang said.

The scramble isn’t over once young scientists get a job. “Everyone feels nervous even when they’re successful,” he continued. “They ask, ‘Will this be the beginning of the decline?’ ”

...

“What people do is they count papers, and they look at the prestige of the journal in which the research is published, and they see how many grant dollars scientists have, and if they don’t have funding, they don’t get promoted,” Dr. Fang said. “It’s not about the quality of the research.”

Dr. Ness likens scientists today to small-business owners, rather than people trying to satisfy their curiosity about how the world works. “You’re marketing and selling to other scientists,” she said. “To the degree you can market and sell your products better, you’re creating the revenue stream to fund your enterprise.”
I don't want to quote any more for fear of running afoul of fair use. Read the article. This does hit some of the insecurities felt by any reasonable US faculty science or engineering researcher. I would dispute the pyramid scheme comment because it's based on a false premise, that every doctoral student is looking to become a professor and is crushed if they don't get a faculty position. The prestige paper comments are more worrisomely accurate.

Sunday, April 15, 2012

Getting the most out of an experimental technique

This post is a mini-summary of a Perspectives piece I wrote for ACS Nano.  One conceptually simple way to measure the electronic properties of materials at the atomic scale is to use a "break junction".  Imagine taking a metal needle touching a metal surface, and slowly lifting up on the needle.  At some point, the needle will come out of contact with the surface.  As it does so, at the last instant, the contact between the two will take place only at the atomic scale.  If you hook up one end of a battery to the needle and the other through an ammeter to the metal surface to measure the flow of current, you can measure the electrical conduction throughout this process.  Thanks to the availability of high speed electronics these days, it is possible to record conductance, G, vs. time data throughout the process.  A standard analytic approach is then to compile a histogram of all the data points, counting how many times each value of G is measured.  As explained here, the most stable junction configurations naturally have more data points, and this will lead to peaks in the conductance histogram at the values of conductance corresponding to those configurations.   Molecules may be incorporated into such junctions (as I've written about here).  Since it's possible to set up a system to make and break junctions repeatedly and rapidly in an automated way, this approach has proven very fruitful and revealing.
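Schematically, the histogram step looks like the following (synthetic traces standing in for real break-junction data):

```python
# Build a conductance histogram from many G(t) pulling traces.
# Plateaus at stable junction configurations pile up as peaks.
import numpy as np

rng = np.random.default_rng(1)
traces = []
for _ in range(500):                          # fake pulling traces
    plateau = rng.choice([1.0, 0.5, 0.01])    # G/G0 of the final contact
    g = np.concatenate([np.full(40, plateau) + 0.02 * rng.standard_normal(40),
                        1e-4 * np.ones(10)])  # then the junction breaks
    traces.append(g)

all_points = np.concatenate(traces)
# Cut below the noise floor so the broken-junction tail doesn't dominate.
counts, edges = np.histogram(all_points, bins=200, range=(0.1, 1.2))
print("peak near G/G0 =", edges[np.argmax(counts)])
```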

Of course, only looking at the histograms is wasteful.  You actually have an enormous amount of additional information contained in the G vs. t traces.  For instance, you can check to see if the occurrence of a "plateau" in G vs. t at one conductance level always (or never!) correlates with a similar plateau at a different conductance value.  These kinds of cross-correlations are best represented in two-dimensional histograms of various types.  Makk et al. have written a very clear and tutorial paper about how this works in practice, and what kinds of things one can learn from such analyses.  It's definitely worth a read if you work on this stuff, and it's also a great lesson in how to use as much of your data as possible.

Monday, April 09, 2012

DOI numbers, Web of Science, and article numbers

Two recurring complaints about bibliographies and citations for papers and proposals:
  • Most people really like DOI, a system meant to assure that reference materials like journal articles get an effectively permanent web address, something that will "always" point to that article.  It's become very very popular, and every online journal that I know provides a doi reference for each article.  It shows up in every Web of Science reference these days, too, if it exists.  So, why can't Web of Science make those doi numbers a clickable link?  That is, instead of forcing me to copy and paste the doi into a browser URL line with "http://dx.doi.org/" stuck in front, why not just make the doi itself a link to that?  I mean, why would anyone just want the doi without the link??  Is this some weird bs rule about Web of Science not wanting to have direct links?
  • How come Physical Review handles bibliographic information so badly when it comes to article numbers?   A number of years ago, Phys Rev switched from old-fashioned page numbers for articles to 6-digit article numbers.  Unfortunately, when you try to export bibliographic information for reference management software, for many Phys Rev articles, the automatic response is to stick the article number (which replaced the page number for all practical purposes) in some completely random field, and instead list the page numbers as either blank or the oh-so-useful "1-4" for a four-page article.  Can someone please fix this?  
Both of these are trivial, silly things, but I'd be willing to bet that hundreds of person-hours (at least) are lost per year dealing with the latter one.
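The fix I'm asking for is literally this trivial (placeholder DOI below):

```python
# Turn bare DOIs into the clickable links I want Web of Science to show.
dois = ["10.1234/example.doi"]            # placeholder, not a real article
for doi in dois:
    print("http://dx.doi.org/" + doi)
```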

Sunday, April 08, 2012

Commitment and conflicts

One of the various hats I wear right now is chair of Rice's university committee on research, and one topic that has come up lately (in the context of the US government's new regs about conflict of interest) is the discussion of "commitment". Conflict of interest is comparatively simple to explain to people - everyone grasps the idea that financial or other compensation that may give the appearance of affecting your scholarly objectivity is potentially a conflict of interest. Commitment is a more challenging concept. Most universities expect their science and engineering faculty in particular to spend some of their time doing things that are not immediately, directly connected to their simplest academic duties (teaching courses, supervising research students and postdocs, performing university service). For example, technical consulting isn't that unusual. Likewise, there are other broadly defined academic duties that can come up (serving on advisory or editorial boards; professional society work) that can enhance the academic mission of the university in a higher order way. However, it's clear that there have to be limits of some kind on these auxiliary activities - we would all agree that someone who does so much alternative work that they can't teach their classes or adequately do their normal job is having problems with time allocation. The general question is, how should a university manage these situations - how are they identified, how are they mitigated, and what are the consequences if someone is knowingly going over the line (e.g., spending three working days per week running the day to day operations of a startup company rather than doing their academic job)? Things get particularly complicated when you factor in disciplines that basically demand external work (architecture, business school), and the increasingly common practice of special appointments at foreign universities. If anyone has suggestions of universities with what they think are especially good approaches (or lousy ones, for that matter) to this issue, please post in the comments.

Tuesday, April 03, 2012

An open letter to Neil deGrasse Tyson

Hello, Dr. Tyson. First, let me say that I'm a huge fan. You do the scientific community a tremendous service by being such an approachable, clear spokesman, maintaining scientific accuracy while also entertaining the public. Astronomy is a great side interest of mine (like many scientists and engineers), and I really wanted to be an astronaut for a while (until my eyes were demonstrably lousy); that's why on some gut level I enjoyed your call for a renewed vigor in space exploration.

However, my brain's response to your call is: is this really the best strategy? Much as I'd love to one day walk on the moon or Mars, I can't help but be deeply skeptical of NASA's ability to allocate resources. Right now their annual budget is about $17B, more than twice that of the NSF, and more than three times that of the DOE Office of Science. While the achievements of the robotic spacecraft missions are truly amazing, much of the rest of NASA seems very dysfunctional. I'll admit, my impression is colored by my thesis advisor's experience on the Columbia accident investigation board, my knowledge of the ISS (hint: the Soyuz "lifeboats" where the ISS crew shelters in case of debris impact? They're actually the most debris-vulnerable part of the ISS.), and the fact that NASA has employees who do things like this and this at some rate.

If taxpayers are going to be persuaded to invest another $17B/yr in federally funded research, I think a much more compelling case needs to be made that NASA is the place for that investment, given the alternatives. Yes, NASA's history and subject matter are inspiring, but you need to convince me that NASA as an agency will really get value out of that investment, given that their recent leadership has been singularly unimpressive.

PS - If you ever need a sub to go onto Colbert in your stead, please call.

Monday, April 02, 2012

Several items

My apologies to my readers for the low blogging rate recently. Multiple papers, proposals, teaching, travel, etc. have all contributed to this slow-down. Here are a few brief items to consider:
  • The (nearly) final details have come out regarding the OPERA experiment.  Goodbye, superluminal neutrinos - we hardly knew ye.    Would've been fun!
  • It would appear that one can correlate political affiliation in the US with the somewhat ill-defined concept of "trust in science".  Much as it's tempting to make a wry comment here, I suspect that some of this is due to the very disparate nature of those self-identifying as "conservative" these days.  Either way, this is a problem.  Science (in the sense of careful, rigorous testing of hypotheses that allege predictive power) is an incredibly useful way to look at much of the world, and I would hope that the vast majority of people out there would appreciate this.
  • Someone has advanced the idea that Mitt Romney is a quantum object.  Clearly we should put him through some sort of interferometer to test this idea.  Alternatively, he should interlace his fingers and make a loop with his arms - we can then thread magnetic flux through him and see if his response about the individual mandate for healthcare oscillates as the magnetic field is swept.
  • Visiting NSF is always enlightening.  I hadn't really appreciated before the sheer scale of the problem they face in proposal evaluation and administration: the number of proposals submitted has more than doubled in the last few years, while their staffing has remained unchanged.  Even apart from overall resource problems (e.g., the runaway positive feedback cycle: when people realize that the odds of funding are bad, they submit more proposals, making the odds of funding worse), just the challenge of properly handling all the paperwork is becoming incredibly difficult.
  • April Fools is always fun on the web.  This is one of my favorites.

Sunday, March 25, 2012

Responsibilities, rational and otherwise

Professors have many responsibilities - to their students and postdocs, to their departments and colleagues, to their university, to the scientific community, and to the public. When on a doctoral committee, for example, a professor's duty is to make sure that the candidate's thesis is rigorous and careful, and that the student actually knows what they're talking about. Obviously primary responsibility for supervision of the student lies with the advisor(s), but the committee members are not window dressing; they're supposed to serve a valuable role in upholding the quality of the work.

I have a colleague at another institution (names and circumstances have been changed here; I'll say no more about specifics) who really had to put his foot down several years ago, as a committee member, to make sure that a student (the last one of a just-retired professor) didn't hand in a thesis sufficiently fringe that it bordered on pseudoscience. It was pretty clear that the advisor would have been willing to let this slide (!) for the sake of getting the last student out the door. My colleague (junior faculty at the time) had to push hard to make sure that this got resolved. Eventually the student did complete an acceptable thesis (on a much more mainstream topic) and got the degree. This colleague just recently came across the former student again, and was disappointed and sad to see that the fringe aspects of science are back in what he's doing. My colleague is now feeling (irrational) guilt about this (that the former student is now credentialed and pushing this stuff), even though the actual thesis was fine in the end. This does raise the question, though: how much of a gatekeeper should a committee member be?

Sunday, March 18, 2012

Paranormal activity edition

Two items, oddly both about parapsychology (as a means to raise points about science and the public).  First, this article from The Guardian last week is both unsurprising and disappointing.  It is not at all surprising that careful attempts to reproduce almost-certainly-spurious results implying precognitive phenomena have shown that those effects apparently do not really exist.  What is worth pondering and discussing, however, is the fact that the authors who tried to check the original results had such a hard time publishing their work, because the major journals dismiss attempts to reproduce controversial results as unoriginal or derivative.  This is a problem.  Sure, you don't want to take up premier journal space with lots of confirmations or repetitions of previous work.  However, if a journal is willing to hype controversial results to boost circulation, then surely there is some burden on them to follow up on whether those extraordinary claims withstand the test of time.

Second, this morning's Dear Abby column (yes, I still read a newspaper on Sundays) had a letter from a woman seeking advice about how to use her "psychic gifts".  It's very depressing that the response said "Many people have psychic abilities to a greater or lesser degree than you do, and those 'vibes' can be invaluable."  Really?  Many people have psychic abilities?  How's this for advice: if you really have psychic abilities, go to the James Randi Foundation and take their Million Dollar Challenge.  Once you pass, you can use the money to make people's lives better.  I know it's stupid to get annoyed by this, just as it's pointless to complain about the horoscopes that run in the paper.  Still, someone with an audience as large as Dear Abby's should think a little harder before spreading this silliness.

Friday, March 16, 2012

Tidbits

Some interesting and thought-provoking things have come up in the last week or so. For instance, here is an article from the IEEE that discusses the decline in science and engineering jobs in the US. Figure 2 is particularly thought-provoking, showing that the number of US undergrad STEM degrees is very strongly correlated with the number of non-medical US federal research dollars spent, from 1955-2000. My personal take is, if you really want Americans to become scientists, engineers, and more broadly supportive of technical education, you need to create a culture where those professions are (more) respected and valued, not viewed as nerdy, geeky, asocial, elitist, or otherwise unacceptable.

On this same theme, there was this op-ed in the New York Times about why so few American political figures are scientists. Accurate (in my opinion) and depressing. I'm not saying we should live in a society run by technocrats, but surely we can be better than this. As a culture, do we really need more lawyers and undergrad "business" majors?

On a more technical note, the ICARUS collaboration, another group at Gran Sasso in Italy working with neutrinos produced by CERN, has announced (paper here) that their measurements show neutrinos traveling at a speed consistent with c. Not surprising, and only truly independent measurements like this can really pin down the issues with the OPERA work.

Here is a beautiful new paper by the Manoharan group at Stanford. By arranging spatially ordered arrays of CO molecules on a copper surface, they can manipulate surface states in a way that produces dispersion relations (the relationship between energy and momentum for electrons) with the same kinds of features seen in graphene. While I haven't had a chance to read this in detail yet, it is very slick, and it makes explicit how real-space distortions of the lattice are mathematically equivalent to effective electric and magnetic fields for the charge carriers confined to that 2d environment. It's also a great demonstration that the motion of charge carriers in a condensed matter environment depends on the spatial arrangement of the potential energy, rather than on the microscopic details of what produces it. Here, the electrons are not carbon p electrons feeling the "chickenwire" potential energy of the carbon atom lattice in graphene. Rather, the electrons are those that live in the copper surface state, and they feel a designer "chickenwire" potential energy due to the arrangement of CO molecules on the copper surface. However, the net effect is the same. Very pretty. (Still makes me wonder a bit about the details, though.... At the end of the day, electrons have to scatter out of that surface state and into the bulk for the STM measurement to work, and yet that process has to be sufficiently weak that it doesn't screw up the surface state much. Very fortunate that the numbers happen to work!)
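To put the key point in equation form (this is the standard textbook picture, not anything specific to the paper): an ordinary surface-state electron has a parabolic dispersion, while the "designer" honeycomb potential produces the linear, cone-shaped dispersion characteristic of graphene near its Dirac points,

```latex
E(\mathbf{k}) = \frac{\hbar^2 k^2}{2m^*}
\quad \longrightarrow \quad
E(\mathbf{k}) \approx \pm\, \hbar v_F\, |\mathbf{k} - \mathbf{K}| ,
```

where v_F is an effective Fermi velocity that plays the role of a "speed of light" for the carriers.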

Finally, here is a cool, fun project, using nanofab tools to make art (too small to see with the unaided eye). Sameer Walavalkar did his PhD with the well-known nano group of Axel Scherer at Caltech. This kind of creative outlet is another way to do outreach, and it's a heck of a lot cooler than many other approaches.

Saturday, March 10, 2012

Mini update

I am out on a brief break, but I would be remiss if I didn't point out this exciting result. The investigators have managed to make a light-emitting diode with greater than 100% electrical efficiency when operated just right. The trick is, the LED gets the energy for the "extra" photons from the temperature difference between the LED and its surroundings. Basically it's a combination LED and heat engine. Very clever. I wonder if there are some entropy restrictions that come into play, particularly if the final photon state is, e.g., the macroscopically occupied state of a laser cavity.
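Back-of-the-envelope version of why the efficiency can exceed 100% (this is my reading of the result, not a substitute for the paper): each emitted photon carries off energy ħω, while the electrical input per photon is only eV. Run the device at a bias eV < ħω, and the wall-plug efficiency

```latex
\eta \;\equiv\; \frac{P_{\mathrm{light}}}{IV} \;\approx\; \frac{\hbar\omega}{eV} \;>\; 1
\qquad \text{when } eV < \hbar\omega ,
```

exceeds unity, with the shortfall ħω - eV per photon supplied as heat absorbed from the lattice. Energy is conserved; it's the heat-engine bookkeeping that balances the ledger, which is why the entropy question above seems like the right one to ask.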

Tuesday, March 06, 2012

NSF - proposal compliance

This is for everyone out there who submits proposals to the Division of Materials Research, and more broadly, to the National Science Foundation. Here's some context for those who don't know the story. The NSF has a Grant Proposal Guide that spells out, in detail, the proper content and formatting for proposals. You can understand why they do this, particularly with regard to things like font size. There's a 15-page limit on the "Project Description" part of a proposal, and if they didn't specify a font size and margins, there would be people trying to game the system by submitting proposals in 6-pt unreadable font with 1 cm margins. Historically, however, NSF has erred on the side of latitude about the minutiae. For example, they have never really been aggressive about policing whether the bibliographic references are perfectly formatted.

That's why this news came as a surprise: as part of a new policy, starting this past fall, DMR is taking a basically zero-tolerance approach to compliance with the Grant Proposal Guide. That means, for example, that any letter of collaboration included with a proposal can only say, in effect, "I agree to do the tasks listed in the Project Description". Anything more (e.g., context about the collaborator's expertise, or mention that this continues an existing collaboration) is no longer allowed, and would be cause for either deletion of the letter or outright rejection of the proposal without review. This new policy also means, and this is the scary part, that your references have to be perfectly formatted - leaving out titles, or leaving out the second page number, or using "et al." instead of full author lists - any of these can lead to a proposal being rejected without review. I heard this firsthand from a program officer. Imagine spending weeks writing a proposal, and having it get bounced because you used the wrong setting in BibTeX or EndNote.
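Given that the stakes are now "rejection without review," it may be worth running even a crude automated sanity check on your bibliography before submission. Here's a rough sketch of the sort of thing I mean (the file name is made up, and this obviously catches only the most common offenses):

```python
# Rough sketch: flag BibTeX entries that might run afoul of a zero-tolerance
# reading of the Grant Proposal Guide.  The file name is hypothetical.
import re

with open("proposal_refs.bib") as f:
    entries = re.split(r"(?=@\w+\{)", f.read())[1:]  # crude split on entry headers

for entry in entries:
    key = re.match(r"@\w+\{([^,]+),", entry).group(1)
    problems = []
    if not re.search(r"^\s*title\s*=", entry, re.IGNORECASE | re.MULTILINE):
        problems.append("missing title")
    if re.search(r"et al", entry, re.IGNORECASE):
        problems.append("'et al.' instead of a full author list")
    pages = re.search(r"pages\s*=\s*[{\"]([^}\"]+)", entry, re.IGNORECASE)
    if pages and "-" not in pages.group(1):
        problems.append("possibly truncated page range: " + pages.group(1))
    if problems:
        print(key + ": " + "; ".join(problems))
```

No script substitutes for actually reading the Guide, of course, but it beats losing a proposal to a missing page number.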

We can have a vigorous discussion in the comments about whether this policy makes much sense. In the meantime, though, I think it's very important that people be aware of this change. The bottom line: Scrupulously follow the Grant Proposal Guide. Cross every "t" and dot every "i".

Please spread this information - if one division of NSF is doing this, you can bet that it will spread, and you don't want to be the one whose proposal gets bounced.

Sunday, March 04, 2012

March Meeting last day and wrap-up

Not too much to report from the final day of the March Meeting. Lots of good conversations with colleagues, though I never did get a chance to sit down with a couple of folks I'd wanted to see. Ahh well.

I split most of my time between two invited sessions. The first of these was on the unusual properties of the nu=5/2 fractional quantum Hall state. This may sound very narrow and esoteric, but it is actually quite profound. A good review of the whole topic in more generality is here. At a very particular value of perpendicular magnetic field (related to the number of charge carriers per square centimeter), the electrons in a 2d layer in GaAs/AlGaAs semiconductor structures apparently condense into a really weird state. The lowest energy excitations of this state, its quasiparticles, have very strange properties. First, they have an effective electronic charge of 1/4 e. Second, when two of these fractionally charged quasiparticles are moved around each other to swap positions, the whole quantum mechanical state of the system changes (to another state with the same energy as the original), in a way much more complex than just picking up a phase factor (which would be -1 if the quasiparticles acted like ordinary electrons). Somehow the detailed history of winding the particles around each other is supposedly encoded in the many-body state itself. Quasiparticles with this bizarre property are said to obey "non-Abelian statistics". To date, there has not been an experimental "smoking gun" demonstrating these weird properties unambiguously. My postdoc mentor, Bob Willett, gave a very data-heavy talk showing persuasive evidence for consistency with a number of the relevant theory predictions in this system. Following him, Woowon Kang of the University of Chicago showed other data that also looks consistent with some of these ideas (though I'm no expert).
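To make "much more complex than just picking up a phase factor" concrete (this is the standard framing, not anything specific to these talks): exchanging two ordinary fermions multiplies the many-body wavefunction by -1, and exchanging Abelian anyons multiplies it by some phase factor. For non-Abelian quasiparticles, the exchange instead acts as a matrix that rotates the system within a set of degenerate ground states,

```latex
\Psi \to -\Psi \;\; (\text{fermions}), \qquad
\Psi \to e^{i\theta}\,\Psi \;\; (\text{Abelian anyons}), \qquad
\Psi_a \to \sum_b U_{ab}\,\Psi_b \;\; (\text{non-Abelian}),
```

and since matrices generally don't commute, the order of exchanges matters - that's the precise sense in which the winding history is encoded in the many-body state.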

The other invited session dealt with the theory behind the transport of electrons and ions in nanoscale systems. Unfortunately I missed the beginning (since I was seeing the other talks above), but I did get to hear a neat discussion by Kirk Bevan of McGill University about the physics of electromigration. Electromigration is the mechanism by which flowing electrons can scatter off defects and grain boundaries, dumping momentum into atoms and pushing them around.
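The usual shorthand for this (a standard phenomenological expression, not something I'm attributing to Bevan's talk specifically) lumps the momentum transfer into an effective force on an ion in a current-carrying wire,

```latex
\mathbf{F} \;=\; Z^{*} e\, \mathbf{E} ,
```

where E is the local electric field and the effective valence Z* bundles together the direct electrostatic force on the ion and the (often dominant, oppositely directed) "electron wind" contribution from the scattering of carriers.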

Final suggestions for the APS:
1) Don't have the small rooms arranged so that getting to seats in the front requires blocking the projector. The result is that the front six rows or so remain almost completely empty, while people pile up in the back of the rooms.

2) Would it really be that hard to have wireless internet access that doesn't suck? Are there no convention centers that can really support this?

3) Having a big bio presence at the meeting and then scheduling it directly opposite the Biophysical Society meeting seems odd.

4) Every year, there is an electronic letter-writing or petition campaign to support federal funding of research. That's fine and dandy, but is there any way we could try to get some representative Congress-critters to come hear a session, perhaps one of the fun, general invited sessions, or one about industrially relevant research? Remember, next year in Baltimore is quite close to DC....

Friday, March 02, 2012

March Meeting day 3 (for me)

Yesterday I spent a fair bit of time seeing specialized talks related to my group's research. In the contributed session in the morning, I saw a couple of talks by the theory group of Kevin Ingersent at the University of Florida. There are two basic theoretical approaches to describing electronic transport through a molecule. One is to try to do realistic quantum chemistry calculations about specific molecular orbitals and how a molecule couples electronically to metal electrodes. A complementary tactic is to construct a mathematical model that you think contains the essential physics (e.g., treat the molecule as a "dot" with two electronic levels, each coupled to generic conduction electrons in the leads; then add in a single, local harmonic vibrational mode with some coupling between the level populations and the amplitude of the vibration, etc.). These two schemes correspond well with approaches to bulk materials: realistic electronic structure calculations vs. construction of model Hamiltonians. Ingersent's group takes the latter approach, and it looks like there is even richer physics buried in single-impurity junctions than I'd previously appreciated.
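For concreteness, the kind of model Hamiltonian sketched in words above is, schematically, an Anderson-Holstein form (this is the generic structure; the variants discussed in the talks differ in the details):

```latex
H = \sum_{\sigma} \varepsilon_d\, n_{d\sigma}
  + U\, n_{d\uparrow} n_{d\downarrow}
  + \sum_{k\sigma} \varepsilon_k\, c^{\dagger}_{k\sigma} c_{k\sigma}
  + \sum_{k\sigma} \bigl( V_k\, c^{\dagger}_{k\sigma} d_{\sigma} + \mathrm{h.c.} \bigr)
  + \hbar\omega_0\, a^{\dagger} a
  + \lambda \bigl( a + a^{\dagger} \bigr) \sum_{\sigma} n_{d\sigma} ,
```

with a molecular level ("dot") of energy ε_d, on-site Coulomb repulsion U, conduction electrons in the leads, tunneling V_k, a local vibrational mode of frequency ω0, and an electron-vibration coupling λ that ties the level occupation to the oscillator displacement - exactly the ingredients listed above.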

In the same session, there were some nice experimental talks from Latha Venkataraman's group at Columbia. Recently, her students have seen that it's possible to create comparatively good contacts between molecules and metals, with current carried by a single quantum channel with a transmission of about 90%. This is in contrast to the more common situation, where the transmission is more like 0.1%. She's also started doing single-molecule measurements of thermopower (the Seebeck coefficient), where you apply a temperature gradient across a molecule and look at the resulting voltage difference that shows up. Cool data, though thermal transport at these scales is very challenging.
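Those transmission numbers translate directly into conductances via the Landauer picture (a standard result; the percentages are from the talks as I heard them): for a single spin-degenerate channel with transmission τ,

```latex
G = \frac{2e^2}{h}\,\tau \;\equiv\; G_0\,\tau ,
\qquad G_0 \approx 77.5\ \mu\mathrm{S} \approx (12.9\ \mathrm{k}\Omega)^{-1} ,
```

so τ ≈ 0.9 puts the junction at nearly the full conductance quantum, while the more typical τ ~ 10^-3 leaves it up in the megaohm range.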

Later in the day I heard some nice invited talks. Jean-Marc Triscone gave a nice presentation on the properties of the two-dimensional electron gas that shows up at the interface between strontium titanate and lanthanum aluminate (STO/LAO). This field of oxide heterostructures has become very popular, and it encompasses all sorts of rich physics, including coexistent superconductivity and magnetism. Any topic that gets to talk about a "polarization catastrophe" has to be good.

In another invited session, Cyrus Hirjibehedin talked in detail about Kondo physics in single magnetic atoms on very thin insulating layers, as probed by STM. Dan Ralph gave an extremely clear talk (via iPhone from Cornell, due to inclement weather) on Kondo physics in single-molecule junctions, with their particular experimental twist of being able to stretch or squish the junctions in situ. Very neat. There are some lingering technical points in such structures that need further examination by the community.

I did not, unfortunately, see Leo Kouwenhoven's ballyhooed (here and here) talk about Majorana fermions. I need to read more about the particular work before I can offer any intelligent commentary.