Saturday, July 30, 2016

Ask me something.

I realized that I haven't had an open "ask me" post in almost two years.  Is there something in particular you'd like me to write about?  As we head into another academic year, are there matters of interest to (grad or undergrad) students?

Sunday, July 24, 2016

Dark matter, one more time.

There is strong circumstantial evidence that there is some kind of matter in the universe that interacts with ordinary matter via gravity, but is otherwise not readily detected - it is very hard to explain things like the rotation rates of galaxies, the motion of star clusters, and features of the large-scale structure of the universe without dark matter.   (The most discussed alternative would be some modification to gravity, but given the success of general relativity at explaining many things, including gravitational radiation, this seems less and less likely.)  A favorite candidate for dark matter would be some as-yet undiscovered particle or class of particles that would have to be electrically neutral (dark!) and would interact only very weakly, if at all, beyond the gravitational attraction.
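
To make the rotation-curve argument concrete: a star in a circular orbit of radius \(r\) about the center of a galaxy obeys

\[ \frac{v^{2}}{r} = \frac{G M(r)}{r^{2}} \quad \Longrightarrow \quad v(r) = \sqrt{\frac{G M(r)}{r}} , \]

where \(M(r)\) is the mass enclosed within radius \(r\).  If essentially all the mass were the luminous matter concentrated toward the center, \(v\) should fall off like \(1/\sqrt{r}\) in the outskirts; instead, measured rotation curves stay roughly flat out to large \(r\), implying \(M(r) \propto r\) well beyond the visible disk.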

There have been many experiments trying to detect these particles directly.  The usual assumption is that these particles are all around us, and very occasionally they will interact with the nuclei of ordinary matter via some residual, weak mechanism (say higher order corrections to ordinary standard model physics).  The signature would be energy getting dumped into a nucleus without necessarily producing a bunch of charged particles.   So, you need a detector that can discriminate between nuclear recoils and charged particles.  You want a lot of material, to up the rate of any interactions, and yet the detector has to be sensitive enough to see a single event, and you need pure enough material and surroundings that a real signal wouldn't get swamped by background radiation, including that from impurities.  The leading detection approaches these days use sodium iodide scintillators (DAMA), solid blocks of germanium or silicon (CDMS), and liquid xenon (XENON, LUX, PandaX - see here for some useful discussion and links).
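
For a sense of the energy scale these detectors are chasing, here is a back-of-the-envelope estimate (with a generic 100 GeV benchmark particle and round numbers of my choosing, not parameters from any particular experiment).  For elastic scattering off a nucleus of mass \(m_{N}\), the maximum recoil energy is \(E_{R,\mathrm{max}} = 2 \mu^{2} v^{2}/m_{N}\), with \(\mu\) the reduced mass of the pair:

    # Back-of-the-envelope maximum nuclear recoil energy for elastic
    # dark-matter/nucleus scattering.  Benchmark numbers are illustrative only.
    m_chi = 100.0          # assumed dark matter particle mass, GeV/c^2
    m_N   = 122.0          # xenon nucleus (A ~ 131), GeV/c^2
    beta  = 220e3 / 3e8    # typical galactic speed, ~220 km/s, as a fraction of c

    mu = m_chi * m_N / (m_chi + m_N)               # reduced mass, GeV/c^2
    E_max_keV = 2 * mu**2 * beta**2 / m_N * 1e6    # convert GeV to keV

    print(f"maximum recoil energy ~ {E_max_keV:.0f} keV")   # tens of keV

Recoil energies in the tens of keV are why these experiments need keV-scale thresholds on kilogram-to-ton quantities of ultra-pure material.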

I've been blogging long enough now to have seen rumors about dark matter detection come and go.  See here and here.  Now in the last week both LUX and PandaX have reported their latest results, and they have found nothing - no candidate events at all - after their recent experimental runs.  This is in contrast to DAMA, who for years have been seeing some sort of signal that varies with the seasons.  See here for some discussion.  The lack of any detection at all is interesting.  There's always the possibility that whatever dark matter exists really does interact with ordinary matter only via gravity - perhaps all other interactions are somehow suppressed by some symmetry.  Between the lack of dark matter particle detection and the apparent lack of exotica at the LHC so far, there is a lot of head-scratching going on....
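
For reference, the size of the seasonal effect one would naively expect is set by simple kinematics: the Earth's orbital velocity (about 30 km/s, of which roughly half lies along the Sun's direction of motion through the galaxy) alternately adds to and subtracts from the Sun's roughly 230 km/s galactic speed, so the mean speed of any dark matter "wind" through a detector should modulate over the year by very roughly

\[ \frac{\Delta v}{\langle v \rangle} \sim \frac{(30~\mathrm{km/s}) \cos 60^{\circ}}{230~\mathrm{km/s}} \approx 7\% , \]

peaking around the beginning of June - a few-percent annual wiggle on top of the average event rate, which is the kind of signature DAMA claims to see.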

Saturday, July 16, 2016

Impact factors and academic "moneyball"

For those who don't know the term:  Moneyball is the title of a book and a movie about the 2002 Oakland Athletics baseball team, a team with a payroll in the bottom 10% of major league baseball at the time.   They used a data-intensive, analytics-based strategy called sabermetrics to find "hidden value" and exploit "market inefficiencies", putting together a very competitive team despite their limited financial resources.   A recent (very fun if you're a baseball fan) book along the same lines is this one.  (It also has a wonderful discussion of confirmation bias!)

A couple of years ago there was a flurry of articles (like this one and the academic paper on which it was based) about whether a similar data-driven approach could be used in scientific academia - to predict success of individuals in research careers, perhaps to put together a better department or institute (a "roster") by getting a competitive edge at identifying likely successful researchers.

The central problems in trying to apply this philosophy to academia are the lack of really good metrics and the timescales involved in research careers.  Baseball is a paradise for people who love statistics.  The rules have been (largely) unchanged for over a hundred years; the seasons are very long (formerly 154 games, now 162); and in any game an everyday player can get multiple opportunities to show their offensive or defensive skills.   With modern tools it is possible to get quantitative information about every single pitched ball and batted ball.  As a result, the baseball stats community has come up with a huge number of quantitative metrics for evaluating performance in different aspects of the game, and they have a gigantic database against which to test their models.  They have even devised metrics to try to normalize out the effects of the local environment (park-neutral or park-adjusted stats).

[Figure: Fig. 1, top panel, from this article; x-axis = number of citations.  The mean of the distribution is strongly affected by the outliers.]
In scientific research, there are very few metrics (publications, citation counts, the impact factors of the journals in which articles appear), and the total historical record available on which to base an evaluation of an early-career researcher is practically the definition of what a baseball stats person would call "small sample size".   An article in Nature this week highlights the flaws of impact factor as a metric.  I've written before about this (here and here), pointing out that impact factor - the mean number of citations per paper in a journal over some window - is a lousy statistic because it's dominated by outliers, and now I finally have a nice graph (fig. 1 in the article; top panel shown here) to illustrate this.
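
To see why a mean is such a poor summary of a heavy-tailed distribution, here is a quick numerical sketch (a lognormal toy distribution of my own choosing, not the actual data behind the figure):

    import random

    # Toy model: draw "citation counts" from a heavy-tailed (lognormal)
    # distribution, qualitatively mimicking real citation statistics.
    random.seed(0)
    citations = sorted(int(random.lognormvariate(1.0, 1.5)) for _ in range(10000))

    mean = sum(citations) / len(citations)
    median = citations[len(citations) // 2]

    # The mean (which is what an impact factor reports) sits well above the
    # median - a handful of highly cited papers drag it upward.
    print(f"mean = {mean:.1f}, median = {median}, max = {citations[-1]}")

In a distribution like this, the typical (median) paper performs far below the journal's impact factor.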

So, in academia, the tantalizing fact is that there is almost certainly a lot of "hidden value" out there missed by traditional evaluation approaches.  Just relying on pedigree (where did so-and-so get their doctorate?) and high-impact publications (person A must be better than person B because person A published a paper as a postdoc in a glossy high-impact journal) is bound to miss some people who could be outstanding researchers.  However, the lack of good metrics, the small sample sizes, the long timescales associated with research, and the enormous influence of the local environment (it's just easier to do cutting-edge work at Harvard than at Northern Michigan) all mean that it's incredibly hard to come up with an analytic approach that finds these people.

Wednesday, July 06, 2016

Keeping your (samples) cool is not always easy.

Very often in condensed matter physics we like to do experiments on materials or devices in a cold environment.  As has been appreciated for more than a century, cooling materials down often makes them easier to understand, because at low temperatures there is not enough thermal energy bopping around to drive complicated processes.  There are fewer lattice vibrations.  Electrons settle down more into their lowest available states.  The spread in available electron energies is proportional to \(k_{\mathrm{B}}T\), so any electronic measurement as a function of energy gets sharper-looking at low temperatures.
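
For a sense of scale (standard constants, nothing specific to any particular experiment):

    # Thermal energy scale k_B * T at a few representative temperatures.
    K_B_EV_PER_K = 8.617e-5   # Boltzmann constant, eV/K

    for T in (4.2, 77.0, 300.0):   # liquid He, liquid N2, room temperature
        print(f"T = {T:6.1f} K  ->  k_B T = {K_B_EV_PER_K * T * 1e3:6.2f} meV")

Room temperature smears electronic features over about 26 meV, while at liquid helium temperature (4.2 K) the smearing is only about 0.36 meV - which is why cold measurements look so much sharper.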

Sometimes, though, you have to dump energy into the system to do the study you care about.  If you want to measure electronic conduction, you have to apply some voltage \(V\) across your sample to drive a current \(I\), and that \(I \times V\) power shows up as heat.  In our case, we have spent the last few years trying to do simultaneous electronic measurements and optical spectroscopy on metal junctions containing one or a few molecules (see here).   What we are striving toward is doing inelastic electron tunneling spectroscopy (IETS - see here) at the same time as molecular-scale Raman spectroscopy (see here for example).   The tricky bit is that IETS works best at really low temperatures (say 4.2 K), where the electronic energy spread is small (hundreds of micro-electron-volts), but the optical spectroscopy works best when the structure is illuminated by a couple of mW of laser power focused into a ~ 1.5 micron diameter spot.
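
To see why a couple of mW is a big perturbation in this context, consider the intensity at the focus (round numbers, not our exact operating conditions):

    import math

    # Rough intensity of a laser focused to a micron-scale spot.
    power_W = 2e-3              # ~2 mW incident power
    spot_diameter_m = 1.5e-6    # ~1.5 micron diameter spot

    area_m2 = math.pi * (spot_diameter_m / 2)**2
    intensity_W_per_cm2 = power_W / area_m2 * 1e-4   # W/m^2 -> W/cm^2

    print(f"~ {intensity_W_per_cm2:.1e} W/cm^2")     # ~ 1e5 W/cm^2

That is roughly a million times the intensity of sunlight at the Earth's surface (about 0.1 W/cm\(^2\)), concentrated on a nanoscale junction.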

It turns out that the amount of heating you get when you illuminate a thin metal wire (which can be detected in various ways; for example, we can use the temperature-dependent electrical resistance of the wire itself as a thermometer) isn't too bad when the sample starts out at, say, 100 K.  If the sample/substrate starts out at about 5 K, however, even modest incident laser power directly on the sample can heat the metal wire by tens of Kelvin, as we show in a new paper.  How the local temperature changes with incident laser intensity is rather complicated, and we find that we can model it well if the main roadblock at low temperatures is the acoustic mismatch thermal boundary resistance.  This is a neat effect, discussed in detail here.  Vibrational heat transfer between the metal and the underlying insulating substrate is hampered (the boundary resistance grows like \(1/T^{3}\) at low temperatures) by the fact that the speed of sound is very different in the metal and the insulator.   There are a bunch of other complicated issues (this and this, for example) that can also hinder heat flow in nanostructures, but the acoustic mismatch appears to be the dominant one in our case.   The bottom line:  staying cool in the spotlight is hard.  We are working away on some ideas for mitigating this issue.  Fun stuff.
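
Here is a minimal sketch of why the starting temperature matters so much, assuming the boundary conductance per unit area scales as \(a T^{3}\) (so the integrated heat flow across the boundary goes like \(T_{\mathrm{metal}}^{4} - T_{\mathrm{substrate}}^{4}\)); every number below is an illustrative guess, not a value from our paper:

    # Toy steady-state heating across an acoustic-mismatch boundary.
    # Conductance per area g(T) = a * T^3 integrates to give
    # P = (a * A / 4) * (T_metal^4 - T_sub^4).
    A = 1e-12    # assumed metal/substrate contact area, m^2
    a = 50.0     # assumed boundary conductance coefficient, W m^-2 K^-4
    P = 1e-5     # assumed absorbed laser power, W (a fraction of the incident mW)

    for T_sub in (5.0, 100.0):
        T_metal = (T_sub**4 + 4 * P / (a * A))**0.25
        print(f"substrate at {T_sub:5.1f} K -> wire at {T_metal:6.1f} K "
              f"(rise of {T_metal - T_sub:4.1f} K)")

With these made-up numbers, the same absorbed power produces a rise of tens of Kelvin starting from 5 K but only a fraction of a Kelvin starting from 100 K - qualitatively the behavior described above.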

(Note:  I'm doing some travel, so posting will slow down for a bit.)