
Thursday, December 29, 2016

Some optimism at the end of 2016

When the news is filled with bleak items, it's easy to become pessimistic.  Bear in mind, though, that modern communications, the tendency for bad news to grab attention, and the sheer size of the population can really distort perception.  To put that another way, 56 million people die every year (!), but now you are able to hear about far more of them than ever before.  

Let me make a push for optimism, or at least try to put some things in perspective.  There are some reasons to be hopeful.  Specifically, look here, at a site called "Our World in Data", produced at Oxford University.  These folks use actual numbers to point out that this is actually, in many ways, the best time in human history to be alive:
  • The percentage of the world's population living in extreme poverty is at an all-time low (9.6%).
  • The percentage of the population that is literate is at an all-time high (85%), as is the overall global education level.
  • Child mortality is at an all-time low.
  • The percentage of people enjoying at least some political freedom is at an all-time high.
That may not be much comfort to, say, an unemployed coal miner in West Virginia, or an underemployed former factory worker in Missouri, but it's better than the alternative.   We face many challenges, and nothing is going to be easy or simple, but collectively we can do amazing things, like put more computing power in your hand than existed in all of human history before 1950, set up a world-spanning communications network, feed 7B people, detect colliding black holes billions of lightyears away by their ripples in spacetime, etc.  As long as we don't do really stupid things, like make nuclear threats over twitter based on idiots on the internet, we will get through this.   It may not seem like it all the time, but compared to the past we live in an age of wonders.

Tuesday, December 20, 2016

Mapping current at the nanoscale - part 2 - magnetic fields!

A few weeks ago I posted about one approach to mapping out where current flows at the nanoscale, scanning gate microscopy.   I had made an analogy between current flow in some system and traffic flow in a complicated city map.  Scanning gate microscopy would be analogous to recording the flow of traffic in/out of a city as a function of where you chose to put construction barrels and lane closures.  If sampled finely enough, this would give you a sense of where in the city most of the traffic tends to flow.

Of course, that's not how utilities like Google Maps figure out traffic flow maps or road closures.  Instead, applications like that track the GPS signals of cell phones carried in the vehicles.  Is there a current-mapping analogy here as well?  Yes.  There is some "signal" produced by the flow of current, if only you can have a sufficiently sensitive detector to find it.  That is the magnetic field.  Flowing current density \(\mathbf{J}\) produces a local magnetic field \(\mathbf{B}\), thanks to Ampere's law, \(\nabla \times \mathbf{B} = \mu_{0} \mathbf{J}\).
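To get a feel for the size of that "signal", here is a quick back-of-the-envelope sketch in Python, with purely illustrative numbers of my own (a long, thin current path carrying a microamp), nothing specific to the devices discussed below:

# Rough estimate: field a distance r from a long, thin current path,
# B = mu_0 * I / (2 * pi * r).  Numbers are illustrative, not from any paper.
import math

mu_0 = 4e-7 * math.pi           # T*m/A, vacuum permeability
I = 1e-6                        # A; a microamp-scale current, typical of small devices
for r in (100e-9, 1e-6, 10e-6): # detector standoff distances: 100 nm, 1 um, 10 um
    B = mu_0 * I / (2 * math.pi * r)
    print(f"r = {r*1e6:5.2f} um  ->  B ~ {B*1e9:8.2f} nT")
# ~2000 nT at 100 nm, ~200 nT at 1 um, ~20 nT at 10 um - tiny fields,
# which is why you need very sensitive local magnetometers.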
Scanning SQUID microscope image of the x-current density in a GaSb/InAs structure, showing that the current is carried by the edges.  Scale bar is 20 microns.



Fortunately, there now exist several different technologies for performing very local mapping of magnetic fields, and therefore the underlying pattern of flowing current in some material or device.  One older, established approach is scanning Hall microscopy, where a small piece of semiconductor is placed on a scanning tip, and the Hall effect in that semiconductor is used to sense local \(B\) field.

Scanning NV center microscopy to see magnetic fields.  Scale bars are 400 nm.
Considerably more sensitive is the scanning SQUID microscope, where a tiny superconducting loop is placed on the end of a scanning tip and used to detect incredibly small magnetic fields.  As shown in the first figure, it is possible to see when current is carried by the edges of a structure rather than by the bulk of the material, for example.

A very recently developed method is to use the exquisitely magnetic-field-sensitive optical properties of particular defects in diamond, NV centers.  The second figure (from here) shows examples of the kinds of images that are possible with this approach, looking at the magnetic pattern of data on a hard drive, or magnetic flux trapped in a superconductor.  While I have not seen this technique applied directly to current mapping at the nanoscale, it certainly has the needed magnetic field sensitivity.  Bottom line:  It is possible to "look" at the current distribution in small structures at very small scales by measuring magnetic fields.

Saturday, December 17, 2016

Recurring themes in (condensed matter/nano) physics: Exponential decay laws

It's been a little while (ok, 1.6 years) since I made a few posts about recurring motifs that crop up in physics, particularly in condensed matter and at the nanoscale.  Often the reason certain mathematical relationships appear over and over in physics is that they are, deep down, based on underlying assumptions that are very simple.  One example common to all of physics is the idea of exponential decay: some physical property or parameter often ends up having a time dependence proportional to \(\exp(-t/\tau)\), where \(\tau\) is some characteristic timescale.
Buffalo Bayou cistern.  (photo by Katya Horner).

Why is this time dependence so common?  Let's take a particular example.  Suppose we are in the remarkable cistern, shown here, that used to store water for the city of Houston.   If you go on a tour there (I highly recommend it - it's very impressive.), you will observe that it has remarkable acoustic properties.  If you yell or clap, the echo gradually dies out by (approximately) exponential decay, fading to undetectable levels after about 18 seconds (!).  The cistern is about 100 m across, and the speed of sound is around 340 m/s, meaning that in 18 seconds the sound you made has bounced off the walls around 61 times.  Each time the sound bounces off a wall, it loses some percentage of its intensity (stored acoustic energy).

That idea - that in each step the quantity decreases by a fixed fraction of its current size - is the key to exponential decay, in the limit that the change happens continuously from instant to instant rather than via discrete events.    Note that this is also basically the same math that is behind compound interest, though that involves exponential growth.
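Here is a little numerical version of the cistern estimate, just to make the "fixed fraction per bounce" idea concrete.  Calling "undetectable" a drop of 60 dB in intensity is my own arbitrary choice:

# Toy version of the cistern estimate; the 60 dB (factor of 1e-6) threshold
# for "undetectable" is an arbitrary choice on my part.
import math

t_decay = 18.0            # s, time for the echo to fade (from the post)
L = 100.0                 # m, rough size of the cistern
v_sound = 340.0           # m/s
n_bounces = v_sound * t_decay / L           # ~61 wall bounces
frac_left = 1e-6                            # assumed "undetectable" threshold

loss_per_bounce = 1 - frac_left**(1.0 / n_bounces)
tau = -t_decay / math.log(frac_left)        # continuous-decay time constant

print(f"{n_bounces:.0f} bounces, ~{100*loss_per_bounce:.0f}% of the intensity lost per bounce")
print(f"equivalent to exp(-t/tau) with tau ~ {tau:.2f} s")
# Losing a fixed ~20% of the *current* intensity on each bounce is exactly the
# "fixed fraction per step" rule that becomes exp(-t/tau) in the continuum limit.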


Saturday, December 10, 2016

Bismuth superconducts, and that's weird

Many elemental metals become superconductors at sufficiently low temperatures, but not all.  Ironically, some of the normal metal elements with the best electrical conductivity (gold, silver, copper) do not appear to do so.  Conventional superconductivity was explained by Bardeen, Cooper, and Schrieffer in 1957.  Oversimplifying, the idea is that electrons can interact with lattice vibrations (phonons), in such a way that there is a slight attractive interaction between the electrons.  Imagine a billiard ball rolling on a foam mattress - the ball leaves behind it a trailing deformation of the mattress that takes some finite time to rebound, and another nearby ball is "attracted" to the deformation left behind.  This slight attraction is enough to cause pairing between charge carriers in the metal, and those pairs can then "condense" into a macroscopic quantum state with the superconducting properties we know.  The coinage metals apparently have comparatively weak electron-phonon coupling, and can't quite get enough attractive interaction to go superconducting.

Another way you could fail to get conventional BCS superconductivity would be just to have too few charge carriers!  In my ball-on-mattress analogy, if the rolling balls are very dilute, then pair formation doesn't really happen, because by the time the next ball rolls by where a previous ball had passed, the deformation is long since healed.  This is one reason why superconductivity usually doesn't happen in doped semiconductors.

Superconductivity with really dilute carriers is weird, and that's why the result published recently here by researchers at the Tata Institute is exciting.  They were working with bismuth, which is a semimetal in its usual crystal structure, meaning that it has both electrons and holes running around (see here for technical detail), and has a very low concentration of charge carriers, something like \(10^{17}\) per cm\(^{3}\), meaning that the typical distance between carriers is on the order of 30 nm.  That's very far, so conventional BCS superconductivity isn't likely to work here.  However, at about 500 microKelvin (!), the experimenters see (via magnetic susceptibility and the Meissner effect) that single crystals of Bi go superconducting.   Very neat.  
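Here is the quick arithmetic behind that spacing estimate (rounded numbers, just for scale):

# Check of the "carriers are a few tens of nm apart" statement, from a
# carrier density of ~1e17 per cm^3.  Numbers rounded, just for scale.
n = 1e17 * 1e6         # carriers per m^3 (1e17 per cm^3)
d = n ** (-1.0 / 3.0)  # typical inter-carrier spacing, m
print(f"typical spacing ~ {d*1e9:.0f} nm")          # ~ 20-30 nm
# Compare with a good metal like copper, ~8.5e22 electrons per cm^3:
n_cu = 8.5e22 * 1e6
print(f"copper: ~ {n_cu**(-1.0/3.0)*1e9:.2f} nm")   # ~ 0.2 nm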

They achieve these temperatures through a combination of a dilution refrigerator (possible because of the physics discussed here) and nuclear demagnetization cooling of copper, which is attached to a silver heat link that contains the Bi crystals.   This is old-school ultralow temperature physics, where they end up with several kg of copper getting as low as 100 microKelvin.    Sure, this particular result is very far from any practical application, but the point is that this work shows that there likely is some other pairing mechanism that can give superconductivity with very dilute carriers, and that could be important down the line.

Tuesday, December 06, 2016

Suggested textbooks for "Modern Physics"?

I'd be curious for opinions out there regarding available textbooks for "Modern Physics".  Typically this is a sophomore-level undergraduate course at places that offer such a class.  Often these tend to focus on special relativity and "baby quantum", making the bulk of "modern" end in approximately 1930.   Ideally it would be great to have a book that includes topics from the latter half of the 20th century, too, without having them be too simplistic.  Looking around on Amazon, there are a number of choices, but I wonder if I'm missing some diamond in the rough out there by not necessarily using the right search terms, or perhaps there is a new book in development of which I am unaware.   The book by Rohlf looks interesting, but the price tag is shocking - a trait shared by many similarly titled works on Amazon.  Any suggestions?

Saturday, November 26, 2016

Quantum computing - lay of the land, + corporate sponsorship

Much has been written about quantum computers and their prospects for doing remarkable things (see here for one example of a great primer), and Scott Aaronson's blog is an incredible resource if you want more technical discussions.   Recent high profile news this week about Microsoft investing heavily in one particular approach to quantum computation has been a good prompt to revisit parts of this subject, both to summarize the science and to think a bit about corporate funding of research.  It's good to see how far things have come since I wrote this almost ten years ago (!!).

Remember, to realize the benefits of general quantum computation, you need (without quibbling over the details) some good-sized set  (say 1000-10000) of quantum degrees of freedom, qubits, that you can initialize, entangle to create superpositions, and manipulate in deliberate ways to perform computational operations.  On the one hand, you need to be able to couple the qubits to the outside world, both to do operations and to read out their state.  On the other hand, you need the qubits to be isolated from the outside world, because when a quantum system becomes entangled with (many) environmental degrees of freedom whose quantum states you aren't tracking, you generally get decoherence - what is known colloquially as the collapse of the wavefunction.  

The rival candidates for general purpose quantum computing platforms make different tradeoffs in terms of robustness of qubit coherence and scalability.  There are error correction schemes, and implementations that combine several "physical" qubits into a single "logical" qubit that is supposed to be harder to screw up.  Trapped ions can have very long coherence times and be manipulated with great precision via optics, but scaling up to hundreds of qubits is very difficult (though see here for a claim of a breakthrough).  Photons can be used for quantum computing, but since they fundamentally don't interact with each other under ordinary conditions, some operations are difficult, and scaling is really hard - to quote from that link, "About 100 billion optical components would be needed to create a practical quantum computer that uses light to process information."   Electrons in semiconductor quantum dots might be more readily scaled, but coherence is fleeting.   Superconducting approaches are the choices of the Yale and UC Santa Barbara groups.

The Microsoft approach, since they started funding quantum computing research, has always been rooted in ideas about topology, perhaps unsurprising since their effort has been led by Michael Freedman.  If you can encode quantum information in something to do with topology, perhaps the qubits can be more robust to decoherence.  One way to get topology in the mix is to work with particular exotic quantum excitations in 2d that are non-Abelian.  That is, if you take two such excitations and move them around each other in real space, the quantum state somehow transforms itself to remember that braiding, including whether you moved particle 2 around particle 1, or vice versa.  Originally Microsoft was very interested in the \(\nu = 5/2\) fractional quantum Hall state as an example of a system supporting this kind of topological braiding.  Now, they've decided to bankroll the groups of Leo Kouwenhoven and Charlie Marcus, who are trying to implement topological quantum computing ideas using superconductor/semiconductor hybrid structures thought to exhibit Majorana fermions.

It's worth noting that Microsoft is not the only player investing serious money in quantum computing.   Google invested enormously in John Martinis' effort.  Intel has put a decent amount of money into a silicon quantum dot effort practically down the hall from Kouwenhoven.  This kind of industrial investment does raise some eyebrows, but as long as it doesn't kill publication or hamstring students and postdocs with weird constraints, it's hard to see big downsides.  (Of course, Uber and Carnegie Mellon are a cautionary example of how this sort of relationship may not work out well for the relevant universities.)

Monday, November 21, 2016

More short items, incl. postdoc opportunities

Some additional brief items:

Wednesday, November 16, 2016

short items

A handful of brief items:

  • A biologist former colleague has some good advice on writing successful NSF proposals that translates well to other disciplines and agencies.
  • An astronomy colleague has a nice page on the actual science behind the much-hyped supermoon business.
  • Lately I've found myself recalling a book that I read as part of an undergraduate philosophy of science course twenty-five years ago, The Dilemmas of an Upright Man.  It's the story of Max Planck and the compromises and choices he made while trying to preserve German science through two world wars.  As the Nazis rose to power and began pressuring government scientific institutions such as the Berlin Academy and the Kaiser Wilhelm Institutes, Planck decided to remain in leadership roles and generally not speak out publicly, in part because he felt that if he gave up his position, the only people left behind would be awful ones like the ardent Nazi Johannes Stark.   These decisions may have preserved German science, but they broke his relationship with Einstein, who never spoke to Planck again from 1937 until Planck's death in 1947.  It's a good book and very much worth reading.

Wednesday, November 09, 2016

Lenses from metamaterials

As alluded to in my previous posts on metamaterials and metasurfaces, there have been some recently published papers that take these ideas and do impressive things.

  • Khorasaninejad et al. have made a metasurface out of a 2d array of precisely designed TiO2 posts on a glass substrate.  The posts vary in size and shape, and are carefully positioned and oriented on the substrate so that, for light incident from behind the glass, normal to the glass surface, and centered on the middle of the array, the light is focused to a spot 200 microns above the array surface.  Each little TiO2 post acts like a sub-wavelength scatterer and imparts a phase on the passing light, so that the whole array together acts like a converging lens (see the little numerical sketch at the end of this post).  This is very reminiscent of the phased array I'd mentioned previously.  For a given array, different colors focus to different depths (chromatic aberration).  Impressively, the arrays are designed so that there is no polarization dependence of the focusing properties for a given color.    
  • Hu et al. have made a different kind of metasurface, using plasmonically active gold nanoparticles on a glass surface.  The remarkable achievement here is that the authors have used a genetic algorithm to find a pattern of nanoparticle shapes and sizes that somehow, through phased array magic, produces a metasurface that functions as an achromatic lens - different visible colors (red, green, blue) normally incident on the array focus to the same spot, albeit with a short focal length of a few microns. 
  • Finally, in more of a 3d metamaterial approach, Krueger et al. have leveraged their ability to create 3d designer structures of porous silicon.  The porous silicon frameworks have an effective index of refraction at the desired wavelength that is set by the porosity.  By controllably varying the porosity as a function of distance from the optical axis of the structure, these things can act as lenses.  Moreover, because of designed anisotropy in the framework, they can make different polarizations of incident light experience different effective refractive indices and therefore have different focal lengths.  Fabrication here is supposed to be considerably simpler than the complicated e-beam lithography needed to accomplish the same goal with 2d metasurfaces.
These are just papers published in the last couple of weeks!  Clearly this is a very active field.
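As promised above, here is a minimal numerical sketch of the phase profile a flat lens has to impose on transmitted light.  The wavelength, focal length, and radii are illustrative choices of mine, not the actual design parameters from these papers:

# Phase a flat lens must impart at radius r so that light from every point on
# the surface arrives at the focus in phase.  Parameters are illustrative.
import math

lam = 532e-9      # m, green light (assumed)
f = 200e-6        # m, focal length, like the 200 micron focus mentioned above
radii = [0, 5e-6, 10e-6, 20e-6, 40e-6]   # m, distance from the lens center

for r in radii:
    # Extra path length from radius r to the focal point, relative to r = 0,
    # must be cancelled by the phase each scatterer imparts:
    phi = (2 * math.pi / lam) * (f - math.sqrt(r**2 + f**2))
    print(f"r = {r*1e6:5.1f} um   required phase = {phi:9.1f} rad "
          f"(= {math.degrees(phi) % 360:6.1f} deg mod 360)")
# Each TiO2 post (or gold nanoparticle) is a tiny phase shifter; choosing its
# size/shape/orientation at each radius to hit this profile is what makes the
# array act like a converging lens.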

Friday, November 04, 2016

What is a metasurface?

As I alluded in my previous post, metamaterials are made out of building blocks, and thanks to the properties of those building blocks and their spatial arrangement, the aggregate system has, on longer distance scales, emergent properties (e.g., optical, thermal, acoustic, elastic) that can be very different from the traits of the individual building blocks.  Classic examples are opal and butterfly wing, both of which are examples of structural coloration.  The building blocks (silica spheres in opal; chitin structures in butterfly wing) have certain optical properties, but by properly shaping and arranging them, the metamaterial comprising them has brilliant iridescent color very different from that of bulk slabs of the underlying material.

Controlling the relative phases between antennas in an array lets you steer radiation.  By Davidjessop - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=48304978
This works because of wave interference of light.  Light propagates more slowly in a dielectric (\(c/n(\omega)\), where \(n(\omega)\) is the frequency-dependent index of refraction).  Light propagating through some thickness of material will pick up a phase shift relative to light that propagates through empty space.  Moreover, additional phase shifts are picked up at interfaces between dielectrics.  If you can control the relative phases of light rays that arrive at a particular location, then you can set up constructive interference or destructive interference.

This is precisely the same math that gives you diffraction patterns.  You can also do this actively with radio transmitter antennas.  If you set up an antenna array and drive each antenna at the same frequency but with a controlled phase relative to its neighbors, you can tune where the waves constructively or destructively interfere.  This is the principle behind phased arrays.
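Here is a bare-bones numerical sketch of that phased-array idea, with arbitrary illustrative numbers of my own (eight antennas at half-wavelength spacing):

# N identical sources in a line, spacing d, each driven with an extra phase
# step dphi relative to its neighbor.  The far-field intensity peaks where the
# per-element path difference cancels that phase step.  Numbers are arbitrary.
import cmath, math

N = 8                 # number of antennas
lam = 0.10            # m, 3 GHz-ish radio waves (assumed)
d = lam / 2           # element spacing
k = 2 * math.pi / lam

def intensity(theta_deg, dphi):
    """Relative far-field intensity at angle theta from broadside."""
    theta = math.radians(theta_deg)
    af = sum(cmath.exp(1j * n * (k * d * math.sin(theta) + dphi)) for n in range(N))
    return abs(af)**2 / N**2

dphi = -k * d * math.sin(math.radians(20))   # phase step chosen to steer the beam to ~20 degrees
for ang in (0, 10, 20, 30):
    print(f"theta = {ang:3d} deg   relative intensity = {intensity(ang, dphi):.3f}")
# The peak sits at 20 degrees: changing only the *phases* steers the beam.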

An optical metasurface is an interface that has structures on it that impose particular phase shifts on light that either is transmitted through or reflected off the interface.  Like a metamaterial and for the same wave interference reasons, the optical properties of the interface on distance scales larger than those structures can be very different than those of the materials that constitute the structures.  Bear in mind, the individual structures don't have to be boring - each by itself could have complicated frequency response, like acting as a dielectric or plasmonic resonator.  We now have techniques that allow rich fabrication on surfaces with a variety of materials down to scales much smaller than the wavelength of visible light, and we have tremendous computational techniques that allow us to calculate the expected optical response from such structures.  Put these together, and those capabilities enable some pretty amazing optical tricks.  See here (pdf!) for a good slideshow covering this topic.



Tuesday, November 01, 2016

What is a metamaterial?

(This is part of a lead-in to a brief discussion I'd like to do of two papers that just came out.) The wikipedia entry for metamaterial is actually rather good, but doesn't really give the "big picture".  As you will hopefully see, that wording is a bit ironic.

"Ordinary" materials are built up out of atoms or molecules.  The electronic, optical, and mechanical properties of a solid or liquid come about from the properties of the individual constituents, and how those constituents are spatially arranged and coupled together into the whole.   On the length scale of the constituents (the size of atoms, say, in a piece of silicon), the local properties like electron density and local electric field vary enormously.  However,  on length scales large compared to the individual constituent atoms or molecules, it makes sense to think of the material as having some spatially-averaged "bulk" properties, like an index of refraction (describing how light propagates through the material), or a magnetic permeability (how the magnetic induction \(\mathbf{B}\) inside a material responds to an externally applied magnetic field \(\mathbf{H}\)), or an elastic modulus (how a material deforms in response to an applied stress).

A "metamaterial" takes this idea a step further.  A metamaterial is build up out of some constituent building blocks such as dielectric spheres or metallic rods.  The properties of an individual building block arise as above from their own constituent atoms, of course.  However, the properties of the metamaterial, on length scales long compared to the size of the building blocks, are emergent from the properties of those building blocks and how the building blocks are then arranged and coupled to each other.   The most common metamaterials are probably dielectric mirrors, which are a subset of photonic band gap systems.  You can take thin layers of nominally transparent dielectrics, stack them up in a periodic way, and end up with a mirror that is incredibly reflective at certain particular wavelengths - an emergent optical property that is not at all obvious at first glance from the properties of the constituent layers.

Depending on what property you're trying to engineer in the final metamaterial, you will need to structure the system on different length scales.  If you want to mess with optical properties, generally the right ballpark distance scale is around a quarter of the wavelength (within the building block constituent) of the light.  For microwaves, this can be the cm range; for visible light, it's tens to hundreds of nm.  If you want to make an acoustic metamaterial, you need to make building blocks on a scale comparable to a fraction of the wavelength of the sound you want to manipulate.  Mechanical metamaterials, which have large-scale elastic properties far different than those of their individual building blocks, are trickier, and should be thought about as something more akin to a small truss or origami framework.  These differ from optical and acoustic metamaterials because the latter rely crucially on interference phenomena between waves to build up their optical or acoustic properties, while structural systems rely on local properties (e.g., bending at vertices).

Bottom line:  We now know a lot about how to build up larger structures from smaller building blocks, so that the resulting structures can have very different and interesting properties compared to those of the constituents themselves.

Friday, October 21, 2016

Measuring temperature at the milliKelvin scale

How do we tell the temperature of some piece of material?  I've written about temperature and thermometry a couple of times before (here, here, here).  For ordinary, every-day thermometry, we measure some physical property of a material or system where we have previously mapped out its response as a function of temperature.  For example, near room temperature liquid mercury expands slightly with increasing \(T\).  Confined in a thin glass tube, the length of a mercury column varies approximately linearly with changes in temperature, \(\delta \ell \sim \delta T\).  To do primary thermometry, we don't want to have some empirical calibration - rather, we want to measure some physical property for which we think we have a complete understanding of the underlying physics, so that \(T\) can be inferred directly from the measured quantity and our theoretical expressions, with no adjustable parameters.  This is particularly important at very low temperatures, thousandths of a Kelvin above absolute zero, where the number of things that we can measure is comparatively limited, and tiny flows of power (from our measurements, say) can actually produce large percentage temperature changes.

This recent paper shows a nice example of applying three different primary thermometry techniques to a single system, a puddle of electrons confined in 2d at a semiconductor interface, at about 6 mK.  This is all the more impressive because of how easy it is to inadvertently heat up electrons in such 2d layers.  All three techniques rely on our understanding of how electrons behave at low temperatures.  According to our theory of electrons in metals (which these 2d electrons are, as far as physicists are concerned), as a function of energy, electrons are spread out in a characteristic way, the Fermi-Dirac distribution.  From the theory side, we know this functional form exactly (figure from that wikipedia link).  At low temperatures, all of the electronic states below the highest filled state are full, and all above are empty.  As \(T\) is increased, the electrons smear out into higher energy states, as shown.  The three effects measured in the experiment all depend on \(T\) through this electronic distribution:
Fig. 2 from the paper, showing excellent, consistent agreement between experiment and theory, with electron temperatures of ~6 mK.
  • Current noise in a quantum point contact - the fluctuations of the current about its average value.  For this particular device, where conduction takes place through a small, controllable number of quantum channels, we think we understand the situation completely.  There is a closed-form expression for what the noise should do as a function of average current, with temperature as the only adjustable parameter (once the conduction has been measured).
  • "Coulomb blockade" in a quantum dot.  Conduction through a puddle of electrons connected to input and output electrodes by tunneling barriers ("pinched off" versions of the point contacts) shows a very particular form of current-voltage characteristic that is tunable by a nearby gate electrode.   The physics here is that, because of the mutual repulsion of electrons, it takes energy (supplied by either a voltage source or temperature) to get charge to flow through the puddle.  Again, once the conduction has been measured, there is a closed-form expression for what the conductance should do as a function of that gate voltage.
  • "Environmental" Coulomb blockade in a quantum dot.  This is like the situation above, but with one of the tunnel barriers replaced by a controlled resistor.  Again, there is an expression for the particular shape of the \(I-V\) curve where the adjustable parameter is \(T\).  
As shown in the figure (Fig. 2 from the paper - open access and also available on the arxiv), the theoretical expressions do a great job of fitting the data, and give very consistent electron temperatures down to 0.006 K.  It's a very impressive piece of work, and I encourage you to read it - look at Fig. 4 if you're interested in how challenging it is to cool electrons in these kinds of devices down to this level.
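To get a sense of just how small the relevant energy scales are here, consider the width of that Fermi-Dirac step - a quick sketch with standard constants, nothing specific to the devices in the paper:

# Fermi-Dirac occupation f(E) = 1/(exp((E - mu)/(k_B T)) + 1), with energies
# measured from mu, and the width of the thermally "smeared" region.
import math

k_B = 8.617e-5        # eV per kelvin

def f(E_eV, T):
    return 1.0 / (math.exp(E_eV / (k_B * T)) + 1.0)

for T in (6e-3, 0.1, 4.2):                 # 6 mK, 100 mK, liquid-helium temperature
    width = 2 * k_B * T * math.log(9)      # energy window where f falls from 0.9 to 0.1
    print(f"T = {T*1e3:7.1f} mK   f(+1 micro-eV) = {f(1e-6, T):.3f}   "
          f"10-90 width ~ {width*1e6:8.2f} micro-eV")
# At 6 mK the step is only a couple of micro-eV wide, which is why stray
# heating and tiny voltage noise matter so much in these measurements.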

Saturday, October 08, 2016

What do LBL's 1 nm transistors mean?

In the spirit of this post, it seems like it would be a good idea to write something about this paper (accompanying LBL press release), particularly when popular sites are going a bit overboard with their headlines ("The world's smallest transistor is 1nm long, physics be damned").  (I discuss most of the background in my book, if you're interested.)

What is a (field effect) transistor and how does it work?  A transistor is an electronic switch, the essential building block of modern digital electronics.  A field-effect transistor (FET) has three terminals - a "source" (an input), a "drain" (an output) on either side of a semiconductor "channel", and a "gate" (a control knob).  If you think of electrical current like fluid flow, this is like a pipe with an inlet, an outlet, and a valve in the middle, and the gate controls the valve.  In a "depletion mode" FET, the gate electrode repels away charges in the channel to turn off current between the source and drain.  In an "accumulation mode" FET, the gate attracts mobile charges into the channel to turn on current between the source and drain.   Bottom line:  the gate uses the electrostatic interaction with charges to control current in the channel.  There has to be a thin insulating layer between the gate and the channel to keep current from "leaking" from the gate.   People have had to get very clever in their geometric designs to maximize the influence of the gate on the charges in the channel.
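If you want something more quantitative than the valve analogy, here is the standard textbook long-channel ("square law") model of an accumulation-mode FET - generic classroom physics with made-up parameter values, not a model of the MoS2 device discussed below:

# Textbook long-channel "square law" FET model, just to make the valve analogy
# concrete.  V_th and k below are made-up illustrative values.
def drain_current(V_gs, V_ds, V_th=0.5, k=1e-3):
    """Drain current in A; V_th = threshold voltage, k = mu*C_ox*W/L (A/V^2)."""
    V_ov = V_gs - V_th              # "overdrive": how far the gate has opened the valve
    if V_ov <= 0:
        return 0.0                  # valve closed (ignoring subthreshold leakage)
    if V_ds < V_ov:                 # triode/linear region: a gate-tunable resistor
        return k * (V_ov * V_ds - 0.5 * V_ds**2)
    return 0.5 * k * V_ov**2        # saturation: current set by the gate, not the drain

for V_gs in (0.4, 0.8, 1.2):
    print(f"V_gs = {V_gs:.1f} V  ->  I_d(V_ds = 1 V) = {drain_current(V_gs, 1.0)*1e6:7.1f} uA")
# The gate, purely through electrostatics, takes the channel from fully off
# to strongly conducting.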

What's the big deal about making smaller transistors?  We've gotten where we are by cramming more devices on a chip at an absurdly increasing rate, by making transistors smaller and smaller.  One key length scale is the separation between source and drain electrode.  If that separation is too small, there are at least two issues:  Current can leak from source to drain even when the device is supposed to be off because the charge can tunnel; and because of the way electric fields actually work, it is increasingly difficult to come up with a geometry where the gate electrode can efficiently (that is, with a small swing in voltage, to minimize power) turn the FET off and on.

What did the LBL team do?  The investigators built a very technically impressive device, using atomically thin MoS2 as the semiconductor layer, source and drain electrodes separated by only seven nm or so, a ZrO2 dielectric layer only a couple of nm thick, and using an individual metallic carbon nanotube (about 1 nm in diameter) as the gate electrode.  The resulting device functions quite well as a transistor, which is pretty damn cool, considering the constraints involved.   This fabrication is a tour de force piece of work.

Does this device really defy physics in some way, as implied by the headline on that news article?  No.  That headline alludes to the issue of direct tunneling between source and drain, and a sense that this is expected to be a problem in silicon devices below the 5 nm node (where that number is not the actual physical length of the channel).   This device acts as expected by physics - indeed, the authors simulate the performance and the results agree very nicely with experiment.

If you read the actual LBL press release, you'll see that the authors are very careful to point out that this is a proof-of-concept device.  It is exceedingly unlikely (in my opinion, completely not going to happen) that we will have chips with billions of MoS2 transistors with nanotube gates - the Si industry is incredibly conservative about adopting new materials.  If I had to bet, I'd say it's going to be Si and Si/Ge all the way down.   (You will very likely need to go away from Si if you want to see this kind of performance at such length scales, though.)   Still, this work does show that with proper fabrication and electrostatic design, you can make some really tiny transistors that work very well!


Monday, October 03, 2016

This year's Nobel in physics - Thouless, Kosterlitz, Haldane

Update:  well, I was completely wrong!  Topology ruled the day.  I will write more later about this, but Congratulations to Thouless, Kosterlitz, and Haldane!

Real life is making this week very busy, so it will be hard for me to write much in a timely way about this, and the brief popular version by the Nobel Foundation is pretty good if you're looking for an accessible intro to the work that led to this.  Their more technical background document (clearly written in LaTeX) is also nice if you want greater mathematical sophistication.

Here is the super short version.  Thouless, Kosterlitz, and Haldane had major roles to play in showing the importance of topology in understanding some key model problems in condensed matter physics.

Kosterlitz and Thouless (and independently Berezinskii) were looking at the problem of phase transitions in two dimensions of a certain type.  As an example, imagine a huge 2d array of compass needles, each free to rotate in the plane, but interacting with their neighbors, so that neighbors tend to want to point the same direction.  In the low temperature limit, the whole array will be ordered (pointing all the same way).  In the very high temperature limit, when thermal energy is big compared to the interaction between needles, the whole array will be disordered, with needles at any moment randomly oriented.  The question is, as temperature is increased, how does the system get from ordered to disordered?  Is it just a gradual thing, or does it happen suddenly in a particular way?  It turns out that the right way to think about this problem is in terms of vorticity, a concept that comes up in fluid mechanics as well (see this wiki page with mesmerizing animations).  It's energetically expensive to flip individual needles - better to rotate needles gradually relative to their neighbors.  The symmetry of the system says that you can't spontaneously create a pattern of the needles that has some net swirliness ("winding number", if you like).  However, it's relatively energetically cheap to create pairs of vortices with opposite handedness (vortex/antivortex pairs).  Kosterlitz, Thouless, and Berezinskii showed that these V/AV pairs "unbind" collectively at some finite temperature in a characteristic way, with testable consequences.  This leads to a particular kind of phase transition in a bunch of different 2d systems that, deep down, are mathematically similar.  2d XY magnetism and superconductivity in 2d are examples.  This generality is very cool - the microscopic details of the systems may be different, but the underlying math is the same, and leads to testable quantitative predictions.
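For the mathematically inclined, here is the classic heuristic free-energy argument for why the unbinding happens at a sharp temperature (a sketch, not the full renormalization-group treatment).  With \(J\) the needle-needle coupling, \(a\) the needle spacing, and \(R\) the system size, a single free vortex costs energy that grows with system size, but it also gains placement entropy:

\[ E_{\mathrm{vortex}} \approx \pi J \ln(R/a), \qquad S_{\mathrm{vortex}} \approx k_{B}\ln\left[(R/a)^{2}\right] = 2k_{B}\ln(R/a), \]
\[ F = E - TS \approx \left(\pi J - 2k_{B}T\right)\ln(R/a). \]

Below \(T_{KT} \approx \pi J/(2k_{B})\) the free energy cost of a lone vortex diverges with system size, so free vortices are forbidden and only bound vortex/antivortex pairs exist; above \(T_{KT}\) entropy wins, free vortices proliferate, and the order is destroyed.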

Thouless also realized that topological ideas are critically important in 2d electronic systems in large magnetic fields, and this work led to understanding of the quantum Hall effect.  Here is a nice Physics Today article on this topic.   (Added bonus:  Thouless also did groundbreaking work in the theory of localization, what happens to electrons in disordered systems and how it depends on the disorder and the temperature.)

Haldane, another brilliant person who is still very active, made a big impact on the topology front studying another "model" system, so-called spin chains - 1d arrangements of quantum mechanical spins that interact with each other.  This isn't just a toy model - there are real materials with magnetic properties that are well described by spin chain models.  Again, the questions were, can we understand the lowest energy states of such a system, and how those ordered states go away as temperature is increased.  He found that it really mattered in a very fundamental way whether the spins were integer or half-integer, and that the end points of the chains reveal important topological information about the system.  Haldane has long contributed important insights in quantum Hall physics as well, and in all kinds of weird states of matter that result in systems where topology is critically important.  (Another added bonus:  Haldane also did very impactful work on the Kondo problem, how a single local spin interacts with conduction electrons.)

Given how important topological ideas are to physics these days, it is not surprising that these three have been recognized.   In a sense, this work is a big part of the foundation on which the topological insulators and other such systems are built.


Original post:  The announcement this morning of the Nobel in Medicine took me by surprise - I guess I'd assumed the announcements were next week.  I don't have much to say this year; like many people in my field I assume that the prize will go to the LIGO gravitational wave discovery, most likely to Rainer Weiss, Kip Thorne, and Ronald Drever (though Drever is reportedly gravely ill).    I guess we'll find out tomorrow morning!

Sunday, October 02, 2016

Mapping current at the nanoscale - part 1 - scanning gates

Inspired by a metaphor made by our colloquium speaker, Prof. Silke Paschen, this past week, I'd like to try to explain to a general audience a couple of ways that people have developed for mapping out the flow of charge in materials on small scales.

Eric Heller's art piece "Dendrite", based
on visualization of branching current flow.
Often we are interested in understanding how charge flows through some material or device.  The simplest picture taught in courses is an analogy with water flowing through a pipe.  The idea is that there is some input for current, some output for current, and that in the material or device, you can think of charge moving like a fluid flowing uniformly along.  Of course, you could imagine a more complicated situation - perhaps the material or device doesn't have uniform properties; in the analogy, maybe there are obstacles that block or redirect the fluid flow.  Prof. Eric Heller of Harvard is someone who has thought hard about this situation, and how to visualize it.  (He's also a talented artist, and the image at right is an example of artwork based on exactly this issue - how the flow of electrons in a solid can branch and split because of disorder in the material.)

There's a different analogy that might be more useful in thinking about how people actually map out the flow of current in real systems, though.  Suppose you wanted to map out the roads in a city.  These days, one option would be to track all GPS devices (especially mobile phones) moving faster than, say, a few km/h.  If you did that you would pretty quickly resolve a decent map of the streets of a city, and you'd find where the traffic is flowing in high volume and at what speed.  Unfortunately, with electronic materials and devices, we generally don't have the option of tracking each individual mobile electron.  

Some condensed matter experimentalists (like Bob Westervelt, for example) have developed a strategy, however.  Here's the traffic analogy: You would set up traffic cameras to monitor the flow of cars into and out of the city.  Then you would set up road construction barrels (lanes blocked off, road closures) in known locations in the city, and see how that affected the traffic flow in and out of town.  By systematically recording the in/out traffic flow as a function of where you put in road closures, you could develop a rough map of the important routes.  If you temporarily close a road that hardly carries any cars, there won't be any effect on the net traffic, but if you close a major highway, you'd see a big effect.  

The experimental technique is called scanning gate microscopy.  Rather than setting up traffic cones, the experimentalists take a nanoscale-sharp conductive tip and scan it across the sample in question, mapping the sample's end-to-end conduction as a function of where the tip is and what it's doing.  One approach is to set the tip at a negative potential relative to the sample, which would tend to repel nearby electrons just from the usual like-charges-repel Coulomb interaction.  If there is no current flowing near the tip, this doesn't do much of anything.  If the tip is right on top of a major current path, though, this can strongly affect the end-to-end conduction.   It's a neat idea, and it can produce some impressive and informative images.  I'll write further about another technique for current mapping soon.

Friday, September 23, 2016

Nanovation podcast

Michael Filler is a chemical engineering professor at Georgia Tech, developing new and interesting nanomaterials.  He is also the host of the outstanding Nanovation podcast, a very fun and informative approach to public outreach and science communication - much more interesting than blogging :-).  I was fortunate enough to be a guest on his podcast a couple of weeks ago - here is the link.  It was really enjoyable, and I hope you have a chance to listen, if not to that one, then to some of the other discussions.

Wednesday, September 21, 2016

Deborah Jin - gone way too soon.

As was pointed out by a commenter on my previous post, and mentioned here by ZapperZ, atomic physicist Deborah Jin passed away last week from cancer at 47.   I don't think I ever met Prof. Jin (though she graduated from my alma mater when I was a freshman) face to face, and I'm not by any means an expert in her subdiscipline, but I will do my best to give an overview of some of her scientific legacy.  There is a sad shortage of atomic physics blogs....  I'm sure I'm missing things - please fill in additional information in the comments if you like.

The advent of optical trapping and laser cooling (relevant Nobel here) transformed atomic physics from what had been a comparatively sleepy specialty, concerned with measuring details of optical transitions and precision spectroscopy (useful for atomic clocks), into a hive of activity, looking at the onset of new states of matter that happen when gases become sufficiently cold and dense that their quantum statistics start to be important.  In a classical noninteracting gas, there are few limits on the constituent molecules - as long as they don't actually try to be in the same place at the same time (think of this as the billiard ball restriction), the molecules can take on whatever spatial locations and momenta that they can reach.  However, if a gas is very cold (low average kinetic energy per molecule) and dense, the quantum properties of the constituents matter - for historical reasons this is called the onset of "degeneracy".  If the constituents are fermions, then the Pauli principle, the same physics that keeps all 79 electrons in an atom of gold from hanging out in the 1s orbital, keeps the constituents apart, and keeps them from all falling into the lowest available energy state.   In contrast, if the constituents are bosons, then a macroscopic fraction of the constituents can fall into the lowest energy state, a process called Bose-Einstein condensation (relevant Nobel here); the condensed state is a single quantum state with a large occupation, and therefore can show exotic properties.
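Roughly speaking, "sufficiently cold and dense" means the thermal de Broglie wavelength becomes comparable to the spacing between atoms, \(n \lambda_{dB}^{3} \sim 1\).  Here is a rough estimate with typical, made-up illustrative numbers for a trapped gas of fermionic potassium - not the parameters of any particular experiment:

# When does a trapped gas become quantum degenerate?  Set the thermal de
# Broglie wavelength equal to the interparticle spacing.  Numbers are generic.
import math

h = 6.626e-34            # J*s
k_B = 1.381e-23          # J/K
m = 40 * 1.66e-27        # kg, potassium-40, a workhorse fermionic atom
n = 1e13 * 1e6           # atoms per m^3 (1e13 per cm^3, a typical trapped-gas density)

# lambda_dB = h / sqrt(2*pi*m*k_B*T) equals the spacing n^(-1/3) when:
T_deg = h**2 * n**(2.0/3.0) / (2 * math.pi * m * k_B)
print(f"degeneracy temperature ~ {T_deg*1e9:.0f} nK")
# Hundreds of nanokelvin - which is why laser cooling plus evaporative cooling
# was the enabling technology for this whole field.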

Prof. Jin's group did landmark work with these systems.  She and her student Brian DeMarco showed that you could actually reach the degenerate limit in a trapped atomic Fermi gas.  A major challenge in this field is trying to avoid 3-body and other collisions that can create states of the atoms that are no longer trapped by the lasers and magnetic fields used to do the confinement, and yet still create systems that are (in their quantum way) dense.  Prof. Jin's group showed that you could actually finesse this issue and pair up fermionic atoms to create trapped, ultracold diatomic molecules.  Moreover, you could then create a Bose-Einstein condensate of molecules (since a pair of fermions can be considered as a composite boson).  In superconductors, we're used to the idea that electrons can form Cooper pairs, which act as composite bosons and form a coherent quantum system, the superconducting state.  However, in superconductors, the Cooper pairs are "large" - the average real-space separation between the electrons that constitute a pair is big compared to the typical separation between particles.  Prof. Jin's work showed that in atomic gases you could span between the limits (BEC of tightly bound molecules on the one hand, vs. condensed state of loosely paired fermions on the other).  More recently, her group had been doing cool work looking at systems good for testing models of magnetism and other more complicated condensed matter phenomena, by using dipolar molecules, and examining very strongly interacting fermions.   Basically, Prof. Jin was an impressively creative, technically skilled, extremely productive physicist, and by all accounts a generous person who was great at mentoring students and postdocs.   She has left a remarkable scientific legacy for someone whose professional career was tragically cut short, and she will be missed.


Sunday, September 18, 2016

Alan Alda Center for Communicating Science, posting

Tomorrow I'll be a participant in an all-day workshop that Rice's Center for Teaching Excellence will be hosting with representatives from the Alan Alda Center for Communicating Science - the folks responsible for the Flame Challenge, a contest about trying to explain a science topic to an 11-year-old.  I'll write a follow-up post sometime soon about what this was like.

I'm in the midst of some major writing commitments right now, so posting frequency may slow for a bit.  I am trying to plan out how to write some accessible content about some recent exciting work in a few different material systems. 

 

Monday, September 12, 2016

Professional service

An underappreciated part of a scientific career is "professional service" - reviewing papers and grant proposals, filling roles in professional societies, organizing workshops/conferences/summer schools - basically carrying your fair share of the load, so that the whole scientific enterprise actually functions.  Some people take on service roles primarily because they want to learn better how the system works; others do so out of altruism, realizing that it's only fair, for example, to perform reviews of papers and grants at roughly the rate you submit them; still others take on responsibility because they either think they know best how to run/fix things, or because they don't like the alternatives.   Often it's a combination of all of these.

Journals continue to proliferate; the number of grant applications climbs even as (in the US, anyway) support remains flat or declines; and conference attendance continues to grow (the APS March Meeting is now twice as large as in my last year of grad school).  This means that professional demands are on the rise.  At the same time, it is difficult to track and quantify (except by self-reporting) these activities, and reward structures give only indirect incentive (e.g., reviewing grants gives you a sense of what makes a better proposal) to good citizenship.  So, when you're muttering under your breath about referee number 3 or about how the sessions are organized nonoptimally at your favorite conference (as we all do from time to time), remember that at least the people in question are trying to contribute, rather than sitting on the sidelines.

Friday, September 02, 2016

Conference for Undergraduate Women in Physics!

Over January 13-15, 2017, Rice is going to be hosting one of the American Physical Society's Conferences for Undergraduate Women in Physics.  Registration is now open - please click on the link in the previous sentence, and you will be taken to the meeting website.  This is one of about 10 regional CUWiP meetings, and our region encompasses Texas, Mississippi, Alabama, Florida, Arkansas, and Louisiana.  Many thanks to my faculty colleagues Prof. Marj Corcoran and Prof. Pat Reiff for leading the way on this, and to our staff administrator and our excellent SPAS undergraduates for their efforts.

Tuesday, August 30, 2016

Gulf Coast Undergraduate Research Symposium!

Rice University's schools of Natural Sciences and Engineering want to make sure that when talented science and engineering undergraduates in the US are deciding where to apply for graduate school, we are on their radar, so to speak.  To that end, we are hosting our second annual Gulf Coast Undergraduate Research Symposium.  To quote the webpage,
The Gulf Coast Undergraduate Research Symposium (GCURS) is a forum for undergraduate researchers to present original research discoveries.... GCURS fosters intercollegiate interactions among students and faculty who share a passion for undergraduate research. We expect several hundred speakers from about half of the states. The event also offers a friendly and supportive environment to students who would be giving their first formal research presentation, and faculty will provide written constructive feedback.
The registration deadline is Sept. 29.  Breakfast, lunch, and dinner will be provided on Saturday, and travel expenses for students (hotel, mileage, airfare if preapproved) will be covered by Rice's Office of Graduate and Postdoctoral Studies.  Please pass this along - it's a fun time.  If you want more details and contact information either for our department's role or the meeting as a whole, please let me know.

Monday, August 29, 2016

Amazon book categories are a joke

A brief non-physics post.  Others have pointed this out, but Amazon's categorizations for books are broken in such a way that they almost have to be designed to encourage scamming.  As an example, my book is, at this instant (and that's also worth noting - these things seem to fluctuate nearly minute-to-minute), the number 30 best seller in "Books > Science & Math > Physics > Solid State Physics".  That sounds cool, but it's completely meaningless, since if you click on that category you find that it contains such solid state physics classics as "Ugly's Electrical References, 2014 ed.", "Barron's 500 Flash Cards of American Sign Language", "The Industrial Design Reader", and "Electrical Motor Controls for Integrated Systems", along with real solid state books like Kittel, Simon, and Ashcroft & Mermin.  Not quite as badly, the Nanostructures category is filled with "Strength of Materials" texts and books about mechanical structures.  Weird, and completely fixable if Amazon actually cared, which they seem not to.

Wednesday, August 24, 2016

Proxima Centauri's planet and the hazards of cool animations

It was officially announced today that Proxima Centauri has a potentially earthlike planet.  That's great, especially for fans of science fiction.  Here is a relevant video by Nature:

Did you spot the mistake?  The scientists discovered the planet by seeing the wobble in the star's motion (measured by painstaking spectroscopy of the starlight, and using the Doppler shift of the spectrum to "see" the tiny motion of the star).  The animation tries to show this at 0:55-1:12.  The wobble is because the star and planet actually orbit around a common center of mass located on the line between them.  Instead, the video seems to show the center of mass of the star+planet tracing out a circle around empty space.  Whoops.   Someone should've caught that.  Still an impressive result.
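For a sense of how small the wobble is, here is a rough estimate with round, generic numbers of my own choosing (an earth-mass planet in a close orbit around a small red dwarf - not the measured parameters from the discovery paper):

# Rough size of the star's reflex motion, with generic assumed numbers.
import math

G = 6.674e-11
M_star = 0.12 * 1.989e30      # kg, ~0.12 solar masses (small red dwarf)
m_planet = 5.97e24            # kg, one earth mass
a = 0.05 * 1.496e11           # m, 0.05 AU orbital radius

v_planet = math.sqrt(G * M_star / a)          # planet's orbital speed
v_star = (m_planet / M_star) * v_planet       # star's speed around the common center of mass
print(f"planet: ~{v_planet/1e3:.0f} km/s, star wobble: ~{v_star:.1f} m/s")
# The star's reflex motion is on the order of 1 m/s - a fractional Doppler
# shift of a few parts per billion, which is why the spectroscopy is so painstaking.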

Update:  The makers of the video have updated with a link to a more accurate animation of the Doppler approach:  https://youtu.be/B-oZYm3L1JE.

Tuesday, August 23, 2016

Statistical and Thermal Physics

Eight years ago I taught Rice's undergraduate Statistical and Thermal Physics course, and now after teaching the honors intro physics class for a while, I'm returning to it.   I posted about the course here, and I still feel the same - the subject matter is intellectually very deep, and it's the third example in the undergraduate curriculum (after electricity&magnetism and quantum mechanics) where students really need to pick up a different way of thinking about the world, a formalism that can seem far removed from their daily experience.

One aspect of the course, the classical thermodynamic potentials and how one goes back and forth between them, nearly always comes across as obscure and quasi-magical the first (or second) time students are exposed to it.  Since the last time I taught the course, a nice expository article about why the math works has appeared in the American Journal of Physics (arxiv version).  
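As a reminder of what that machinery is actually doing (standard textbook material, nothing beyond what's in that article): starting from \(dU = T\,dS - p\,dV\), the Helmholtz free energy is the Legendre transform that swaps which member of the \(T\)-\(S\) pair acts as the natural variable:

\[ F \equiv U - TS, \qquad dF = dU - T\,dS - S\,dT = -S\,dT - p\,dV . \]

So \(F\) is naturally a function of \((T, V)\), the variables an experimentalist actually controls in a constant-temperature measurement, and \(S = -(\partial F/\partial T)_{V}\) and \(p = -(\partial F/\partial V)_{T}\) follow immediately.  The Gibbs free energy and the enthalpy are the same trick applied to the \(p\)-\(V\) pair.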

Any readers have insights/suggestions on other nice, recent pedagogical resources for statistical and thermal physics?  

Sunday, August 14, 2016

Updated - Short items - new physics or the lack thereof, planets and scale, and professional interactions

Before the start of the new semester takes over, some interesting, fun, and useful items:
Update: This is awesome.  Watch it.
  • The lack of any obvious exotic physics at the LHC has some people (prematurely, I suspect) throwing around phrases like "nightmare scenario" and "desert" - shorthand for the possibility that any major beyond-standard-model particles may be many orders of magnitude above present accelerator energies.  For interesting discussions of this, see here, here, here, and here.  
  • On the upside, a recent new result has been published that may hint at something weird.  Because protons are built from quarks (and gluons and all sorts of fluctuating ephemeral stuff like pions), their positive charge has some spatial extent, on the order of \(10^{-15}\) m in radius.  High precision optical spectroscopy of hydrogen-like atoms provides a way to look at this, because the 1s orbital of the electron in hydrogen actually overlaps with the proton a fair bit.  Muons are supposed to be just like electrons in many ways, but 200 times more massive - as a result, a bound muon's 1s orbital overlaps more with the proton and is more sensitive to the proton's charge distribution (see the rough numbers at the end of this post).  The weird thing is, the muonic hydrogen measurements yield a different size for the proton than the electronic hydrogen ones.  The new measurements are on muonic deuterium, and they, too, show a surprisingly smaller proton than in the ordinary hydrogen case.  Natalie Wolchover's piece in Quanta gives a great discussion of all this, and is a bit less hyperbolic than the piece in ars technica.
  • Rumors abound that the European Southern Observatory is going to announce the discovery of an earthlike planet orbiting in the putative habitable zone around Proxima Centauri, the nearest star to the sun.  However, those rumors all go back to an anonymously sourced article in Der Spiegel.  I'm not holding my breath, but it sure would be cool.
  • If you want a great sense of scale regarding how far it is even to some place as close as Proxima Centauri, check out this page, If the Moon were One Pixel.
  • For new college students:  How to email your professor without being annoying.
  • Hopefully in our discipline, despite the dire pronouncements in the top bullet point, we are not yet at the point of having to offer the physics analog of this psych course.
  • The US Department of Energy helpfully put out this official response to the Netflix series Stranger Things, in which (spoilers!) a fictitious DOE national lab is up to no good.  Just in case you thought the DOE really was in the business of ripping holes to alternate dimensions and creating telekinetic children.
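Following up on the proton-size item above, here is the rough scaling for why the muonic measurement is so much more sensitive (approximate numbers, just for the scaling):

# The Bohr radius scales as 1/(reduced mass), and the probability of finding
# the lepton "inside" the proton scales roughly as (r_p / a)^3.
m_e, m_mu, m_p = 0.511, 105.7, 938.3      # masses in MeV/c^2
a0 = 0.529e-10                             # m, ordinary Bohr radius
r_p = 0.88e-15                             # m, proton charge radius (approx.)

for name, m in (("electron", m_e), ("muon", m_mu)):
    m_red = m * m_p / (m + m_p)            # reduced mass of the lepton-proton system
    a = a0 * (m_e / m_red)                 # 1s orbit size shrinks as 1/(reduced mass)
    print(f"{name}: orbit ~ {a*1e15:9.0f} fm, overlap ~ (r_p/a)^3 ~ {(r_p/a)**3:.1e}")
# The muon's orbit is ~186x smaller, so its overlap with the proton is millions
# of times larger - small changes in the assumed proton size become measurable.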

Monday, August 08, 2016

Why is desalination difficult? Thermodynamics.

There are millions of people around the world without access to drinkable fresh water.  At the same time, the world's oceans contain more than 1.3 million cubic kilometers of salt water.  Seems like all we have to do is get the salt out of the water, and we're all set.   Unfortunately, thermodynamics makes this tough.  Imagine that you have a tank full of sea water and magical filter that lets water through but blocks the dissolved salt ions.    You could drag the filter across the tank - this would concentrate the salt in one side of the tank and leave behind fresh water.  However, this takes work.  You can think about the dissolved ions as a dilute gas, and when you're dragging the membrane across the tank, you're compressing that gas.  An osmotic pressure would resist your pushing of the membrane.  Osmotic effects are behind why red blood cells burst in distilled water and why slugs die when coated with salt.  They're also the subject of a great Arthur C. Clarke short story.

In the language of thermodynamics, desalination requires you to increase the chemical potential of the dissolved ions you're removing from the would-be fresh water, by putting them in a more concentrated state.  This sets a lower bound on how energetically expensive it is to desalinate water - see here, slide 12.  The simplest scheme to implement, distillation by boiling and recondensation, requires supplying the latent heat of vaporization of the water and is energetically inefficient.  With real-life approximations of the filter I mentioned, you can drive the process, called reverse osmosis, and do better.  Still, the take-away message is that it takes energy to desalinate water for very similar physics reasons that it takes energy to compress a gas.
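
Here is a minimal back-of-the-envelope sketch of that lower bound, treating the dissolved ions as a dilute gas via the van 't Hoff relation.  The concentration and temperature are round numbers I've assumed for seawater, not values taken from the linked slides.

```python
# Rough estimate: the minimum work to pull 1 m^3 of fresh water out of
# seawater (at vanishing recovery) is roughly the osmotic pressure times
# the volume, with the osmotic pressure from the van 't Hoff relation.

R = 8.314          # J/(mol K), gas constant
T = 298.0          # K, room temperature
c_salt = 0.60e3    # mol/m^3, assumed NaCl concentration of seawater (~35 g/kg)
i = 2              # van 't Hoff factor: NaCl dissociates into Na+ and Cl-

pi_osmotic = i * c_salt * R * T     # Pa; treat the ions as a dilute gas
work_per_m3 = pi_osmotic * 1.0      # J per m^3 of fresh water (Pi times V)

print(f"osmotic pressure ~ {pi_osmotic/1e5:.0f} bar")
print(f"minimum work ~ {work_per_m3/3.6e6:.2f} kWh per m^3 of fresh water")
# ~30 bar and ~0.8 kWh/m^3; real reverse-osmosis plants use a few times this.
```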

Interestingly, you can go the other way.  Just as you can get useful work out of two gas reservoirs at different pressures, you can imagine using the difference in chemical potential between salt water and fresh water to drive an engine or produce electricity.  In that sense, every time a freshwater stream or river empties into the ocean and the salinity gradient smooths itself out by mixing of its own accord, we are wasting potentially usable energy.  This was pointed out here, and there is now an extensive wikipedia entry on osmotic power.

Saturday, July 30, 2016

Ask me something.

I realized that I haven't had an open "ask me" post in almost two years.  Is there something in particular you'd like me to write about?  As we head into another academic year, are there matters of interest to (grad or undergrad) students?

Sunday, July 24, 2016

Dark matter, one more time.

There is strong circumstantial evidence that there is some kind of matter in the universe that interacts with ordinary matter via gravity, but is otherwise not readily detected - it is very hard to explain things like the rotation rates of galaxies, the motion of star clusters, and features of the large scale structure of the universe without dark matter.   (The most discussed alternative would be some modification to gravity, but given the success of general relativity at explaining many things including gravitational radiation, this seems less and less likely.)  A favorite candidate for dark matter would be some as-yet undiscovered particle or class of particles that would have to be electrically neutral (dark!) and would only interact very weakly if at all beyond the gravitational attraction.

There have been many experiments trying to detect these particles directly.  The usual assumption is that these particles are all around us, and very occasionally they will interact with the nuclei of ordinary matter via some residual, weak mechanism (say higher order corrections to ordinary standard model physics).  The signature would be energy getting dumped into a nucleus without necessarily producing a bunch of charged particles.   So, you need a detector that can discriminate between nuclear recoils and charged particles.  You want a lot of material, to up the rate of any interactions, and yet the detector has to be sensitive enough to see a single event, and you need pure enough material and surroundings that a real signal wouldn't get swamped by background radiation, including that from impurities.  The leading detection approaches these days use sodium iodide scintillators (DAMA), solid blocks of germanium or silicon (CDMS), and liquid xenon (XENON, LUX, PandaX - see here for some useful discussion and links).
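
To get a feel for the energy scales involved, here is a rough sketch of the maximum recoil energy a hypothetical weakly interacting massive particle could deposit in a xenon nucleus.  The 100 GeV mass and 220 km/s speed are illustrative assumptions on my part, not numbers from any of the experiments.

```python
# Sketch of why these detectors look for keV-scale nuclear recoils: the
# maximum elastic recoil energy a particle of mass m_chi can give a nucleus
# of mass m_N is E_max = 2 mu^2 v^2 / m_N, with mu the reduced mass.

c = 3.0e5                      # speed of light, km/s
v = 220.0 / c                  # assumed galactic WIMP speed, in units of c

m_chi = 100.0                  # hypothetical WIMP mass, GeV/c^2
m_Xe = 131 * 0.9315            # xenon nucleus mass, GeV/c^2 (A ~ 131)

mu = m_chi * m_Xe / (m_chi + m_Xe)       # reduced mass, GeV/c^2
E_max = 2 * mu**2 * v**2 / m_Xe          # maximum recoil energy, GeV

print(f"maximum recoil energy ~ {E_max*1e6:.0f} keV")
# Tens of keV dumped into a single nucleus - a tiny signal, which is why the
# detectors have to be so quiet, so pure, and so sensitive.
```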

I've been blogging long enough now to have seen rumors about dark matter detection come and go.  See here and here.  Now in the last week both LUX and PandaX have reported their latest results, and they have found nothing - no candidate events at all - after their recent experimental runs.  This is in contrast to DAMA, who have been seeing some sort of signal for years that seems to vary with the seasons.  See here for some discussion.  The lack of any detection at all is interesting.  There's always the possibility that whatever dark matter exists really does only interact with ordinary matter via gravity - perhaps all other interactions are somehow suppressed by some symmetry.  Between the lack of dark matter particle detection and the apparent lack of exotica at the LHC so far, there is a lot of head scratching going on....

Saturday, July 16, 2016

Impact factors and academic "moneyball"

For those who don't know the term:  Moneyball is the title of a book and a movie about the 2002 Oakland Athletics baseball team, a team with a payroll in the bottom 10% of major league baseball at the time.   They used a data-intensive, analytics-based strategy called sabermetrics to find "hidden value" and "market inefficiencies", to put together a very competitive team despite their very limited financial resources.   A recent (very fun if you're a baseball fan) book along the same lines is this one.  (It also has a wonderful discussion of confirmation bias!)

A couple of years ago there was a flurry of articles (like this one and the academic paper on which it was based) about whether a similar data-driven approach could be used in scientific academia - to predict success of individuals in research careers, perhaps to put together a better department or institute (a "roster") by getting a competitive edge at identifying likely successful researchers.

The central problems in trying to apply this philosophy to academia are the lack of really good metrics and the timescales involved in research careers.  Baseball is a paradise for people who love statistics.  The rules have been (largely) unchanged for over a hundred years; the seasons are very long (formerly 154 games, now 162); and in any game an everyday player gets multiple opportunities to show their offensive or defensive skills.  With modern tools it is possible to get quantitative information about every single pitched and batted ball.  As a result, the baseball stats community has come up with a huge number of quantitative metrics for evaluating performance in different aspects of the game, and they have a gigantic database against which to test their models.  They have even devised metrics to try to normalize out the effects of the local environment (ballpark-neutral or adjusted stats).

[Figure: Fig. 1, top panel, from this article.  x-axis = number of citations.  The mean of the distribution is strongly affected by the outliers.]
In scientific research, there are very few metrics (publications; citation count; impact factor of the journals in which articles are published), and the total historical record available on which to base some evaluation of an early career researcher is practically the definition of what a baseball stats person would call "small sample size".   An article in Nature this week highlights the flaws with impact factor as a metric.  I've written before about this (here and here), pointing out that impact factor is a lousy statistic because it's dominated by outliers, and now I finally have a nice graph (fig. 1 in the article; top panel shown here) to illustrate this.  
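
To make that concrete, here is a toy simulation - entirely made-up numbers, not data from the Nature article - showing how a handful of highly cited outliers drags a mean-based statistic like the impact factor far away from what a typical paper experiences.

```python
# Toy illustration of why a mean-based statistic is dominated by outliers:
# draw a heavy-tailed "citations per paper" distribution and compare the
# mean (impact-factor-like) with the median (a typical paper).

import numpy as np

rng = np.random.default_rng(0)

# Most papers get a handful of citations (log-normal bulk)...
citations = rng.lognormal(mean=1.0, sigma=1.0, size=2000)
# ...and a few blockbuster outliers get cited thousands of times.
citations[:5] += rng.uniform(1000, 5000, size=5)

print(f"mean   (impact-factor-like): {citations.mean():6.1f}")
print(f"median (typical paper):      {np.median(citations):6.1f}")
# The mean sits far above what a typical paper in this fake "journal" sees.
```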

So, in academia, the tantalizing fact is that there is almost certainly a lot of "hidden value" out there missed by traditional evaluation approaches.  Just relying on pedigree (where did so-and-so get their doctorate?) and high impact publications (person A must be better than person B because person A published a paper as a postdoc in a high impact glossy journal) almost certainly misses some people who could be outstanding researchers.  However, the lack of good metrics, the small sample sizes, the long timescales associated with research, and enormous local environmental influence (it's just easier to do cutting-edge work at Harvard than at Northern Michigan), all mean that it's incredibly hard to come up with a way to find these people via some analytic approach.  

Wednesday, July 06, 2016

Keeping your (samples) cool is not always easy.

Very often in condensed matter physics we like to do experiments on materials or devices in a cold environment.  As has been appreciated for more than a century, cooling materials down often makes them easier to understand, because at low temperatures there is not enough thermal energy bopping around to drive complicated processes.  There are fewer lattice vibrations.  Electrons settle down more into their lowest available states.  The spread in available electron energies is proportional to \(k_{\mathrm{B}}T\), so any electronic measurement as a function of energy gets sharper-looking at low temperatures.
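
For a sense of the numbers, here is a quick calculation of \(k_{\mathrm{B}}T\) at a few standard cryogenic temperatures (my own round numbers, just to set the scale).

```python
# The thermal energy scale k_B*T that sets how "sharp" electronic
# spectroscopies can be, at a few common temperatures.

k_B = 8.617e-5     # eV/K, Boltzmann constant

for T in (300.0, 77.0, 4.2):               # room temp, liquid N2, liquid He
    print(f"T = {T:6.1f} K  ->  k_B*T = {k_B*T*1e3:7.3f} meV")
# At 4.2 K, k_B*T ~ 0.36 meV (a few hundred microvolts), which is why the
# tunneling spectroscopy discussed below works best at liquid-helium temps.
```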

Sometimes, though, you have to dump energy into the system to do the study you care about.  If you want to measure electronic conduction, you have to apply some voltage \(V\) across your sample to drive a current \(I\), and that \(I \times V\) power shows up as heat.  In our case, we have done work over the last few years trying to do simultaneous electronic measurements and optical spectroscopy on metal junctions containing one or a few molecules (see here).   What we are striving toward is doing inelastic electron tunneling spectroscopy (IETS - see here) at the same time as molecular-scale Raman spectroscopy (see here for example).   The tricky bit is that IETS works best at really low temperatures (say 4.2 K), where the electronic energy spread is small (hundreds of microvolts), but the optical spectroscopy works best when the structure is illuminated by a couple of mW of laser power focused into a ~ 1.5 micron diameter spot.

It turns out that the amount of heating you get when you illuminate a thin metal wire (which can be detected in various ways; for example, we can use the temperature-dependent electrical resistance of the wire itself as a thermometer) isn't too bad when the sample starts out at, say, 100 K.  If the sample/substrate starts out at about 5 K, however, even modest incident laser power directly on the sample can heat the metal wire by tens of Kelvin, as we show in a new paper.  How the local temperature changes with incident laser intensity is rather complicated, and we find that we can model this well if the main roadblock at low temperatures is the acoustic mismatch thermal boundary resistance.  This is a neat effect discussed in detail here.  Vibrational heat transfer between the metal and the underlying insulating substrate is hampered by a boundary resistance that grows like \(1/T^{3}\) at low temperatures, because the speed of sound is very different in the metal and the insulator.  There are a bunch of other complicated issues (this and this, for example) that can also hinder heat flow in nanostructures, but the acoustic mismatch appears to be the dominant one in our case.  The bottom line:  staying cool in the spotlight is hard.  We are working away on some ideas for mitigating this issue.  Fun stuff.
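
As a cartoon of the effect (with parameters I've invented purely for illustration, not the values in our paper), suppose the boundary conductance per unit area scales like \(T^{3}\); then the heat crossing the interface goes like the difference of the fourth powers of the two temperatures, and solving for the steady-state metal temperature shows why starting cold hurts so much.

```python
# Illustrative estimate of laser heating limited by an acoustic-mismatch
# (Kapitza-like) boundary resistance.  With a boundary conductance that
# scales like T^3, the heat crossing the interface goes like
# (T_metal^4 - T_substrate^4), so solve
#   sigma * A * (T_m^4 - T_s^4) = P_absorbed   for T_m.
# All three parameters below are assumptions, not measured values.

sigma = 500.0        # W m^-2 K^-4, assumed boundary conductance coefficient
A = 4e-12            # m^2, assumed footprint of the laser-heated metal
P = 50e-6            # W, assumed absorbed laser power

for T_s in (100.0, 5.0):                       # substrate temperatures
    T_m = (T_s**4 + P / (sigma * A)) ** 0.25   # steady-state metal temperature
    print(f"T_substrate = {T_s:5.1f} K  ->  T_metal ~ {T_m:5.1f} K "
          f"(rise of {T_m - T_s:5.1f} K)")
# Starting at 100 K the rise is modest, but starting near 5 K the same
# absorbed power can heat the metal by tens of Kelvin.
```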

(Note:  I'm doing some travel, so posting will slow down for a bit.)