Monday, May 21, 2018

Physics around you: the field-effect transistor

While dark matter and quantum gravity routinely get enormous play in the media, you are surrounded every day by physics that enables near-miraculous technology.  Paramount among these is the field-effect transistor (FET).   That Wikipedia link is actually pretty good, btw.  While I've written before about specific issues regarding FETs (here, here, here), I hadn't said much about the general device.

The idea of the FET is to use a third electrode, a gate, to control the flow of current through a channel between two other electrodes, the source and drain.  The electric field from the gate controls the mobile charge in the channel - this is the field effect.   You can imagine doing this in vacuum, with a hot filament to act as a source of electrons, a second electrode (at a positive voltage relative to the source) to collect the electrons, and an intervening grid as the gate.  Implementing this in the solid state was proposed more than once (Lilienfeld, Heil) before it was done successfully.

Where is the physics?  There is a ton of physics involved in how these systems actually work.  For example, it's all well and good to talk about "free" electrons moving around in solids in analogy to electrons flying in space in a vacuum tube, but it's far from obvious that you should be able to do this.   Solids are built out of atoms and are inherently quantum mechanical, with particular allowed energies and electronic states picked out by quantum mechanics and symmetries.  The fact that allowed electronic states in periodic solids ("Bloch waves") resemble "free" electron states (plane waves, in the quantum context) is very deep and comes from the underlying symmetry of the material.  [Note that you can have transistors even when the charge carriers should be treated as hopping from site to site - that's how many organic FETs work.]  It's the Pauli principle that allows us to worry only about the highest energy electronic states and not have to worry about, e.g., the electrons deep down in the ion cores of the atoms in the material.  Still, you do have to make sure there aren't a bunch of electronic states at energies where you don't want them - these are the traps and surface states that made FETs hard to get working.  The combo of the Pauli principle and electrostatic screening is why we can largely ignore the electron-electron repulsion in the materials, but still use the gate electrode's electric field to affect the channel.  FETs have also been great tools for learning new physics, as in the quantum Hall effect.
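As a rough illustration of why Bloch states can look free-electron-like, here's a minimal numerical sketch (my own, not from the post; the hopping energy and lattice constant are made-up illustrative numbers) of a 1D tight-binding band, whose bottom is a parabola characterized by an effective mass set by the hopping:

```python
import numpy as np

# 1D tight-binding band: E(k) = -2 t cos(k a), a toy model of Bloch states.
# Expanding about k = 0 gives E ~ -2t + t (k a)^2, a free-electron-like
# parabola with effective mass m* = hbar^2 / (2 t a^2).
hbar = 1.054571817e-34      # J s
t = 1.0 * 1.602176634e-19   # hopping energy: 1 eV (illustrative)
a = 3e-10                   # lattice constant: 0.3 nm (illustrative)

m_star = hbar ** 2 / (2 * t * a ** 2)
m_e = 9.1093837015e-31      # free electron mass, kg
print(f"effective mass ~ {m_star / m_e:.2f} m_e")

# The exact band and the parabola agree closely at small k:
k = 0.05 * np.pi / a
E_exact = -2 * t * np.cos(k * a)
E_parab = -2 * t + t * (k * a) ** 2
assert abs(E_exact - E_parab) / t < 1e-3
```

The point is just that, near the band edge, the electron behaves like a free particle with a renormalized mass - the lattice hasn't disappeared, it's been absorbed into \(m^{*}\).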

What's the big deal?  When you have a switch that is either open or closed, it's easy to realize that you can do binary-based computing with a bunch of them.  The integrated manufacturing of the FET has changed the world.  It's one of the few examples of a truly disruptive technology in the last 100 years.  The device you're using to read this probably contains several billion (!) transistors, and they pretty much all work, for years at a time.  FETs are the underlying technology for both regular and flash memory.  FETs are what drive the pixels in the flat panel display you're viewing.  Truly, they are so ubiquitous that they've become invisible.

Wednesday, May 16, 2018

"Active learning" or "research-based teaching" in upper level courses

This past spring Carl Wieman came to Rice's Center for Teaching Excellence to give us this talk about improving science pedagogy.  (This video shows a very similar talk given at UC Riverside.)  He is very passionate about this, and argues strongly that making teaching more of an active, inquiry-based or research-question-based experience is generally a big improvement over traditional lecture.  I've written previously that I think this is a complicated issue.

Does anyone in my readership have experience applying this approach to upper-level courses?  For a specific question relevant to my own teaching, have any of you taught or taken a statistical physics course presented in this mode?  I gather that PHYS 403 at UBC and PHYS 170 at Stanford have been done this way.  I'd be interested in learning about how that was implemented and how it worked - please feel free to post in comments or email me.

(Now that the semester is over and some of my reviewing responsibilities are more under control, the frequency of posting should go back up.)

Wednesday, May 02, 2018

Short items

A couple of points of interest:
  • Bill Gates apparently turned down an offer from the Trump administration to be presidential science advisor.  It's unclear if this was a serious offer or an off-hand remark.   Either way it underscores what a trivialized and minimal role OSTP appears to be playing in the present administration.  It's a fact of modern existence that there are many intersections between public policy and the need for technical understanding of scientific issues (in the broad sense that includes engineering).   While an engaged and highly functional OSTP doesn't guarantee good policy (because science is only one of many factors that drive decision-making), the US is doing itself a disservice by running a skeleton crew in that office.  
  • Phil Anderson has posted a document (not a paper submitted for publication anywhere, but more of an essay) on the arxiv with the sombre title, "Four last conjectures".  These concern: (1) the true ground state of solids made of atoms that are hard-core bosons, suggesting that at sufficiently low temperatures one could have "non-classical rotational inertia" - not exactly a supersolid, but similar in spirit; (2) a discussion of a liquid phase of (magnetic) vortices in superconductors in the context of heat transport; (3) an exposition of his take on high temperature superconductivity (the "hidden Fermi liquid"), where one can have non-Fermi-liquid scattering rates for longitudinal resistivity, yet Fermi liquid-like scattering rates for scattering in the Hall effect; and (4) a speculation about an alternative explanation (that, in my view, seems ill-conceived) for the accelerating expansion of the universe.   The document is vintage Anderson, and there's a melancholy subtext given that he's 94 years old and is clearly conscious that he likely won't be with us much longer.
  • On a lighter note, a paper (link goes to publicly accessible version) came out a couple of weeks ago explaining how yarn works - that is, how the frictional interactions between a zillion constituent short fibers lead to thread acting like a mechanically robust object.  Here is a nice write-up.

Sunday, April 29, 2018

What is a quantum point contact? What is quantized conductance?

When we teach basic electrical phenomena to high school or college physics students, we usually talk about Ohm's Law, in the form \(V = I R\), where \(V\) is the voltage (how much effort it takes to push charge, in some sense), \(I\) is the current (the flow rate of the charge), and \(R\) is the resistance.  This simple linear relationship is a good first guess about how you might expect conduction to work.  Often we know the voltage and want to find the current, so we write \(I = V/R\), and the conductance is defined as \(G \equiv 1/R\), so \(I = G V\). 

In a liquid flow analogy, voltage is like the net pressure across some pipe, current is like the flow rate of liquid through the pipe, and the conductance characterizes how the pipe limits the flow of liquid.  For a given pressure difference between the ends of the pipe, there are two ways to lower the flow rate of the liquid:  make the pipe longer, and make the pipe narrower.  The same idea applies to electrical conductance of some given material - making the material longer or narrower lowers \(G\) (increases \(R\)).   
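To make the geometry dependence concrete, here's a quick numerical sketch (the copper conductivity is a standard handbook value; the dimensions are made up) of \(G = \sigma A / L\):

```python
# Conductance of a uniform conductor: G = sigma * A / L.
sigma_cu = 5.96e7   # conductivity of copper, S/m (handbook value)
A = 1e-6            # cross-sectional area: 1 mm^2 (made-up)
L = 1.0             # length: 1 m (made-up)

G = sigma_cu * A / L
R = 1 / G
print(f"G = {G:.1f} S, i.e. R = {R * 1e3:.1f} mOhm")

# Longer or narrower means smaller G, exactly as the pipe analogy suggests:
assert sigma_cu * A / (2 * L) == G / 2       # double the length
assert sigma_cu * (A / 2) / L == G / 2       # halve the cross-section
```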

Does anything special happen when the conductance becomes small?  What does "small" mean here - small compared to what?  (Physicists love dimensionless ratios, where you compare some quantity of interest with some characteristic scale - see here and here.  I thought I'd written a long post about this before, but according to Google I haven't; something to do in the future.)  It turns out that there is a combination of fundamental constants that has the same units as conductance:  \(e^2/h\), where \(e\) is the electronic charge and \(h\) is Planck's constant.  Interestingly, evaluating this numerically gives a characteristic conductance of about 1/(26 k\(\Omega\)).   The fact that \(h\) is in there tells you that this conductance scale is important if quantum effects are relevant to your system (not when you're in the classical limit of, say, a macroscopic, long spool of wire that happens to have \(R \sim 26~\mathrm{k}\Omega\)).
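For what it's worth, you can check the "about 26 k\(\Omega\)" number directly from the CODATA values of \(e\) and \(h\):

```python
# The characteristic conductance scale e^2/h, from CODATA constants.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck's constant, J s

G_char = e ** 2 / h   # ~ 3.87e-5 S
R_char = 1 / G_char   # ~ 25.8 kOhm, the "about 26 kOhm" above
print(f"e^2/h = {G_char:.3e} S = 1/({R_char / 1e3:.1f} kOhm)")
```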
Example of a quantum point contact, from here.

Conductance quantization can happen when you make the conductance approach this characteristic magnitude by having the conductor be very narrow, comparable to the spatial spread of the quantum mechanical electrons.  We know electrons are really quantum objects, described by wavefunctions, and those wavefunctions can have some characteristic spatial scale depending on the electronic energy and how tightly the electron is confined.  You can then think of the connection between the two conductors like a waveguide, so that only a handful of electronic "modes" or "channels" (compatible with the confinement of the electrons and what the wavefunctions are required to do) actually link the two conductors.  (See figure.) Each spatial electronic mode that connects between the two sides has a conductance of \(G_{0} \equiv 2e^{2}/h\), where the 2 comes from the two possible spin states of the electron.  
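A tiny sketch of the mode-counting (Landauer) bookkeeping implied here - my own illustrative code, not from the post: the total conductance is \(G_{0}\) times the sum of the transmission probabilities of the modes, so a constriction with \(n\) fully open spin-degenerate modes sits on a plateau at \(n G_{0}\):

```python
# Landauer-style bookkeeping: G = G0 * sum of transmission probabilities,
# with G0 = 2 e^2/h (the 2 from spin degeneracy).
e, h = 1.602176634e-19, 6.62607015e-34
G0 = 2 * e ** 2 / h   # conductance quantum, ~ 1/(12.9 kOhm)

def conductance(transmissions):
    """Total conductance for a set of mode transmission probabilities."""
    return G0 * sum(transmissions)

# Three fully open spin-degenerate modes sit on the 3 G0 plateau:
G = conductance([1.0, 1.0, 1.0])
print(f"G / G0 = {G / G0:.1f}")
```

Partially transmitting modes (transmission between 0 and 1) are what smear out the steps between plateaus.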

Conductance quantization in a 2d electron system, from here.
A junction like this in a semiconductor system is called a quantum point contact.  In semiconductor devices you can use gate electrodes to confine the electrons, and when the conductance reaches the appropriate spatial scale you can see steps in the conductance near integer multiples of \(G_{0}\), the conductance quantum.  A famous example of this is shown in the figure here.  

In metals, because the density of (mobile) electrons is very high, the effective wavelength of the electrons is much shorter, comparable to the size of an atom, a fraction of a nanometer.  This means that constrictions between pieces of metal have to reach the atomic scale to see anything like conductance quantization.  This is, indeed, observed.
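As a sanity check on that "fraction of a nanometer" claim, the free-electron estimate with copper's conduction-electron density (a standard textbook value) gives:

```python
import math

# Free-electron estimate of the Fermi wavelength in copper:
# k_F = (3 pi^2 n)^(1/3), lambda_F = 2 pi / k_F.
n_cu = 8.5e28   # conduction electron density of copper, m^-3 (textbook value)

k_F = (3 * math.pi ** 2 * n_cu) ** (1 / 3)
lambda_F = 2 * math.pi / k_F
print(f"lambda_F ~ {lambda_F * 1e9:.2f} nm")   # indeed a fraction of a nm
```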

For a very readable review of all of this, see this Physics Today article by two of the experimental progenitors of the field.  Quantized conductance shows up in other situations when only a countable number of electronic states are actually doing the job of carrying current (like along the edges of systems in the quantum Hall regime, or along the edges of 2d topological materials, or in carbon nanotubes).

Note 1:  It's really the "confinement so that only a few allowed waves can pass" that gives the quantization here.  That means that other confined wave systems can show the analog of this quantization.  This is explained in the PT article above, and an example is conduction of heat due to phonons.

Note 2:  What about when \(G\) becomes comparable to \(G_{0}\) in a long, but quantum mechanically coherent system?  That's a story for another time, and gets into the whole scaling theory of localization.  

Wednesday, April 25, 2018

Postdoc opportunity

While I have already spammed a number of faculty colleagues about this, I wanted to point out a competitive, endowed postdoctoral opportunity at Rice, made possible through the Smalley-Curl Institute.  (I am interested in hiring a postdoc in general, but the endowed opportunity is a nice one to pursue as well.)

The endowed program is the J Evans Attwell Welch Postdoctoral Fellowship.  This is a competitive, two-year fellowship that additionally includes travel funds and research supplies/minor equipment resources.  The deadline for applications is this coming July 1, 2018, with an anticipated start date around September 2018.

I'd be delighted to work with someone on an application for this, and I am looking for a great postdoc in any case.  The best applicant would be a strong student who is interested in working on (i) noise and transport measurements in spin-orbit systems including 2d TIs; (ii) nanoscale studies (incl noise and transport) of correlated materials and non-Fermi liquids; and/or (iii) combined electronic and optical studies down to the molecular scale via plasmonic structures.  If you're a student finishing up and are interested, please contact me, and if you're a faculty member working with possible candidates, please feel free to point out this opportunity.


Saturday, April 21, 2018

The Einstein-de Haas effect

Angular momentum in classical physics is a well-defined quantity tied to the motion of mass about some axis - its value (magnitude and direction) depends on a particular choice of coordinates.  When we think about some extended object spinning around an axis with some angular velocity \(\mathbf{\omega}\), we can define the angular momentum associated with that rotation by \(\mathbf{I}\cdot \mathbf{\omega}\), where \(\mathbf{I}\) is the "inertia tensor" that keeps track of how mass is distributed in space around the axis.  In general, conservation of angular momentum in isolated systems is a consequence of the rotational symmetry of the laws of physics (Noether's theorem).

The idea of quantum particles possessing some kind of intrinsic angular momentum is a pretty weird one, but it turns out to be necessary to understand a huge amount of physics.  That intrinsic angular momentum is called "spin", but it's *not* correct to think of it as resulting from the particle being an extended physical object actually spinning.  As I learned from reading The Story of Spin (cool book by Tomonaga, though I found it a bit impenetrable toward the end - more on that below), Kronig first suggested that electrons might have intrinsic angular momentum and used the intuitive idea of spinning to describe it; Pauli pushed back very hard on Kronig about the idea that there could be some physical rotational motion involved - the intrinsic angular momentum is some constant on the order of \(\hbar\).  If it were the usual mechanical motion, dimensionally this would have to go something like \(m r v\), where \(m\) is the mass, \(r\) is the size of the particle, and \(v\) is a speed; as \(r\) gets small, even at scales we know to be much larger than any intrinsic size of the electron, \(v\) would exceed \(c\), the speed of light.  Pauli pounded on Kronig hard enough that Kronig didn't publish his ideas, and later that year Goudsmit and Uhlenbeck established intrinsic angular momentum, calling it "spin".
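Pauli's objection is easy to make quantitative (my own numerical sketch; I'm using the classical electron radius to stand in for the "size" scale):

```python
# If spin ~ hbar came from literal rotation, v ~ hbar / (m r).  Even with
# r taken as the classical electron radius (far bigger than any measured
# bound on the electron's size), the required speed dwarfs c.
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # electron mass, kg
r_cl = 2.82e-15          # classical electron radius, m
c = 2.998e8              # speed of light, m/s

v = hbar / (m_e * r_cl)
print(f"v ~ {v:.1e} m/s, about {v / c:.0f} times c")
```

Shrinking \(r\) below the classical electron radius only makes the required speed more absurd, which is why the "literally spinning ball" picture had to go.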

Because of its weird intrinsic nature, when we teach undergrads about spin, we often don't emphasize that it is just as much angular momentum as the classical mechanical kind.  If you somehow do something to the spins in a system, that can have mechanical consequences.  I've written about one example before, a thought experiment described by Feynman and approximately implemented in micromechanical devices.  A related concept is the Einstein-de Haas effect, where flipping spins again exerts some kind of mechanical torque.  A new preprint on the arxiv shows a cool implementation of this, using ultrafast laser pulses to demagnetize a ferromagnetic material.  The sudden change of the spin angular momentum of the electrons results, through coupling to the atoms, in the launching of a mechanical shear wave as the angular momentum is dumped into the lattice.   The wave is then detected by time-resolved x-ray measurements.  Pretty cool!

(The part of Tomonaga's book that was hard for me to appreciate deals with the spin-statistics theorem, the quantum field theory statement that fermions have spins that are half-integer multiples of \(\hbar\) while bosons have spins that are integer multiples.  There is a claim that even Feynman could not come up with a good undergrad-level explanation of the argument.  Have any of my readers ever come across a clear, accessible hand-wave proof of the spin-statistics theorem?)

Tuesday, April 10, 2018

Chapman Lecture: Using Topology to Build a Better Qubit

Yesterday, we hosted Prof. Charlie Marcus of the Niels Bohr Institute and Microsoft for our annual Chapman Lecture on Nanotechnology.   He gave a very fun, engaging talk about the story of Majorana fermions as a possible platform for topological quantum computing. 

Charlie used quipu to introduce the idea of topology as a way to store information, and made a very nice heuristic argument about how topology encodes information in a global rather than a local sense.  That is, if you have a big, loose tangle of string on the ground, and you do local measurements of little bits of the string, you really can't tell whether it's actually tied in a knot (topologically nontrivial) or just lying in a heap.  This hints at the idea that local interactions (measurements, perturbations) can't necessarily disrupt the topological state of a quantum system.

The talk was given a bit of a historical narrative flow, pointing out that while there had been a lot of breathless prose written about the long search for Majoranas, etc., in fact the timeline was actually rather compressed.  In 2001, Alexei Kitaev proposed a possible way of creating effective Majorana fermions, particles that encode topological information,  using semiconductor nanowires coupled to a (non-existent) p-wave superconductor.   In this scheme, Majorana quasiparticles localize at the ends of the wire.  You can get some feel for the concept by imagining string leading off from the ends of the wire, say downward through the substrate and off into space.  If you could sweep the Majoranas around each other somehow, the history of that wrapping would be encoded in the braiding of the strings, and even if the quasiparticles end up back where they started, there is a difference in the braiding depending on the history of the motion of the quasiparticles.   Theorists got very excited about the braiding concept and published lots of ideas, including how one might do quantum computing operations by this kind of braiding.

In 2010, other theorists pointed out that it should be possible to implement the Majoranas in much more accessible materials - InAs semiconductor nanowires and conventional s-wave superconductors, for example.  One experimental feature that could be sought would be a peak in the conductance of a superconductor/nanowire/superconductor device, right at zero voltage, that should turn on above a threshold magnetic field (in the plane of the wire).  That's really what jumpstarted the experimental action.  Fast forward a couple of years, and you have a paper that got a ton of attention, reporting the appearance of such a peak.  I pointed out at the time that that peak alone is not proof, but it's suggestive.  You have to be very careful, though, because other physics can mimic some aspects of the expected Majorana signature in the data.

A big advance was the recent success in growing epitaxial Al on the InAs wires.  Having atomically precise lattice registry between the semiconductor and the aluminum appears to improve the contacts significantly.   Note that this can be done in 2d as well, opening up the possibility of many investigations into proximity-induced superconductivity in gate-able semiconductor devices.  This has enabled some borrowing of techniques from other quantum computing approaches (transmons).

The main take-aways from the talk:

  • Experimental progress has actually been quite rapid, once a realistic material system was identified.
  • While many things point to these platforms as really having Majorana quasiparticles, the true unambiguous proof in the form of some demonstration of non-Abelian statistics hasn't happened yet.  Getting close.
  • Like many solid-state endeavors before, the true enabling advances here have come from high quality materials growth.
  • If this does work, scale-up may actually be do-able, since this does rely on planar semiconductor fabrication for the most part, and topological qubits may have a better ratio of physical qubits to logical qubits than other approaches.
  • Charlie Marcus remains an energetic, engaging speaker, something I first learned when I worked as the TA for the class he was teaching 24 years ago.