Monday, June 18, 2018

Scientific American - what the heck is this?

Today, Scientific American ran this on their blogs page.  This article calls to mind weird mysticism stuff like crystal energy, homeopathy, and tree waves (a reference that attendees of mid-1990s APS meetings might get), and would not be out of place in Omni Magazine in about 1979.

I’ve written before about SciAm and their blogs.  My offer still stands, if they ever want a condensed matter/nano blog that I promise won’t verge into hype or pseudoscience.

Saturday, June 16, 2018

Water at the nanoscale

One reason the nanoscale is home to some interesting physics and chemistry is that the nanometer is a typical scale for molecules.   When the size of your system becomes comparable to the molecular scale, you can reasonably expect something to happen, in the sense that it should no longer be possible to ignore the fact that your system is actually built out of molecules.

Consider water as an example.  Water molecules have finite size (on the order of 0.2 nm between the hydrogens), a definite angled shape, and have a bit of an electric dipole moment (the oxygen has a slight excess of electron density and the hydrogens have a slight deficit).  In the liquid state, the water molecules are basically jostling around and have a typical intermolecular distance comparable to the size of the molecule.  If you confine water down to a nanoscale volume, you know at some point the finite size and interactions (steric and otherwise) between the water molecules have to matter.  For example, squeeze water down to a few molecular layers between solid boundaries, and it starts to act more like an elastic solid than a viscous fluid.  

Another consequence of this confinement in water can be seen in measurements of its dielectric properties - how charge inside rearranges itself in response to an external electric field.  In bulk liquid water, there are two components to the dielectric response.  The electronic clouds in the individual molecules can polarize a bit, and the molecules themselves (with their electric dipole moments) can reorient.  This latter contribution ends up being very important for dc electric fields, and as a result the dc relative dielectric permittivity of water, \(\kappa\), is about 80 (compared with 1 for the vacuum, and around 3.9 for SiO2).   At the nanoscale, however, the motion of the water molecules should be hindered, especially near a surface.  That should depress \(\kappa\) for nanoconfined water.

In a preprint on the arxiv this week, that is exactly what is found.  Using a clever design, water is confined in nanoscale channels defined by a graphite floor, hexagonal boron nitride (hBN) walls, and a hBN roof.  A conductive atomic force microscope tip is used as a top electrode, the graphite is used as a bottom electrode, and the investigators are able to see results consistent with \(\kappa\) falling to roughly 2.1 for layers about 0.6-0.9 nm thick adjacent to the channel floor and ceiling.  The result is neat, and it should provide a very interesting test case for attempts to model these confinement effects computationally.
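As a sanity check on how much those low-\(\kappa\) interfacial layers matter, here is a minimal series-capacitor sketch in Python. The 2.1 and 80 are the values quoted above; the 0.75 nm layer thickness is my own assumption (the midpoint of the quoted 0.6-0.9 nm range):

```python
# Effective dielectric constant of a water-filled nanochannel, modeled as
# three capacitors in series: two interfacial layers (kappa ~ 2.1) sandwiching
# bulk-like water (kappa ~ 80). Layer thickness d_nm = 0.75 nm is an assumption.

def effective_kappa(t_nm, d_nm=0.75, k_interface=2.1, k_bulk=80.0):
    """Series-capacitor estimate for a channel of total height t_nm (in nm)."""
    if t_nm <= 2 * d_nm:
        return k_interface  # channel is entirely interfacial water
    bulk_nm = t_nm - 2 * d_nm
    return t_nm / (2 * d_nm / k_interface + bulk_nm / k_bulk)

for t in (1.5, 3, 10, 100):
    print(f"channel height {t:6.1f} nm -> effective kappa {effective_kappa(t):6.1f}")
```

Even a 100 nm channel ends up with an effective \(\kappa\) well below 80 in this toy model, because the interfacial layers dominate the series combination.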

Friday, June 08, 2018

What are steric interactions?

When I first started reading chemistry papers, one piece of jargon jumped out at me:  "steric hindrance", which is an abstruse way of saying that you can't force pieces of molecules (atoms or groups of atoms) to pass through each other.  In physics jargon, they have a "hard core repulsion".  If you want to describe the potential energy of two atoms as you try to squeeze one into the volume of the other, you get a term that blows up very rapidly, like \(1/r^{12}\), where \(r\) is the distance between the nuclei.  Basically, you can do pretty well treating atoms like impenetrable spheres with diameters given by their outer electronic orbitals.  Indeed, Robert Hooke went so far as to infer, from the existence of faceted crystals, that matter is built from effectively impenetrable little spherical atoms.
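That steep \(1/r^{12}\) wall is the repulsive half of the familiar Lennard-Jones pair potential. Here is a quick sketch; the \(\epsilon\) and \(\sigma\) values are illustrative, roughly appropriate for argon, and are my assumptions rather than anything from above:

```python
# Lennard-Jones pair potential: the 1/r^12 term is the steric "hard core";
# the 1/r^6 term is the weak attractive tail. Parameters (epsilon, sigma)
# are illustrative argon-like values, not taken from the post.

def lennard_jones(r_nm, epsilon_meV=10.4, sigma_nm=0.34):
    """U(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6], returned in meV."""
    sr6 = (sigma_nm / r_nm) ** 6
    return 4.0 * epsilon_meV * (sr6 * sr6 - sr6)

# The repulsive wall blows up fast as r drops below sigma:
for r in (0.30, 0.34, 0.38, 0.50):
    print(f"r = {r:4.2f} nm  U = {lennard_jones(r):10.2f} meV")
```

Pushing the atoms just 0.04 nm inside \(\sigma\) already costs an energy comparable to ten times the depth of the attractive well, which is the quantitative content of "you can't force atoms through each other."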

It's a common thing in popular treatments of physics to point out that atoms are "mostly empty space".  With hydrogen, for example, if you said that the proton was the size of a pea, then the 1s orbital (describing the spatial probability distribution for finding the point-like electron) would be around 250 m in radius.  So, if atoms are such big, puffy objects, then why can't two atoms overlap in real space?  It's not just the electrostatic repulsion, since each atom is overall neutral.
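You can check the pea analogy with a two-line scaling estimate; the pea radius (~4 mm) is my assumption, while the proton charge radius and Bohr radius are standard values:

```python
# Scale the proton up to pea size and see how big the 1s orbital becomes.
proton_radius_m = 0.84e-15   # proton rms charge radius
bohr_radius_m = 5.29e-11     # most probable electron distance in the 1s state
pea_radius_m = 4.0e-3        # a garden pea, roughly (my assumption)

scale = pea_radius_m / proton_radius_m
print(f"scaled 1s radius: {bohr_radius_m * scale:.0f} m")  # ~252 m, consistent with ~250 m above
```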

The answer is (once again) the Pauli exclusion principle (PEP) and the fact that electrons obey Fermi statistics.  Sometimes the PEP is stated in a mathematically formal way that can obscure its profound consequences.  For our purposes, the bottom line is:  It is apparently a fundamental property of the universe that you can't stick two identical fermions (including having the same spin) in the same quantum state.    At the risk of getting technical, this can mean a particular atomic orbital, or more generally it can be argued to mean the same little "cell" of volume \(h^{3}\) in r-p phase space.  It just can't happen.

If you try to force it, what happens instead?  In practice, to get two carbon atoms, say, to overlap in real space, you would have to make the electrons in one of the atoms leave their ordinary orbitals and make transitions to states with higher kinetic energies.  That energy has to come from somewhere - you have to do work and supply that energy to squeeze two atoms into the volume of one.  Books have been written about this.

Leaving aside for a moment the question of why rigid solids are rigid, it's pretty neat to realize that the physics principle that keeps you from falling through your chair or the floor is really the same principle that holds up white dwarf stars.

Thursday, May 31, 2018

Coming attractions and short items

Here are a few items of interest. 

I am planning to write a couple of posts about why solids are rigid, and in the course of thinking about this, I made a couple of discoveries:

  • When you google "why are solids rigid?", you find a large number of websites that all have exactly the same wording:  "Solids are rigid because the intermolecular forces of attraction that are present in solids are very strong. The constituent particles of solids cannot move from their positions they can only vibrate from their mean positions."  Note that this is (1) not correct, and (2) also not much of an answer.  It seems that the wording is popular because it's an answer that has appeared on the IIT entrance examinations in India.
  • I came across an absolutely wonderful paper by Victor Weisskopf, "Of Atoms, Mountains, and Stars:  A Study in Qualitative Physics", Science 187, 605-612 (1975).  Here is the only link I could find that might be reachable without a subscription.  It is a great example of "thinking like a physicist", showing how far one can get by starting from simple ideas and using order-of-magnitude estimates.  This seems like something that should be required reading for most undergrad physics majors, and more besides.
In politics-of-science news:

  • There is an amendment pending in the US Congress on the big annual defense bill that has the potential to penalize US researchers who have received any (presently not well-defined) resources from Chinese talent recruitment efforts.  (Russia, Iran, and North Korea are also mentioned, but they're irrelevant here, since they are not running such programs.)  The amendment would allow the DOD to deny these folks research funding.  The idea seems to be that such people are perceived by some as a risk in terms of taking DOD-relevant knowledge and giving China an economic or strategic benefit.  Many major US research universities have been encouraging closer ties with China and Chinese universities in the last 15 years.  Makes you wonder how many people would be affected.
  • The present US administration, according to AP, is apparently about to put in place (June 11?) new limitations on Chinese graduate student visas, for those working in STEM (and especially in fields mentioned explicitly in the Chinese government's big economic plan).   It would make relevant student visas one year in duration.  Given that the current visa renewal process can already barely keep up with the demand, it seems like this could become an enormous headache.  I could go on at length about why I think this is a bad idea.  Given that it's just AP that is reporting this so far, perhaps it won't happen or will be more narrowly construed.  We'll see.

Tuesday, May 29, 2018

What is tunneling?

I first learned about quantum tunneling from science fiction, specifically a short story by Larry Niven.  The idea is often tossed out there as one of those "quantum is weird and almost magical!" concepts.  It is surely far from our daily experience.

Imagine a car of mass \(m\) rolling along a road toward a small hill.  Let’s make the car and the road ideal – we’re not going to worry about friction or drag from the air or anything like that.   You know from everyday experience that the car will roll up the hill and slow down.  This ideal car’s total energy is conserved, and it has (conventionally) two pieces, the kinetic energy \(p^2/2m\) (where \(p\) is the momentum; here I’m leaving out the rotational contribution of the tires), and the gravitational potential energy, \(mgz\), where \(g\) is the gravitational acceleration and \(z\) is the height of the center of mass above some reference level.  As the car goes up, so does its potential energy, meaning its kinetic energy has to fall.  When the kinetic energy hits zero, the car stops momentarily before starting to roll backward down the hill.  The spot where the car stops is called a classical turning point.  Without some additional contribution to the energy, you won’t ever find the car on the other side of that hill, because the region beyond the turning point is “classically forbidden”.  We’d either have to sacrifice conservation of energy, or the car would have to have negative kinetic energy to exist in the forbidden region.  Since the kinetic piece is proportional to \(p^2\), to have negative kinetic energy would require \(p\) to be imaginary (!).

However, we know that the car is really a quantum object, built out of a huge number (more than \(10^{27}\)) of other quantum objects.  The spatial locations of quantum objects can be described with “wavefunctions”, and you need to know a couple of things about these to get a feel for tunneling.  For the ideal case of a free particle with a definite momentum, the wavefunction really looks like a wave with a wavelength \(h/p\), where \(h\) is Planck’s constant.  Because a wave extends throughout all space, the probability of finding the ideal free particle anywhere is equal, in agreement with the oft-quoted uncertainty principle. 

Here’s the essential piece of physics:  In a classically forbidden region, the wavefunction decays exponentially with distance (mathematically equivalent to the wave having an imaginary wavelength), but it can’t change abruptly.  That means that if you solve the problem of a quantum particle incident on a finite (in energy and spatial size) barrier from one side, there is always some probability that the particle will be found on the far side of the classically forbidden region.  

This means that it’s technically possible for the car to “tunnel” through the hillside and end up on the downslope.  I would not recommend this as a transportation strategy, though, because that’s incredibly unlikely.  The more massive the particle, and the more forbidden the region (that is, the more negative the classical kinetic energy of the particle would have to be in the barrier), the faster the exponential decay of the probability of getting through.  For a 1000 kg car trying to tunnel through a 10 cm high speed bump 1 m long, the probability is around \(\exp(-2.7 \times 10^{37})\).  That kind of number is why quantum tunneling is not an obvious part of your daily existence.  For something much less massive, like an electron, the tunneling probability from, say, a metal tip to a metal surface decays by around a factor of \(e^2\) for every 0.1 nm of tip-surface separation.  It’s that exponential sensitivity to geometry that makes scanning tunneling microscopy possible.
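For the curious, both numbers come from the WKB suppression exponent \(2\kappa L\), with \(\kappa = \sqrt{2m(V-E)}/\hbar\). A crude square-barrier sketch; the 5 eV barrier in the electron case is a typical metal work function, assumed here for illustration:

```python
import math

# WKB estimate of the tunneling suppression exponent, 2*kappa*L, with
# kappa = sqrt(2*m*(V-E))/hbar. Square-barrier caricature: the car's energy
# deficit at the top of the bump is taken as m*g*h.

hbar = 1.055e-34  # J*s

def wkb_exponent(mass_kg, barrier_J, length_m):
    """Exponent in P ~ exp(-2*kappa*L) for a square barrier."""
    kappa = math.sqrt(2 * mass_kg * barrier_J) / hbar
    return 2 * kappa * length_m

# 1000 kg car, 10 cm speed bump (V - E ~ m*g*h), 1 m long:
car = wkb_exponent(1000, 1000 * 9.8 * 0.1, 1.0)
print(f"car: P ~ exp(-{car:.2g})")  # exponent ~ 2.7e37

# electron against a ~5 eV work-function barrier, per 0.1 nm of gap:
elec = wkb_exponent(9.11e-31, 5 * 1.602e-19, 1e-10)
print(f"electron: probability drops by exp({elec:.2f}) per 0.1 nm")  # ~ e^2.3
```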

However, quantum tunneling is very much a part of your life.  Protons can tunnel through the repulsion of their positive charges to bind to each other – that’s what powers the sun.  Electrons routinely tunnel in zillions of chemical reactions going on in your body right now, as well as in the photosynthesis process that drives most plant life. 

On a more technological note, tunneling is a key ingredient in the physics of flash memory.  Flash is based on field-effect transistors, and as I described the other day, transistors are switched on or off depending on the voltage applied to a gate electrode.  Flash storage uses transistors with a “floating gate”, a conductive island surrounded by insulating material, some kind of glassy oxide.  Charge can be parked on that gate or removed from it, and depending on the amount of charge there, the underlying transistor channel is either conductive or not.   How does charge get on or off the island?  By a flavor of tunneling called field emission.  The insulator around the floating gate functions as a potential energy barrier for electrons.  If a big electric field is applied via some other electrodes, the barrier’s shape is distorted, allowing electrons to tunnel through it efficiently.  This is a tricky aspect of flash design.  The barrier has to be high/thick enough that charge stuck on the floating gate can stay there a very long time - you wouldn’t want the bits in your SSD or your flash drive losing their status on the timescale of months, right? - but ideally tunable enough that the data can be rewritten quickly, with low error rates, at low voltages.
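The barrier distortion can be made quantitative with the WKB exponent for a triangular barrier, which is the heart of the Fowler-Nordheim picture. A sketch; the ~3.1 eV barrier height (roughly the Si/SiO2 conduction-band offset) and the field values are illustrative assumptions:

```python
import math

# Field emission through a triangular barrier: the WKB (Fowler-Nordheim)
# exponent. Tilting the oxide barrier with a field F thins the region the
# electron must cross. A ~3.1 eV barrier is roughly the Si/SiO2 offset.

hbar = 1.055e-34  # J*s
m_e = 9.11e-31    # electron mass, kg
e = 1.602e-19     # elementary charge, C

def fn_exponent(phi_eV, field_V_per_m):
    """Exponent b in P ~ exp(-b): (4/3)*sqrt(2*m)*phi^(3/2)/(hbar*e*F)."""
    phi_J = phi_eV * e
    return 4 * math.sqrt(2 * m_e) * phi_J ** 1.5 / (3 * hbar * e * field_V_per_m)

for F in (1e8, 5e8, 1e9):  # V/m; the higher values are programming-scale oxide fields
    print(f"F = {F:.0e} V/m  ->  exponent {fn_exponent(3.1, F):8.1f}")
```

Raising the field by a factor of ten shrinks the exponent by the same factor of ten, which is why a modest programming voltage can switch the floating gate from "charge stays for years" to "charge moves in microseconds."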

Monday, May 21, 2018

Physics around you: the field-effect transistor

While dark matter and quantum gravity routinely get enormous play in the media, you are surrounded every day by physics that enables near miraculous technology.  Paramount among these is the field-effect transistor (FET).   That wikipedia link is actually pretty good, btw.  While I've written before about specific issues regarding FETs (here, here, here), I haven't said much about the general device.

The idea of the FET is to use a third electrode, a gate, to control the flow of current through a channel between two other electrodes, the source and drain.  The electric field from the gate controls the mobile charge in the channel - this is the field effect.   You can imagine doing this in vacuum, with a hot filament to be a source of electrons, a second electrode (at a positive voltage relative to the source) to collect the electrons, and an intervening grid as the gate.  Implementing this in the solid state was proposed more than once (Lilienfeld, Heil) before it was done successfully. 
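To make "the gate controls the current" concrete, here is the textbook long-channel square-law model of FET operation. All parameter values (threshold voltage, transconductance factor) are illustrative assumptions, not tied to any particular device:

```python
# Textbook long-channel ("square law") model of a FET: drain current as a
# function of gate and drain voltages. v_t is the threshold voltage; k lumps
# together mobility, oxide capacitance, and the width/length ratio.

def drain_current(v_gs, v_ds, v_t=0.5, k=2e-3):
    """I_D in amps for an n-channel device; all voltages in volts."""
    if v_gs <= v_t:
        return 0.0                       # off: channel not inverted
    v_ov = v_gs - v_t                    # overdrive voltage
    if v_ds < v_ov:                      # triode (linear) region
        return k * (v_ov * v_ds - 0.5 * v_ds ** 2)
    return 0.5 * k * v_ov ** 2           # saturation

for v_gs in (0.3, 0.7, 1.2):
    print(f"V_GS = {v_gs} V  ->  I_D = {drain_current(v_gs, 1.0) * 1e3:.3f} mA")
```

Below threshold the switch is open; above it, a fraction of a volt on the gate swings the current by orders of magnitude. That sharp, voltage-controlled on/off behavior is what makes binary logic practical.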

Where is the physics?  There is a ton of physics involved in how these systems actually work.  For example, it's all well and good to talk about "free" electrons moving around in solids in analogy to electrons flying in space in a vacuum tube, but it's far from obvious that you should be able to do this.   Solids are built out of atoms and are inherently quantum mechanical, with particular allowed energies and electronic states picked out by quantum mechanics and symmetries.  The fact that allowed electronic states in periodic solids ("Bloch waves") resemble "free" electron states (plane waves, in the quantum context) is very deep and comes from the underlying symmetry of the material.  [Note that you can have transistors even when the charge carriers should be treated as hopping from site to site - that's how many organic FETs work.]  It's the Pauli principle that allows us to worry only about the highest energy electronic states and not have to worry about, e.g., the electrons deep down in the ion cores of the atoms in the material.  Still, you do have to make sure there aren't a bunch of electronic states at energies where you don't want them - these are the traps and surface states that made FETs hard to get working.  The combo of the Pauli principle and electrostatic screening is why we can largely ignore the electron-electron repulsion in the materials, but still use the gate electrode's electric field to affect the channel.  FETs have also been great tools for learning new physics, as in the quantum Hall effect.

What's the big deal?  When you have a switch that is either open or closed, it's easy to realize that you can do binary-based computing with a bunch of them.  The integrated manufacturing of the FET has changed the world.  It's one of the few examples of a truly disruptive technology in the last 100 years.  The device you're using to read this probably contains several billion (!) transistors, and they pretty much all work, for years at a time.  FETs are the underlying technology for both regular and flash memory.  FETs are what drive the pixels in the flat panel display you're viewing.  Truly, they are so ubiquitous that they've become invisible.

Wednesday, May 16, 2018

"Active learning" or "research-based teaching" in upper level courses

This past spring Carl Wieman came to Rice's Center for Teaching Excellence, to give us this talk, about improving science pedagogy.  (This video shows a very similar talk given at UC Riverside.) He is very passionate about this, and argues strongly that making teaching more of an active, inquiry-based or research-question-based experience is generally a big improvement over traditional lecture.  I've written previously that I think this is a complicated issue. 

Does anyone in my readership have experience applying this approach to upper-level courses?  For a specific question relevant to my own teaching, have any of you taught or taken a statistical physics course presented in this mode?  I gather that PHYS 403 at UBC and PHYS 170 at Stanford have been done this way.  I'd be interested in learning about how that was implemented and how it worked - please feel free to post in comments or email me.

(Now that the semester is over and some of my reviewing responsibilities are more under control, the frequency of posting should go back up.)