Search This Blog

Sunday, July 31, 2022

Indistinguishability

In thinking about how we present physics to a lay audience, I don't think we have sufficiently emphasized what we mean when we say that particles or objects are "indistinguishable," and how much of modern physics that idea touches.

The vernacular meaning of "indistinguishable" is clear.  Two objects are indistinguishable if, when examined, they are identical in every way - there is no measurement or test you could do that would allow you to tell the difference between them.  Large objects can only approach this.  Two identical cue balls made at a billiard ball factory might be extremely similar (white, spherical, smooth, very close to the same diameter), but with sufficient tools you could find differences that would allow you to label the balls and tell them apart.  In the limit of small systems, though, as far as we know it is possible to have objects that are truly indistinguishable.  Two hydrogen atoms, for example, are believed to be truly indistinguishable, with exactly the same mass, exactly the same optical and magnetic properties, etc.  

This idea of indistinguishable particles has profound consequences.  In statistical mechanics, entropy is commonly given by \(k_{\mathrm{B}}\ln \Omega\), where \(k_{\mathrm{B}}\) is Boltzmann's constant, and \(\Omega\) is the number of microscopic ways to arrange a system.  If particles are indistinguishable, this greatly affects our counting of configurations.  (A classical example of this issue involves mixing entropy and the Gibbs Paradox.)  
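To make the counting concrete, here is a minimal sketch (not from the post; the dilute lattice-gas model and all numbers are my own illustrative choices) in which \(\Omega \approx M^{N}\) for \(N\) classical particles on \(M\) sites, and indistinguishability is handled by dividing \(\Omega\) by \(N!\). Removing a partition between two identical halves of a gas then produces essentially no mixing entropy, which is the resolution of the Gibbs paradox:

```python
import math

def entropy_sites(n_particles, n_sites, distinguishable):
    """Entropy in units of k_B for n_particles on n_sites, in the
    dilute classical limit where Omega ~ n_sites**n_particles.
    For indistinguishable particles, divide Omega by n_particles!
    (i.e., subtract ln(n!) from ln(Omega))."""
    log_omega = n_particles * math.log(n_sites)
    if not distinguishable:
        log_omega -= math.lgamma(n_particles + 1)  # ln(n!)
    return log_omega

# Gibbs-paradox check: remove a partition between two identical
# halves, each holding N particles on M sites.  With distinguishable
# particles the entropy would jump by ~2N ln 2; with the N!
# correction, the jump per particle vanishes as N grows.
N, M = 10_000, 1_000_000
separate = 2 * entropy_sites(N, M, distinguishable=False)
mixed = entropy_sites(2 * N, 2 * M, distinguishable=False)
delta_per_particle = (mixed - separate) / (2 * N)
print(f"mixing entropy per particle (k_B units): {delta_per_particle:.4f}")
```

The residual per-particle mixing entropy here is of order \(\ln(N)/N\), a Stirling-approximation leftover that disappears in the thermodynamic limit.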

Indistinguishability and quantum processes deeply bothered the founders of quantum theory.  For example, take a bunch of hydrogen atoms all in the 2p excited state.  Any measurement you could do on those atoms would show them to have exactly the same properties, with no way to distinguish the first atom from, say, the fifth atom selected.  Left alone, each of those atoms will decay to the 1s ground state and spit out a photon, but they will do so in a random order at random times, as far as we can tell.  We can talk about the average lifetime of the 2p state, but there doesn't seem to be any way to identify some internal clock that tells each excited atom when to decay; indeed, any such clock would make the atoms distinguishable.
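The "no internal clock" statement can be made quantitative: exponential decay is memoryless. An atom that has already survived for a while is statistically identical to a freshly excited one. A small sketch (my own illustration, not from the post; the lifetime value is roughly the known hydrogen 2p lifetime, used here only for scale):

```python
import math

def survival(t, tau):
    """Probability that an excited atom has not yet decayed after
    time t, for exponential decay with mean lifetime tau."""
    return math.exp(-t / tau)

tau = 1.6e-9   # ~ hydrogen 2p lifetime in seconds (illustrative)
t, s = 3e-9, 1e-9

# Memorylessness: the chance that an atom which has already survived
# to time t lasts another interval s equals the chance that a
# brand-new atom lasts s.  No measurement on a surviving atom can
# reveal "how long it has left."
p_conditional = survival(t + s, tau) / survival(t, tau)
print(p_conditional, survival(s, tau))  # the two numbers agree
```

This is just the algebraic identity \(e^{-(t+s)/\tau}/e^{-t/\tau} = e^{-s/\tau}\), and the exponential is the only distribution with this property.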

It gets more unsettling.  Any two electrons are indistinguishable.  So, "common sense" says that swapping any two electrons should get you back to a state that is the same as the initial situation.  However, electrons are fermions and follow Fermi-Dirac statistics.  When any two electrons are swapped, the quantum mechanical state, the mathematical object describing the whole system, has to pick up a factor of -1.  Even weirder, there can be interacting many-body situations in which swapping nominally indistinguishable particles takes the system to an entirely different state (non-Abelian anyons).  The indistinguishability of electrons has prompted radical ideas in the past, like Wheeler's suggestion that there is really only one electron.
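The factor of -1 is easy to see in the simplest case of two fermions. A minimal sketch (my own illustration; the two orbitals are arbitrary hypothetical functions, chosen only so the wavefunction isn't trivially zero):

```python
import math

# Two hypothetical single-particle orbitals (any two distinct
# functions would do for this illustration).
def phi_a(x):
    return math.exp(-x * x)

def phi_b(x):
    return x * math.exp(-x * x)

def psi(x1, x2):
    """Unnormalized antisymmetric two-fermion wavefunction: the
    2x2 Slater determinant built from phi_a and phi_b."""
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)

x1, x2 = 0.3, -1.2
# Swapping the two (indistinguishable) particles flips the sign:
print(psi(x2, x1), -psi(x1, x2))  # same number twice
# A by-product is the Pauli principle: the amplitude for both
# fermions to sit at the same point vanishes identically.
print(psi(0.7, 0.7))  # prints 0.0
```

Antisymmetrizing the state this way is what "picking up a factor of -1 under exchange" means concretely, and the exclusion principle falls out of it for free.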

TL;DR:  Indistinguishability of particles is intuitively weird, especially in the context of quantum mechanics.

Wednesday, July 13, 2022

A (very late) book review: The Cold Wars

On a recommendation, I just finished reading The Cold Wars:  A History of Superconductivity, by Jean Matricon and Georges Waysand.  This book was originally published in French in 1994 and translated into English in 2003.  Unfortunately, it appears to be out of print; I was lucky enough to pick up a used hardcover copy.

Beginning with the race to achieve very low temperatures through the liquefaction of helium, this work tells a pretty compelling story of the discovery and understanding of superconductivity and superfluidity, ending in the comparatively early days of the high-Tc cuprates.  Along the way, the authors introduce a cast of varyingly compelling characters and some fascinating stories that aren't all well-known.  I knew that Landau had been sent to a gulag for a time; I did not know that Kapitsa wrote personally to Stalin to argue that this had to be a mistake, probably a consequence of Landau being abrasive to the wrong people rather than of any actual counterrevolutionary beliefs.  (Spoiler:  Landau was abrasive, but he did also sign onto a letter that slammed Stalin hard.)  I did not know that Bardeen and Schrieffer were most concerned about Feynman possibly scooping them, given Feynman's brilliance and expertise in both superfluidity and diagrammatic methods.  The book also tells the story of the initial papers on YBCO, and how the chemical formula was "accidentally" wrong in the submitted manuscript, corrected only once the paper was in proofs.

The authors also do a good job of conveying the ebb and flow of science - from the sudden onset of a fashionable topic, to the transition toward more detail and a greater focus on applications.  The social dimensions come through as well, with the coming and going of particular great centers of excellence in the research, and approaches to "big" science.    

The style is accessible, and throughout there are indented sidebars meant to provide scientific context for readers who don't have a physics background.  If you're interested in the history of this part of condensed matter physics, I definitely recommend tracking down a copy.

Friday, July 08, 2022

More about the costs of doing research

This isn't physics, but it's still something that might be of interest to some readers.  There is still a great deal of confusion about how university research is funded and supported.  Five years ago I wrote a bit about "indirect costs", more properly called facilities and administrative costs (F&A).  I recently had the opportunity to learn more about this, and I came across a very helpful document from the Council on Governmental Relations.  COGR is an association of organizations (universities, medical centers, research institutes) that advocates to government policymakers on research administration and finances.

The document explains a lot about the history of how research finances are handled in the US.  One thing that struck me as I was reading is that the actual administrative costs that can be charged to grants (which pay for things like running the grants and contracts offices and the accountants who track the money) have been capped, for universities only, at 26% since 1991, even though more and more reporting and administrative requirements have been placed on universities ever since.   (If you want to know what happened in 1991 that led to this cap, you can read this [that might be paywalled] or this wiki article whose detailed accuracy I cannot confirm.)

As I wrote before, vice provosts/presidents/chancellors for research at major US universities would be happy to explain at length that F&A cost recovery doesn't come close to covering the actual costs associated with doing university-based research.  ("Universities lose money doing research.")  Obviously this is an oversimplification; if research were truly a large net financial negative, universities wouldn't do it.  Successful research universities accrue benefits from research in terms of stature and reputation that are worth enough to make the enterprise worthwhile.  Of course, the danger is that the balance will shift enough that only the wealthiest, most prestigious universities will be able to afford groundbreaking research in expensive fields (typically the physical sciences and engineering).  

Anyway, if you want to understand the issues better, I encourage reading that document.  I'll write more about physics soon.