Friday, December 20, 2024

Technological civilization and losing object permanence

In the grand tradition of physicists writing about areas outside their expertise, I wanted to put down some thoughts on a societal trend.  This isn't physics or nanoscience, so feel free to skip this post.

Object permanence is a term from developmental psychology.  A person (or animal) has object permanence if they understand that something still exists even if they can't directly see it or interact with it in the moment.  If a kid realizes that their toy still exists even though they can't see it right now, they've got the concept.  

I'm wondering if modern technological civilization has an issue with an analog of object permanence.  Let me explain what I mean, why it's a serious problem, and end on a hopeful note by pointing out that even if this is the case, we have the tools needed to overcome this.

By the standards of basically any previous era, a substantial fraction of humanity lives in a golden age.  We have a technologically advanced, globe-spanning civilization.  A lot of people (though geographically very unevenly distributed) have grown up with comparatively clean water; comparatively plentiful food available through means other than subsistence agriculture; electricity; access to radio, television, and for the last couple of decades nearly instant access to communications and a big fraction of the sum total of human factual knowledge.  

Whether it's just human nature or a consequence of relative prosperity, there seems to be some timescale on the order of a few decades over which a non-negligible fraction of even the most fortunate seem to forget the hard lessons that got us to this point.  If they haven't seen something with their own eyes or experienced it directly, they decide it must not be a real issue.  I'm not talking about Holocaust deniers or conspiracy theorists who think the moon landings were fake.  There are a bunch of privileged people who have never personally known a time when tens of thousands of their neighbors died from childhood disease (you know, like 75 years ago, when 21,000 Americans were paralyzed every year from polio (!), proportionately like 50,000 today), who now think we should get rid of vaccines, and maybe germs aren't real.  Most people alive today were not alive the last time nuclear weapons were used, so some of them argue that nuclear weapons really aren't that bad (e.g. setting off 2000 one megaton bombs spread across the US would directly destroy less than 5% of the land area, so we're good, right?).  Or, we haven't had massive bank runs in the US since the 1930s, so some people now think that insuring bank deposits is a waste of resources and should stop.  I'll stop the list here, before veering into even more politically fraught territory.  I think you get my point, though - somehow chunks of modern society seem to develop collective amnesia, as if problems that we've never personally witnessed must have been overblown before or don't exist at all.  (Interestingly, this does not seem to happen for most technological problems.  You don't see many people saying, you know, maybe building fires weren't that big a deal, let's go back to the good old days before smoke alarms and sprinklers.)  

While the internet has downsides, including the ability to spread disinformation very effectively, all the available and stored knowledge also has an enormous benefit:  It should make it much harder than ever before for people to collectively forget the achievements of our species.  Sanitation, pasteurization, antibiotics, vaccinations - these are absolutely astonishing technical capabilities that were hard-won and have saved many millions of lives.  It's unconscionable that we are literally risking mass death by voluntarily forgetting or ignoring that.  Nuclear weapons are, in fact, terrible.  Insuring bank deposits with proper supervision of risk is a key factor that has helped stabilize economies for the last century.  We need to remember historical problems and their solutions, and make sure that the people setting policy are educated about these things.  They say that those who cannot remember the past are doomed to repeat it.  As we look toward the new year, I hope that those who are familiar with the hard-earned lessons of history are able to make themselves heard over the part of the populace who simply don't believe that old problems were real and could return.



Sunday, December 15, 2024

Items for discussion, including google's latest quantum computing result

As we head toward the end of the calendar year, a few items:

  • Google published a new result in Nature a few days ago.  This made a big news splash, including this accompanying press piece from google themselves, this nice article in Quanta, and the always thoughtful blog post by Scott Aaronson.  The short version:  Physical qubits as made today in the superconducting platform favored by google don't have the low error rates that you'd really like if you want to run general quantum algorithms on a quantum computer, which could certainly require millions of steps.  The hope of the community is to get around this using quantum error correction, where some number of physical qubits are used to function as one "logical" qubit.  If physical qubit error rates are sufficiently low, and these errors can be corrected with enough efficacy, the logical qubits can function better than the physical qubits, ideally being able to undergo sequential operations indefinitely without degradation of their information.   One technique for this is called a surface code.  Google have implemented this in their most recent 105-physical-qubit chip ("Willow"), and they seem to have crossed a huge threshold:  When they increase the size of their correction scheme (going from a 3 (physical qubit) \(\times\) 3 (physical qubit) array to 5 \(\times\) 5 to 7 \(\times\) 7), the error rates of the resulting logical qubits fall as hoped (see the back-of-envelope sketch after this list).  This is a big deal, as it implies that larger chips, if they could be implemented, should scale toward the desired performance.  This does not mean that general purpose quantum computers are just around the corner, but it's very encouraging.  There are many severe engineering challenges still in place.  For example, the present superconducting qubits must be tweaked and tuned.  The reason google only has 105 of them on the Willow chip is not that they can't fit more - it's that they have to have wires and control capacity to tune and run them.  A few thousand really good logical qubits would be needed to break RSA encryption, and there is no practical way to put millions of wires down a dilution refrigerator.  Rather, one will need cryogenic control electronics.
  • On a closely related point, google's article talks about how it would take a classical computer ten septillion years to do what its Willow chip can do.  This is based on a very particularly chosen problem (as I mentioned here five years ago) called random circuit sampling, looking at the statistical properties of the outcome of applying random gate sequences to a quantum computer.  From what I can tell, this is very different than what most people mean when they think of a problem to benchmark a quantum computer's advantage over a classical computer.  I suspect the typical tech-literate person considering quantum computing wants to know, if I ask a quantum computer and a classical computer to factor huge numbers or do some optimization problem, how much faster is the quantum computer for a given size of problem?  Random circuit sampling feels instead much more to me like comparing an experiment to a classical theory calculation.  For a purely classical analog, consider putting an airfoil in a windtunnel and measuring turbulent flow, and comparing with a computational fluids calculation.  Yes, the windtunnel can get you an answer very quickly, but it's not "doing" a calculation, from my perspective.  This doesn't mean random circuit sampling is a poor benchmark, just that people should understand it's rather different from the kind of quantum/classical comparison they may envision.
  • On one unrelated note:  Thanks to a timely inquiry from a reader, I have now added a search bar to the top of the blog.  (Just in time to capture the final decline of science blogging?)
  • On a second unrelated note:  I'd be curious to hear from my academic readers on how they are approaching generative AI, both on the instructional side (e.g., should we abandon traditional assignments and take-home exams?  How do we check to see if students are really learning vs. becoming dependent on tools that have dubious reliability?) and on the research side (e.g., what level of generative AI tool use is acceptable in paper or proposal writing?  What aspects of these tools are proving genuinely useful to PIs?  To students?  Clearly generative AI's ability to help with coding is very nice indeed!)
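
To put a rough number on the error-suppression claim in the first item above, here is a minimal back-of-envelope sketch (my own, not Google's analysis): below threshold, the logical error rate of a distance-\(d\) surface code is expected to shrink roughly as \((p_{\mathrm{phys}}/p_{\mathrm{th}})^{(d+1)/2}\).  The prefactor, physical error rate, and threshold used below are illustrative assumptions, not Willow's measured numbers.

    # Back-of-envelope sketch (not Google's analysis): how a surface code's
    # logical error rate is expected to shrink with code distance d when the
    # physical error rate is below threshold.  The common scaling is
    #   p_logical(d) ~ A * (p_phys / p_threshold)**((d + 1) / 2)
    # A, p_phys, and p_threshold below are illustrative assumptions.

    A = 0.1              # assumed prefactor
    p_phys = 3e-3        # assumed physical error rate per operation
    p_threshold = 1e-2   # assumed surface-code threshold

    for d in (3, 5, 7, 9, 11):          # code distance: d x d patch of data qubits
        p_logical = A * (p_phys / p_threshold) ** ((d + 1) / 2)
        n_physical = 2 * d**2 - 1       # data + measure qubits in a rotated surface-code patch
        print(f"d={d:2d}  ~{n_physical:3d} physical qubits  p_logical ~ {p_logical:.2e}")

The point is just the trend: each step up in code distance multiplies the qubit count but divides the logical error rate by a roughly constant factor, which is the scaling behavior the Willow experiment was testing.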

Saturday, December 07, 2024

Seeing through your head - diffuse imaging

From the medical diagnostic perspective (and for many other applications), you can understand why it might be very convenient to be able to perform some kind of optical imaging of the interior of what you'd ordinarily consider opaque objects.  Even when a wavelength range is chosen so that absorption is minimized, photons can scatter many times as they make their way through dense tissue like a breast.  We now have serious computing power and extremely sensitive photodetectors, which has led to the development of techniques for imaging through media that absorb and diffuse photons.  Here is a review of this topic from 2005, and another more recent one (pdf link here).  There are many cool approaches that can be combined, including using pulsed lasers to do time-of-flight measurements (review here), and using "structured illumination" (review here).   

Sure, point that laser at my head.  (Adapted from Figure 1 of this paper.)

I mention all of this to set the stage for this fun preprint, titled "Photon transport through the entire adult human head".  Sure, you think your head is opaque, but it only attenuates photon fluxes by a factor of around \(10^{18}\).  With 1 Watt of incident power at 800 nm wavelength spread out over a 25 mm diameter circle and pulsed 80 million times a second, time-resolved single-photon detectors like photomultiplier tubes can readily detect the many-times-scattered photons that straggle their way out of your head around 2 nanoseconds later.  (The distribution of arrival times contains a bunch of information.  Note that the speed of light in free space is around 30 cm/ns; even accounting for the index of refraction of tissue, those photons have bounced around a lot before getting through.)  The point of this is that those photons have passed through parts of the brain that are usually considered inaccessible.  This shows that one could credibly use spectroscopic methods to get information out of there, like blood oxygen levels.
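
As a sanity check on those numbers (my own back-of-envelope, not a calculation from the preprint), here is what a ~2 ns arrival delay implies for the distance the detected photons actually travel; the tissue refractive index below is an assumed typical value.

    # Rough estimate of how far the detected photons actually traveled inside
    # the head, using the ~2 ns arrival delay quoted above.  The tissue index
    # of refraction (~1.4) is an assumed typical value, not from the paper.

    c_vacuum_cm_per_ns = 30.0      # speed of light in free space, ~30 cm/ns
    n_tissue = 1.4                 # assumed average refractive index of tissue
    t_arrival_ns = 2.0             # typical arrival delay quoted above

    path_length_cm = (c_vacuum_cm_per_ns / n_tissue) * t_arrival_ns
    head_diameter_cm = 20.0        # assumed rough adult head diameter

    print(f"effective path length ~ {path_length_cm:.0f} cm")
    print(f"ratio to a straight-line ~{head_diameter_cm:.0f} cm traverse: "
          f"{path_length_cm / head_diameter_cm:.1f}x")

An effective path of roughly 40 cm through a ~20 cm head is a direct way of seeing just how many scattering events those surviving photons undergo.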

Friday, November 29, 2024

Foams! (or, why my split pea side dish boils over every Thanksgiving)

Foams can be great examples of mechanical metamaterials.  

Adapted from TOC figure of this paper
Consider my shaving cream.  You might imagine that the (mostly water) material would just pool as a homogeneous liquid, since water molecules have a strong attraction for one another.  However, my shaving cream contains surfactant molecules.  These little beasties have a hydrophilic/polar end and a hydrophobic/nonpolar end.  The surfactant molecules can lower the overall energy of the fluid+air system by lowering the energy cost of the liquid/surfactant/air interface compared with the liquid/air interface.  There is a balancing act between air pressure, surface tension/energy, and gravity, but under the right circumstances you end up with formation of a dense foam comprising many many tiny bubbles.  On the macroscale (much larger than the size of individual bubbles), the foam can look like a very squishy but somewhat mechanically integral solid - it can resist shear, at least a bit, and maintain its own shape against gravity.  For a recent review about this, try this paper (apologies for the paywall) or get a taste of this in a post from last year.
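
As a rough illustration of that balancing act (my own back-of-envelope, not from the review), the Laplace pressure jump across a curved liquid/gas interface, \(\Delta p = 2\gamma/r\), shows why micron-scale bubbles are so much stiffer than millimeter-scale ones; the surface tension value below is an assumed number for a surfactant-covered water interface.

    # Back-of-envelope (not from the review): the Laplace pressure jump across
    # a curved liquid/gas interface, delta_p = 2*gamma/r, grows rapidly as the
    # bubbles shrink.  gamma is an assumed value for surfactant-laden water.

    gamma = 0.03        # N/m, assumed surfactant-covered water/air surface tension
    for r in (1e-6, 10e-6, 1e-3):       # bubble radii: 1 um, 10 um, 1 mm
        delta_p = 2 * gamma / r         # Pa, pressure excess inside the bubble
        print(f"r = {r*1e6:7.1f} um  ->  delta_p ~ {delta_p:8.0f} Pa "
              f"(~{delta_p/1.013e5:.3f} atm)")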

What brought this to mind was my annual annoyance yesterday in preparing what has become a regular side dish at our family Thanksgiving.  That recipe begins with rinsing, soaking, and then boiling split peas in preparation for making a puree.  Every year, without fail, I try to keep a close eye on the split peas as they cook, because they tend to foam up.  A lot.  Interestingly, this happens regardless of how carefully I rinse them before soaking, and the foaming (a dense white foam of few-micron-scale bubbles) begins well before the liquid starts to boil.  I have now learned two things about this.  First, pea protein, which leaches out of the split peas, is apparently a well-known foam-inducing surfactant, as explained in this paper (which taught me that there is a journal called Food Hydrocolloids).  Second, next time I need to use a bigger pot and try adding a few drops of oil to see if that suppresses the foam formation.

Sunday, November 24, 2024

Nanopasta, no, really

Fig. 1 from the linked paper
Here is a light-hearted bit of research that touches on some fun physics.  As you might readily imagine, there is a good deal of interdisciplinary and industrial interest in wanting to create fine fibers out of solution-based materials.  One approach, with historical roots that go back even two hundred years before this 1887 paper, is electrospinning.  Take a material of interest, dissolve it in a solvent, and feed a drop of that solution onto the tip of an extremely sharp metal needle.  Then apply a big voltage (say a few to tens of kV) between that tip and a nearby grounded substrate.  If the solution has some amount of conductivity, the liquid will form a cone on the tip, and at sufficiently large voltages and small target distances, the droplet will become unstable and form a jet off into the tip-target space.  With the right range of fluid properties (viscosity, conductivity, density, concentration) and the right evaporation rate for the solvent, the result is a continuously forming, drying fiber that flows off the end of the tip.  A further instability amplifies any curves in the fiber path, so that you get a spiraling fiber spinning off onto the substrate.   There are many uses for such fibers, which can be very thin.
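
For a sense of the stresses involved (a hedged sketch with assumed illustrative parameters, not numbers from the paper), one can compare the electrostatic (Maxwell) stress pulling outward on the droplet surface to the capillary stress holding it together; the sharp-tip field expression below is a common rough approximation, not the geometry-specific field of the actual setup.

    # Rough sketch (not from the paper) of the competition that sets jet onset:
    # the electrostatic (Maxwell) stress pulling the droplet surface outward,
    # ~ eps0 * E^2 / 2, versus the capillary stress holding it together,
    # ~ 2*gamma/r.  All parameter values are illustrative assumptions.

    import math

    eps0 = 8.854e-12   # F/m, vacuum permittivity
    gamma = 0.04       # N/m, assumed surface tension of the spinning solution
    r_tip = 0.5e-3     # m, assumed needle-tip / droplet radius
    V = 15e3           # V, assumed applied voltage
    d = 0.10           # m, assumed tip-to-collector distance

    # common rough estimate of the field just above a sharp tip over a grounded plane
    E_tip = 2 * V / (r_tip * math.log(4 * d / r_tip))

    electric_stress = 0.5 * eps0 * E_tip**2   # Pa, outward electrostatic pull
    capillary_stress = 2 * gamma / r_tip      # Pa, restoring capillary pressure

    print(f"E at tip          ~ {E_tip:.2e} V/m")
    print(f"electric stress   ~ {electric_stress:.0f} Pa")
    print(f"capillary stress  ~ {capillary_stress:.0f} Pa")

When the outward electrical stress becomes comparable to the capillary stress, the cone-jet instability described above can set in.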

The authors of the paper in question wanted to make fibers from starch, which is nicely biocompatible for medical applications.  So, starting from wheat flour and formic acid, they worked out viable parameters and were able to electrospin fibers of wheat starch (including some gluten - sorry to those of you with gluten intolerances) into nanofibers 300-400 nm in diameter.  The underlying material is amorphous (so, no appreciable starch crystallization).  The authors had fun with this and called the result "nanopasta", but it may actually be useful for certain applications.


Friday, November 22, 2024

Brief items

 A few tidbits that I encountered recently:

  • The saga of Ranga Dias at Rochester draws to a close, as described by the Wall Street Journal.  It took quite some time for this to propagate through their system.  This comes after multiple internal investigations that somehow were ineffective, an external investigation, and a lengthy path through university procedures (presumably because universities have to be careful not to shortcut any of their processes, or they open themselves up to lawsuits).
  • At around the same time, Mikhail Eremets passed away.  He was a pioneer in high pressure measurements of material properties and in superconductivity in hydrides.
  • Coincidentally, this preprint appeared on the arXiv, a brief statement summarizing some of the evidence for relatively high temperature superconductivity in hydrides at high pressure.
  • Last week Carl Bender gave a very nice colloquium at Rice, where he spoke about a surprising result.  When we teach undergrad quantum mechanics, we tell students that the Hamiltonian (the expression with operators that gives the total energy of a quantum system) has to be Hermitian, because this guarantees that the energy eigenvalues have to be real numbers.  Generically, non-Hermitian Hamiltonians would imply complex energies, which would imply non-conservation of total probability.  That is one way of treating open quantum systems, when particles can come and go, but for closed quantum systems, we like real energies.  Anyway, it turns out that one can write an explicitly complex Hamiltonian that nonetheless has a completely real energy spectrum, and this has deep connections to PT (parity-time) symmetry (see the toy example after this list).  Here is a nice treatment of this.
  • Just tossing this out:  The entire annual budget for the state of Arkansas is $6.5B.  The annual budget for Stanford University is $9.5B.  
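
Regarding the Carl Bender item above, here is a standard two-level toy example from the PT-symmetry literature (not something from the colloquium itself) showing a manifestly non-Hermitian matrix whose eigenvalues are nonetheless real; the parameter values are arbitrary illustrative choices.

    # Toy illustration (a standard 2x2 example from the PT-symmetry literature,
    # not from the colloquium): the matrix
    #     H = [[ r*exp(i*theta), s ],
    #          [ s, r*exp(-i*theta) ]]
    # is non-Hermitian for theta != 0, yet its eigenvalues
    #     E = r*cos(theta) +/- sqrt(s**2 - r**2 * sin(theta)**2)
    # are real as long as s**2 >= r**2 * sin(theta)**2 (the "unbroken" PT phase).

    import numpy as np

    r, s, theta = 1.0, 2.0, 0.7    # assumed illustrative parameters
    H = np.array([[r * np.exp(1j * theta), s],
                  [s, r * np.exp(-1j * theta)]])

    evals = np.linalg.eigvals(H)
    print("Hermitian?          ", np.allclose(H, H.conj().T))    # False
    print("eigenvalues:        ", np.round(evals, 6))
    print("imaginary parts ~ 0?", np.allclose(evals.imag, 0.0))  # True in the unbroken phase
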
More soon.

Sunday, November 17, 2024

Really doing mechanics at the quantum level

A helpful ad from Science Made Stupid.
Since before the development of micro- and nanoelectromechanical techniques, there has been an interest in making actual mechanical widgets that show quantum behavior.  There is no reason that we should not be able to make a mechanical resonator, like a guitar string or a cantilevered beam, with a high enough resonance frequency so that when it is placed at low temperatures (\(\hbar \omega \gg k_{\mathrm{B}}T\)), the resonator can sit in its quantum mechanical ground state.  Indeed, achieving this was Science's breakthrough of the year in 2010.  
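
For a sense of scale (my own numbers, not from any particular paper), the condition \(\hbar \omega \gg k_{\mathrm{B}}T\) translates into resonance frequencies in the GHz range at dilution-refrigerator temperatures, as the little estimate below shows.

    # Quick sense of scale for the hbar*omega >> k_B*T condition: the frequency
    # at which h*f equals k_B*T, for a few achievable cryostat temperatures.
    # (Illustrative numbers; "ground state" really wants h*f several times k_B*T.)

    h = 6.626e-34      # J*s, Planck constant
    kB = 1.381e-23     # J/K, Boltzmann constant

    for T in (4.0, 0.1, 0.01):                 # K: pumped He-4, and dilution-fridge temps
        f_crossover = kB * T / h               # frequency where h*f = k_B*T
        print(f"T = {T:5.2f} K  ->  h*f = k_B*T at f ~ {f_crossover/1e9:6.2f} GHz")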

This past week, a paper was published from ETH Zurich in which an aluminum nitride mechanical resonator was actually used as a qubit, where the ground and first excited states of this quantum (an)harmonic oscillator represented \(|0 \rangle\) and \(|1 \rangle\).  They demonstrate actual quantum gate operations on this mechanical system (which is coupled to a more traditional transmon qubit - the setup is explained in this earlier paper).  

One key trick to being able to make a qubit out of a mechanical oscillator is to have sufficiently large anharmonicity.  An ideal, perfectly harmonic quantum oscillator has an energy spectrum given by \((n + 1/2)\hbar \omega\), where \(n\) is the number of quanta of excitations in the resonator.  In that situation, the energy difference between adjacent levels is always \(\hbar \omega\).  The problem with this from the qubit perspective is, you want to have a quantum two-level system, and how can you controllably drive transitions just between a particular pair of levels when all of the adjacent level transitions cost the same energy?  The authors of this recent paper have achieved a strong anharmonicity, basically making the "spring" of the mechanical resonator softer in one displacement direction than the other.  The result is that the energy difference between levels \(|0\rangle\) and \(|1\rangle\) is very different than the energy difference between levels \(|1\rangle\) and \(|2\rangle\), etc.  (In typical superconducting qubits, the resonance is not mechanical but an electrical \(LC\)-type, and a Josephson junction acts like a non-linear inductor, giving the desired anharmonic properties.)  This kind of mechanical anharmonicity means that you can effectively have interactions between vibrational excitations ("phonon-phonon"), analogous to what the circuit QED folks can do.  Neat stuff.
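
As a cartoon of the anharmonicity argument (illustrative numbers only, not the parameters of the ETH device), here is how an anharmonic shift of the level energies splits the \(|0\rangle \rightarrow |1\rangle\) and \(|1\rangle \rightarrow |2\rangle\) transition frequencies apart, so that a drive tuned to the first no longer excites the second.

    # Sketch of why anharmonicity matters for a qubit: with level energies of a
    # (transmon-like) form  E_n ~ h*f0*n - (alpha/2)*n*(n-1),  the 0->1 and 1->2
    # transition frequencies differ by alpha/h, so a drive resonant with f_01 is
    # detuned from the 1->2 transition.  Numbers are illustrative assumptions.

    h = 6.626e-34          # J*s, Planck constant
    f0 = 5.0e9             # Hz, assumed bare resonance frequency
    alpha = h * 200e6      # J, assumed anharmonicity (200 MHz in frequency units)

    def E(n):
        """Energy of level n above the ground state (harmonic term + anharmonic shift)."""
        return h * f0 * n - 0.5 * alpha * n * (n - 1)

    f_01 = (E(1) - E(0)) / h
    f_12 = (E(2) - E(1)) / h
    print(f"f_01 = {f_01/1e9:.3f} GHz")
    print(f"f_12 = {f_12/1e9:.3f} GHz   (shifted down by {(f_01 - f_12)/1e6:.0f} MHz)")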