Saturday, January 04, 2025

This week in the arXiv: quantum geometry, fluid momentum "tunneling", and pasta sauce

Three papers caught my eye the other day on the arXiv at the start of the new year:

arXiv:2501.00098 - J. Yu et al., "Quantum geometry in quantum materials" - I hope to write up something about quantum geometry soon, but I wanted to point out this nice review even if I haven't done my legwork yet.  The ultrabrief point:  The single-particle electronic states in crystalline solids may be written as Bloch waves, of the form \(u_{n \mathbf{k}}(\mathbf{r}) \exp(i \mathbf{k} \cdot \mathbf{r})\), where the (crystal) momentum is given by \(\hbar \mathbf{k}\) and \(u_{n \mathbf{k}}\) is a function with the real-space periodicity of the crystal lattice and contains an implicit \(\mathbf{k}\) dependence.  You can get very far in understanding solid-state physics without worrying about this, but it turns out that there are a number of very important phenomena that originate from the oft-neglected \(\mathbf{k}\) dependence of \(u_{n \mathbf{k}}\).  These include the anomalous Hall effect, the (intrinsic) spin Hall effect, the orbital Hall effect, etc.  Basically the \(\mathbf{k}\) dependence of \(u_{n \mathbf{k}}\) in the form of derivatives defines an internal "quantum" geometry of the electronic structure.  This review is a look at the consequences of quantum geometry on things like superconductivity, magnetic excitations, excitons, Chern insulators, etc. in quantum materials.
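(A one-equation gloss from me, not a quote from the review:  the derivatives in question are usually packaged into the quantum geometric tensor for band \(n\),
\[ Q^{(n)}_{\mu\nu}(\mathbf{k}) = \langle \partial_{k_\mu} u_{n\mathbf{k}} | \left(1 - |u_{n\mathbf{k}}\rangle\langle u_{n\mathbf{k}}|\right) | \partial_{k_\nu} u_{n\mathbf{k}} \rangle , \]
whose real part is the quantum metric (a notion of distance between neighboring Bloch states in \(\mathbf{k}\)-space) and whose imaginary part is \(-1/2\) times the Berry curvature responsible for the anomalous and intrinsic spin Hall effects mentioned above.)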

Fig. 1 from arXiv:2501.01253
arXiv:2501.01253 - B. Coquinot et al., "Momentum tunnelling between nanoscale liquid flows" - In electronic materials there is a phenomenon known as Coulomb drag, in which a current driven through one electronic system (often a 2D electron gas) leads, through Coulomb interactions, to a current in an adjacent but otherwise electrically isolated electronic system (say another 2D electron gas separated from the first by a few-nm insulating layer).  This paper argues that there should be a similar-in-spirit phenomenon when a polar liquid (like water) flows on one side of a thin membrane (like one- or few-layer graphene, which can support electronic excitations like plasmons) - that this could drive flow of a polar fluid on the other side of the membrane (see figure).  They cast this in the language of momentum tunneling across the membrane, but the point is that it's some inelastic scattering process mediated by excitations in the membrane.  Neat idea.

arXiv:2501.00536 - G. Bartolucci et al., "Phase behavior of Cacio and Pepe sauce" - Cacio e pepe is a wonderful Italian pasta dish with a sauce made from pecorino cheese, pepper, and hot pasta cooking water that contains dissolved starch.  When prepared well, it's incredibly creamy, smooth, and satisfying.  The authors here perform a systematic study of the sauce properties as a function of temperature and starch concentration relative to cheese content, finding the part of parameter space to avoid if you don't want the sauce to "break" (condensing out clumps of cheese-rich material and ruining the sauce texture).  That's cool, but what is impressive is that they are actually able to model the phase stability mathematically and come up with a scientifically justified version of the recipe.  Very fun.


Tuesday, December 31, 2024

End of the year thoughts - scientific philanthropy and impact

As we head into 2025, with the prospects for increased (US) government investment in science, engineering, and STEM education seeming very limited, I wanted to revisit a topic that I wrote about over a decade ago (!!!): the role of philanthropy and foundations in these areas.  

Personally, I think the case for government support of scientific research and education is overwhelmingly clear; while companies depend on having an educated technical workforce (at least for now) and continually improving technology, they are under great short-term financial pressures, and genuinely long-term investment in research is rare.  Foundations are not a substitute for nation-state levels of support, but they are a critical component of the research and education landscape.  

Annual citations of the EPR paper from Web of Science, a case study in the long-term impact of some "pure" scientific research, and giving hope to practicing scientists that surely our groundbreaking work will be appreciated sixty years after publication.  

A key question I've wondered about for a long time is how to properly judge the impact that research-supporting foundations are making.  The Science Philanthropy Alliance is a great organization that considers these issues deeply.

The nature of long-term research is that it often takes a long time for its true impact (I don't mean just citation counts, but those are an indicator of activity) to be felt.  One (admittedly extreme) example is shown here, the citations-per-year (from Web of Science) of the 1935 Einstein/Podolsky/Rosen paper about entanglement.  (Side note:  You have to love the provocative title, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?", which from the point of view of the authors at the time satisfies Betteridge's Law of Headlines.)  There are few companies that would be willing to invest in supporting research that won't have its true heyday for five or six decades.

One additional tricky bit is that grants are usually given to people and organizations who are already active.  It's often not simple to point to some clear result or change in output that absolutely would not have happened without foundation support.  This is exacerbated by the fact that grants in science and engineering are often given to people and organizations who are not just active but already very well supported - betting on an odds-on favorite is a low-risk strategy. 

Many foundations do think very carefully about what areas to support, because they want to "move the needle".  For example, some scientific foundations are consciously reluctant to support closer-to-clinical-stage cancer research, since the total annual investment by governments and pharmaceutical companies in that area numbers in the many billions of dollars, and a modest foundation contribution would be a tiny delta on top of that.  

Here is a list of the wealthiest charitable foundations (only a few of which support scientific research and/or education) and their endowments.  Nearly all of the science-related ones are also plugged in here.  A rough estimate of annual expenditures from endowed entities is about 5% of their holdings.  Recently I've come to think about private universities as one crude comparator for impact.  If a foundation has the same size endowment as a prestigious research university, I think it's worth thinking about the relative downstream impacts of those entities.  (The Novo Nordisk Foundation has an endowment three times the size of Harvard's.)  

Another comparator would be the annual research expenditures of a relevant funding agency.  The US NSF put forward $234M into major research instrumentation and facilities in FY2024.  A foundation with a $5B endowment could in principle support all of that from endowment returns.  This lets me make my semiregular pitch about foundation or corporate support for research infrastructure and user facilities around the US.  The entire annual budget for the NSF's NNCI, which supports shared nanofabrication and characterization facilities around the US, is about $16M.   That's a niche where comparatively modest foundation (or corporate) support could have serious impact for interdisciplinary research and education across the country.  I'm sure there are other similar areas out there, and I hope someone is thinking about this.  
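To put rough numbers on that comparison (my arithmetic, using the ~5% payout rule of thumb from above):
\[ 0.05 \times \$5\,\mathrm{B} = \$250\,\mathrm{M} \gtrsim \$234\,\mathrm{M}, \qquad 0.05 \times \$0.32\,\mathrm{B} \approx \$16\,\mathrm{M} . \]
In other words, the annual payout from a $5B endowment would roughly cover the NSF's FY2024 major research instrumentation spending, and an endowment of only a few hundred million dollars could sustain something the size of the NNCI indefinitely.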

Anyway, thanks to my readers - this is now the 20th year of this blog's existence (!!! again), and I hope to be able to keep it up well in the new year.



Friday, December 20, 2024

Technological civilization and losing object permanence

In the grand tradition of physicists writing about areas outside their expertise, I wanted to put down some thoughts on a societal trend.  This isn't physics or nanoscience, so feel free to skip this post.

Object permanence is a term from developmental psychology.  A person (or animal) has object permanence if they understand that something still exists even if they can't directly see it or interact with it in the moment.  If a kid realizes that their toy still exists even though they can't see it right now, they've got the concept.  

I'm wondering if modern technological civilization has an issue with an analog of object permanence.  Let me explain what I mean, why it's a serious problem, and end on a hopeful note by pointing out that even if this is the case, we have the tools needed to overcome this.

By the standards of basically any previous era, a substantial fraction of humanity lives in a golden age.  We have a technologically advanced, globe-spanning civilization.  A lot of people (though geographically very unevenly distributed) have grown up with comparatively clean water; comparatively plentiful food available through means other than subsistence agriculture; electricity; access to radio, television, and for the last couple of decades nearly instant access to communications and a big fraction of the sum total of human factual knowledge.  

Whether it's just human nature or a consequence of relative prosperity, there seems to be some timescale on the order of a few decades over which a non-negligible fraction of even the most fortunate seem to forget the hard lessons that got us to this point.  If they haven't seen something with their own eyes or experienced it directly, they decide it must not be a real issue.  I'm not talking about Holocaust deniers or conspiracy theorists who think the moon landings were fake.  There are a bunch of privileged people who have never personally known a time when tens of thousands of their neighbors died from childhood disease (you know, like 75 years ago, when 21,000 Americans were paralyzed every year from polio (!), proportionately like 50,000 today), who now think we should get rid of vaccines, and maybe germs aren't real.  Most people alive today were not alive the last time nuclear weapons were used, so some of them argue that nuclear weapons really aren't that bad (e.g. setting off 2000 one megaton bombs spread across the US would directly destroy less than 5% of the land area, so we're good, right?).  Or, we haven't had massive bank runs in the US since the 1930s, so some people now think that insuring bank deposits is a waste of resources and should stop.  I'll stop the list here, before veering into even more politically fraught territory.  I think you get my point, though - somehow chunks of modern society seem to develop collective amnesia, as if problems that we've never personally witnessed must have been overblown before or don't exist at all.  (Interestingly, this does not seem to happen for most technological problems.  You don't see many people saying, you know, maybe building fires weren't that big a deal, let's go back to the good old days before smoke alarms and sprinklers.)  

While the internet has downsides, including the ability to spread disinformation very effectively, all the available and stored knowledge also has an enormous benefit:  It should make it much harder than ever before for people to collectively forget the achievements of our species.  Sanitation, pasteurization, antibiotics, vaccinations - these are absolutely astonishing technical capabilities that were hard-won and have saved many millions of lives.  It's unconscionable that we are literally risking mass death by voluntarily forgetting or ignoring that.  Nuclear weapons are, in fact, terrible.  Insuring bank deposits with proper supervision of risk is a key factor that has helped stabilize economies for the last century.  We need to remember historical problems and their solutions, and make sure that the people setting policy are educated about these things.  They say that those who cannot remember the past are doomed to repeat it.  As we look toward the new year, I hope that those who are familiar with the hard-earned lessons of history are able to make themselves heard over the part of the populace who simply don't believe that old problems were real and could return.



Sunday, December 15, 2024

Items for discussion, including google's latest quantum computing result

As we head toward the end of the calendar year, a few items:

  • Google published a new result in Nature a few days ago.  This made a big news splash, including this accompanying press piece from google themselves, this nice article in Quanta, and the always thoughtful blog post by Scott Aaronson.  The short version:  Physical qubits as made today in the superconducting platform favored by google don't have the low error rates that you'd really like if you want to run general quantum algorithms on a quantum computer, which could certainly require millions of steps.  The hope of the community is to get around this using quantum error correction, where some number of physical qubits are used to function as one "logical" qubit.  If physical qubit error rates are sufficiently low, and these errors can be corrected with enough efficacy, the logical qubits can function better than the physical qubits, ideally being able to undergo sequential operations indefinitely without degradation of their information.  One technique for this is called a surface code.  Google has implemented this in their most recent 105-physical-qubit chip ("Willow"), and they seem to have crossed a huge threshold:  When they increase the size of their correction scheme (going from a 3 (physical qubit) \(\times\) 3 (physical qubit) patch to 5 \(\times\) 5 to 7 \(\times\) 7), the error rates of the resulting logical qubits fall as hoped (see the scaling sketched just after this list).  This is a big deal, as it implies that larger chips, if they could be implemented, should scale toward the desired performance.  This does not mean that general-purpose quantum computers are just around the corner, but it's very encouraging.  There are many severe engineering challenges still in place.  For example, the present superconducting qubits must be tweaked and tuned.  The reason google only has 105 of them on the Willow chip is not that they can't fit more - it's that they have to have wires and control capacity to tune and run them.  A few thousand really good logical qubits would be needed to break RSA encryption, and there is no practical way to put millions of wires down a dilution refrigerator.  Rather, one will need cryogenic control electronics.
  • On a closely related point, google's article talks about how it would take a classical computer ten septillion years to do what its Willow chip can do.  This is based on a very particular choice of problem (as I mentioned here five years ago) called random circuit sampling, looking at the statistical properties of the outcome of applying random gate sequences to a quantum computer.  From what I can tell, this is very different from what most people mean when they think of a problem to benchmark a quantum computer's advantage over a classical computer.  I suspect the typical tech-literate person considering quantum computing wants to know, if I ask a quantum computer and a classical computer to factor huge numbers or do some optimization problem, how much faster is the quantum computer for a given size of problem?  Random circuit sampling feels instead much more to me like comparing an experiment to a classical theory calculation.  For a purely classical analog, consider putting an airfoil in a wind tunnel and measuring turbulent flow, and comparing with a computational fluid dynamics calculation.  Yes, the wind tunnel can get you an answer very quickly, but it's not "doing" a calculation, from my perspective.  This doesn't mean random circuit sampling is a poor benchmark, just that people should understand it's rather different from the kind of quantum/classical comparison they may envision.
  • On one unrelated note:  Thanks to a timely inquiry from a reader, I have now added a search bar to the top of the blog.  (Just in time to capture the final decline of science blogging?)
  • On a second unrelated note:  I'd be curious to hear from my academic readers on how they are approaching generative AI, both on the instructional side (e.g., should we abandon traditional assignments and take-home exams?  How do we check to see if students are really learning vs. becoming dependent on tools that have dubious reliability?) and on the research side (e.g., what level of generative AI tool use is acceptable in paper or proposal writing?  What aspects of these tools are proving genuinely useful to PIs?  To students?  Clearly generative AI's ability to help with coding is very nice indeed!)
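A bit more on the error correction point in the first bullet above (my paraphrase of standard surface code expectations, not numbers from the google paper):  the whole game is that the logical error rate per cycle should be suppressed exponentially in the code distance \(d\) (the \(3 \times 3\), \(5 \times 5\), and \(7 \times 7\) patches correspond to \(d = 3, 5, 7\)), roughly as
\[ \epsilon_d \approx A\, \Lambda^{-(d+1)/2} , \]
where \(\Lambda\) is the factor by which the logical error rate falls each time the distance is increased by two.  Being "below threshold" means the physical error rates are low enough that \(\Lambda > 1\), so that making the code bigger actually helps - and that is exactly the trend the Willow data show as \(d\) goes from 3 to 5 to 7.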

Saturday, December 07, 2024

Seeing through your head - diffuse imaging

From the medical diagnostic perspective (and for many other applications), you can understand why it might be very convenient to be able to perform some kind of optical imaging of the interior of what you'd ordinarily consider opaque objects.  Even when a wavelength range is chosen so that absorption is minimized, photons can scatter many times as they make their way through dense tissue like a breast.  We now have serious computing power and extremely sensitive photodetectors, and that combination has led to the development of techniques for imaging through media that absorb and diffuse photons.  Here is a review of this topic from 2005, and another more recent one (pdf link here).  There are many cool approaches that can be combined, including using pulsed lasers to do time-of-flight measurements (review here), and using "structured illumination" (review here).   

Sure, point that laser at my head.  (Adapted from Figure 1 of this paper.)

I mention all of this to set the stage for this fun preprint, titled "Photon transport through the entire adult human head".  Sure, you think your head is opaque, but it only attenuates photon fluxes by a factor of around \(10^{18}\).  With 1 Watt of incident power at 800 nm wavelength spread out over a 25 mm diameter circle and pulsed 80 million times a second, time-resolved single-photon detectors like photomultiplier tubes can readily detect the many-times-scattered photons that straggle their way out of your head around 2 nanoseconds later.  (The distribution of arrival times contains a bunch of information.  Note that the speed of light in free space is around 30 cm/ns; even accounting for the index of refraction of tissue, those photons have bounced around a lot before getting through.)  The point of this is that those photons have passed through parts of the brain that are usually considered inaccessible.  This shows that one could credibly use spectroscopic methods to get information out of there, like blood oxygen levels.
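A quick back-of-the-envelope from me (not from the preprint itself) on why single-photon counting is the name of the game:  at 800 nm each photon carries \(hc/\lambda \approx 2.5 \times 10^{-19}\) J, so 1 W of incident power corresponds to
\[ \frac{1~\mathrm{W}}{2.5 \times 10^{-19}~\mathrm{J/photon}} \approx 4 \times 10^{18}~\mathrm{photons/s} , \]
and after attenuation by a factor of \(\sim 10^{18}\) you're left hoping to catch a few photons per second on the far side of the head - hence the time-resolved single-photon detectors and long integration times.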

Friday, November 29, 2024

Foams! (or, why my split pea side dish boils over every Thanksgiving)

Foams can be great examples of mechanical metamaterials.  

Adapted from TOC figure of this paper
Consider my shaving cream.  You might imagine that the (mostly water) material would just pool as a homogeneous liquid, since water molecules have a strong attraction for one another.  However, my shaving cream contains surfactant molecules.  These little beasties have a hydrophilic/polar end and a hydrophobic/nonpolar end.  The surfactant molecules can lower the overall energy of the fluid+air system by lowering the energy cost of the liquid/surfactant/air interface compared with the liquid/air interface.  There is a balancing act to be struck between air pressure, surface tension/energy, and gravity, but under the right circumstances you end up with a dense foam comprising many, many tiny bubbles.  On the macroscale (much larger than the size of individual bubbles), the foam can look like a very squishy but somewhat mechanically integral solid - it can resist shear, at least a bit, and maintain its own shape against gravity.  For a recent review about this, try this paper (apologies for the paywall), or get a taste of this in a post from last year. 
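To put one number on that balancing act (a textbook estimate, not something from the linked review):  the pressure inside a small gas bubble of radius \(R\) exceeds that of the surrounding liquid by the Young-Laplace amount
\[ \Delta P = \frac{2\gamma}{R} , \]
where \(\gamma\) is the surface tension.  For surfactant-laden water with \(\gamma \sim 0.03\) N/m and \(R \sim 10~\mu\mathrm{m}\), that's \(\Delta P \sim 6\) kPa.  Surfactants matter precisely because they lower \(\gamma\), shrinking the energetic penalty for all that interfacial area and helping the foam persist.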

What brought this to mind was my annual annoyance yesterday in preparing what has become a regular side dish at our family Thanksgiving.  That recipe begins with rinsing, soaking, and then boiling split peas in preparation for making a puree.  Every year, without fail, I try to keep a close eye on the split peas as they cook, because they tend to foam up.  A lot.  Interestingly, this happens regardless of how carefully I rinse them before soaking, and the foaming (a dense white foam of few-micron-scale bubbles) begins well before the liquid starts to boil.  I have now learned two things about this.  First, pea protein, which leaches out of the split peas, is apparently a well-known foam-inducing surfactant, as explained in this paper (which taught me that there is a journal called Food Hydrocolloids).  Second, next time I need to use a bigger pot and try adding a few drops of oil to see if that suppresses the foam formation.

Sunday, November 24, 2024

Nanopasta, no, really

Fig. 1 from the linked paper
Here is a light-hearted bit of research that touches on some fun physics.  As you might readily imagine, there is a good deal of interdisciplinary and industrial interest in wanting to create fine fibers out of solution-based materials.  One approach, which has historical roots that go back even two hundred years before this 1887 paper, is electrospinning.  Take a material of interest, dissolve it in a solvent, and feed a drop of that solution onto the tip of an extremely sharp metal needle.  Then apply a big voltage (say a few to tens of kV) between that tip and a nearby grounded substrate.  If the solution has some amount of conductivity, the liquid will form a cone on the tip, and at sufficiently large voltages and small target distances, the droplet will become unstable and form a jet off into the tip-target space.  With the right range of fluid properties (viscosity, conductivity, density, concentration) and the right evaporation rate for the solvent, the result is a continuously forming, drying fiber that flows off the end of the tip.  A further instability amplifies any curves in the fiber path, so that you get a spiraling fiber spinning off onto the substrate.   There are many uses for such fibers, which can be very thin.
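A crude way to see why kV-scale voltages and a sharp tip are needed (my back-of-the-envelope, not the analysis in the paper):  the jet forms roughly when the outward electrostatic (Maxwell) stress on the liquid surface becomes comparable to the capillary pressure holding the droplet together,
\[ \frac{\epsilon_0 E^2}{2} \sim \frac{2\gamma}{r} , \]
where \(\gamma\) is the surface tension of the solution and \(r\) is the radius of curvature at the tip.  For water-like \(\gamma\) and a tip radius of tens of microns, that implies fields of order \(10^{7}\) V/m - plausible with tens of kV applied over a few millimeters, helped along by the field enhancement at the sharp tip.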

The authors of the paper in question wanted to make fibers from starch, which is nicely biocompatible for medical applications.  So, starting from wheat flour and formic acid, they worked out viable parameters and were able to electrospin wheat starch (including some gluten - sorry to those of you with gluten intolerances) into nanofibers 300-400 nm in diameter.  The underlying material is amorphous (so, no appreciable starch crystallization).  The authors had fun with this and called the result "nanopasta", but it may actually be useful for certain applications.