
Sunday, December 15, 2024

Items for discussion, including Google's latest quantum computing result

As we head toward the end of the calendar year, a few items:

  • Google published a new result in Nature a few days ago.  This made a big news splash, including this accompanying press piece from Google themselves, this nice article in Quanta, and the always thoughtful blog post by Scott Aaronson.  The short version:  Physical qubits as made today in the superconducting platform favored by Google don't have the low error rates that you'd really like if you want to run general quantum algorithms, which could easily require millions of steps.  The hope of the community is to get around this using quantum error correction, in which some number of physical qubits are used to function as one "logical" qubit.  If physical qubit error rates are sufficiently low, and these errors can be corrected with enough efficacy, the logical qubits can perform better than the physical qubits, ideally being able to undergo sequential operations indefinitely without degradation of their information.  One technique for this is called the surface code.  Google has implemented this on their most recent 105-physical-qubit chip ("Willow"), and they seem to have crossed a huge threshold:  When they increase the size of their correction scheme (going from a 3 (physical qubit) \(\times\) 3 (physical qubit) array to 5 \(\times\) 5 to 7 \(\times\) 7), the error rates of the resulting logical qubits fall as hoped (see the error-suppression sketch after this list).  This is a big deal, as it implies that larger chips, if they could be implemented, should scale toward the desired performance.  It does not mean that general purpose quantum computers are just around the corner, but it's very encouraging.  There are many severe engineering challenges still in place.  For example, the present superconducting qubits must be individually tweaked and tuned.  The reason Google only has 105 of them on the Willow chip is not that they can't fit more - it's that they have to have the wiring and control capacity to tune and run them.  A few thousand really good logical qubits would be needed to break RSA encryption, and there is no practical way to run millions of wires down a dilution refrigerator; rather, one will need cryogenic control electronics.
  • On a closely related point, Google's article talks about how it would take a classical computer ten septillion years to do what its Willow chip can do.  This is based on a very particular choice of problem (as I mentioned here five years ago) called random circuit sampling: looking at the statistical properties of the outcomes of applying random gate sequences to a quantum computer (see the random-circuit sketch after this list).  From what I can tell, this is very different from what most people mean when they think of a problem to benchmark a quantum computer's advantage over a classical computer.  I suspect the typical tech-literate person considering quantum computing wants to know:  if I ask a quantum computer and a classical computer to factor huge numbers or solve some optimization problem, how much faster is the quantum computer for a given size of problem?  Random circuit sampling feels to me much more like comparing an experiment to a classical theory calculation.  For a purely classical analog, consider putting an airfoil in a wind tunnel and measuring turbulent flow, and comparing with a computational fluid dynamics calculation.  Yes, the wind tunnel can get you an answer very quickly, but it's not "doing" a calculation, from my perspective.  This doesn't mean random circuit sampling is a poor benchmark, just that people should understand it's rather different from the kind of quantum/classical comparison they may envision.
  • On one unrelated note:  Thanks to a timely inquiry from a reader, I have now added a search bar to the top of the blog.  (Just in time to capture the final decline of science blogging?)
  • On a second unrelated note:  I'd be curious to hear from my academic readers on how they are approaching generative AI, both on the instructional side (e.g., should we abandon traditional assignments and take-home exams?  How do we check to see if students are really learning vs. becoming dependent on tools that have dubious reliability?) and on the research side (e.g., what level of generative AI tool use is acceptable in paper or proposal writing?  What aspects of these tools are proving genuinely useful to PIs?  To students?  Clearly generative AI's ability to help with coding is very nice indeed!)
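
For a feel of why "error rates fall as the code grows" is such a big deal, here is a minimal sketch in Python.  The suppression factor and the distance-3 error rate below are illustrative placeholders rather than numbers from the Nature paper; the point is that a roughly constant suppression factor per step in code distance means exponential improvement in the logical qubit, while the physical-qubit cost only grows quadratically.

# Illustrative sketch, not the paper's numbers: if each increase of the code
# distance d by 2 suppresses the logical error rate by a roughly constant
# factor Lambda, the logical error rate shrinks exponentially in d, while a
# distance-d surface code patch uses roughly 2*d*d - 1 physical qubits.

LAMBDA = 2.0     # assumed suppression factor per distance step (illustrative)
EPS_D3 = 3e-3    # assumed logical error per cycle at distance d = 3 (illustrative)

def logical_error_per_cycle(d, eps_d3=EPS_D3, lam=LAMBDA):
    """Logical error per cycle for an odd code distance d >= 3."""
    steps = (d - 3) // 2          # number of d -> d + 2 increases beyond d = 3
    return eps_d3 / lam**steps

for d in (3, 5, 7, 9, 11):
    n_phys = 2 * d * d - 1
    print(f"d = {d:2d} (~{n_phys:3d} physical qubits): "
          f"~{logical_error_per_cycle(d):.1e} logical errors per cycle")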
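
And since "random circuit sampling" sounds more exotic than it is, here is a toy classical simulation of the idea for a handful of qubits.  The circuit layout and the linear cross-entropy score are schematic illustrations, not Google's actual benchmark; the whole point of the experiment is that this brute-force state-vector approach becomes hopeless once you get to many tens of qubits.

import numpy as np

rng = np.random.default_rng(0)
n = 8                    # number of qubits, small enough to simulate exactly
dim = 2**n

def random_su2():
    """A Haar-random single-qubit unitary via QR of a random complex matrix."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(state, gate, qubit):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    psi = np.moveaxis(state.reshape([2] * n), qubit, 0)
    psi = np.tensordot(gate, psi, axes=(1, 0))
    return np.moveaxis(psi, 0, qubit).reshape(dim)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z between two qubits."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(dim)

# Random circuit: layers of random single-qubit gates plus CZs on neighboring pairs.
state = np.zeros(dim, dtype=complex)
state[0] = 1.0
for layer in range(12):
    for q in range(n):
        state = apply_1q(state, random_su2(), q)
    for q in range(layer % 2, n - 1, 2):
        state = apply_cz(state, q, q + 1)

probs = np.abs(state)**2
probs /= probs.sum()                       # guard against rounding drift

# "Run the experiment": draw bitstrings, then score them with the ideal probabilities.
samples = rng.choice(dim, size=2000, p=probs)
xeb = dim * probs[samples].mean() - 1.0    # linear cross-entropy benchmark
print(f"linear XEB of ideal samples: {xeb:.2f} (about 1 for a faithful device, 0 for pure noise)")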

Saturday, December 07, 2024

Seeing through your head - diffuse imaging

From the medical diagnostic perspective (and for many other applications), you can understand why it would be very convenient to be able to perform some kind of optical imaging of the interior of what you'd ordinarily consider opaque objects.  Even when a wavelength range is chosen so that absorption is minimized, photons can scatter many times as they make their way through dense tissue like a breast.  We now have serious computing power and extremely sensitive photodetectors, and this has led to the development of techniques for imaging through media that absorb and diffuse photons.  Here is a review of this topic from 2005, and another more recent one (pdf link here).  There are many cool approaches that can be combined, including using pulsed lasers to do time-of-flight measurements (review here) and using "structured illumination" (review here).
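
Here is a minimal Monte Carlo sketch (my own toy model, not taken from the reviews above) of why the arrival-time distribution is informative: photons take randomly oriented steps with exponentially distributed free paths through a scattering slab, and we record when the survivors emerge from the far side.  The slab thickness, scattering length, and absorption probability are made-up illustrative values.

import numpy as np

rng = np.random.default_rng(1)

L = 40.0            # slab thickness, mm (illustrative)
ell = 1.0           # scattering mean free path, mm (illustrative)
p_absorb = 0.002    # chance of absorption at each scattering event (illustrative)
c_tissue = 300.0 / 1.4    # speed of light in tissue, mm/ns (n ~ 1.4)
n_photons = 50_000

arrival_times = []
for _ in range(n_photons):
    z, path, cos_theta = 0.0, 0.0, 1.0     # launch straight into the slab
    while True:
        step = rng.exponential(ell)
        z += step * cos_theta
        path += step
        if z < 0.0 or z > L:
            break                          # photon escaped out the back or the front
        if rng.random() < p_absorb:
            z = -1.0                       # mark as absorbed (counts as lost)
            break
        cos_theta = rng.uniform(-1.0, 1.0) # isotropic scattering direction (toy model)
    if z > L:
        arrival_times.append(path / c_tissue)   # transmitted: record its time of flight

arrival_times = np.array(arrival_times)
print(f"transmitted fraction: {len(arrival_times) / n_photons:.1e}")
print(f"ballistic crossing time: {L / c_tissue:.2f} ns; "
      f"median arrival of transmitted photons: {np.median(arrival_times):.1f} ns")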

Sure, point that laser at my head.  (Adapted from Figure 1 of this paper.)

I mention all of this to set the stage for this fun preprint, titled "Photon transport through the entire adult human head".  Sure, you think your head is opaque, but it only attenuates photon fluxes by a factor of around \(10^{18}\).  With 1 W of incident power at 800 nm wavelength spread out over a 25 mm diameter circle and pulsed 80 million times a second, time-resolved single-photon detectors like photomultiplier tubes can readily detect the many-times-scattered photons that straggle their way out of your head around 2 nanoseconds later.  (The distribution of arrival times contains a bunch of information.  Note that the speed of light in free space is around 30 cm/ns; even accounting for the index of refraction of tissue, those photons have bounced around a lot before getting through.)  The point of this is that those photons have passed through parts of the brain that are usually considered optically inaccessible.  This shows that one could credibly use spectroscopic methods to get information out of there, like blood oxygen levels.
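
Those numbers are fun to check at the back-of-the-envelope level.  Here is a quick sketch of the photon budget implied by the figures quoted above (my arithmetic, not the authors' analysis):

# Rough photon budget for 1 W of 800 nm light attenuated by ~1e18 through a head.
h = 6.626e-34            # Planck's constant, J*s
c = 3.0e8                # speed of light, m/s
wavelength = 800e-9      # m

photon_energy = h * c / wavelength           # ~2.5e-19 J per photon
incident_rate = 1.0 / photon_energy          # photons per second in a 1 W beam
attenuation = 1e18                           # overall attenuation quoted above
pulse_rate = 80e6                            # pulses per second

print(f"incident photons per second: {incident_rate:.1e}")
print(f"incident photons per pulse:  {incident_rate / pulse_rate:.1e}")
print(f"detected photons per second: ~{incident_rate / attenuation:.0f}")
print(f"i.e., roughly one detected photon per {attenuation * pulse_rate / incident_rate:.1e} pulses")

# Path-length check: arrival ~2 ns after the pulse, with light slowed by tissue (n ~ 1.4).
n_tissue = 1.4
path_cm = 2e-9 * (3e10 / n_tissue)           # 30 cm/ns in vacuum, divided by n
print(f"implied path length in tissue: ~{path_cm:.0f} cm, vs. a roughly 15-20 cm head")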