Wednesday, May 27, 2020

The National Science and Technology Foundation?

A proposal is being put in front of Congress that would reshape the National Science Foundation into the National Science and Technology Foundation.  The Senate bill is here, and the House equivalent bill is here.  The actual text of the Senate bill is here in pdf form.   In a nutshell, this "Endless Frontiers" bill (so named to echo the Vannevar Bush report that spurred the creation of the NSF in the first place) would do several things, including:
  • Create a Technology Directorate with its own advisory board (distinct from the National Science Board)
  • Identify ten key technology areas (enumerated in the bill, initially: (i) artificial intelligence and machine learning; (ii) high performance computing, semiconductors, and advanced computer hardware; (iii) quantum computing and information systems; (iv) robotics, automation, and advanced manufacturing; (v) natural or anthropogenic disaster prevention; (vi) advanced communications technology; (vii) biotechnology, genomics, and synthetic biology; (viii) cybersecurity, data storage, and data management technologies; (ix) advanced energy; and (x) materials science, engineering, and exploration relevant to the other key technology focus areas)
  • Allocate funds through program managers who may use peer review in an advisory role (so, more like DOD than traditional NSF)
  • Invest $100B over 5 years, with the idea that the rest of NSF would also go up, but this new directorate would get the bulk of the funding
This article at Science does a good job outlining all of this.  The argument is, basically, that the US is lagging in key areas and is not doing a good job translating basic science into technologies that ensure international primacy (with China being the chief perceived rival, though this is unstated in the bills, of course).  If this came to pass, and it's a big "if", it could fundamentally alter the character and mission of the NSF.  Seeing bipartisan congressional enthusiasm for boosting funding to the NSF is encouraging, but I think there are real hazards in pushing funding even further toward applications, particularly in a governance and funding-decision model that would look so different from traditional NSF.  

It's worth noting that people have been having these arguments for a long time.  Here is a 1980 (!) article from Science, back when a "National Technology Foundation" proposal was pending before Congress for exactly the same perceived reasons (poor translation of basic science into technology and business competitiveness, though then the Soviet Union was presumably the rival of concern).  The NSF has its own history that mentions this, and how this tension led to the creation of the modern Engineering Directorate within NSF.  

Interesting times.  Odds are this won't pass, but it's a sign of bipartisan concern about the US falling behind its technological rivals.

Wednesday, May 20, 2020

Yet more brief items

Between writing deadlines, battling with reviewer 3 (I kid, I kid), and trying to get set for the tentative beginnings of restarting on-campus research, it's been a busy time.  I really do hope to do more blogging soon (suggested topics are always appreciated), but for now, here are a few more brief items:
  • This expression of editorial concern about this paper was an unwelcome surprise.  Hopefully all will become clear.  Here is a statement by the quantum information science-related center at Delft.
  • I happened across this press release, pointing out that nVidia's new chip will contain 54 billion transistors (!) fabbed with a 7 nm process.  For reference, the "7 nm" there is a label describing particular fabrication processes using finFETs, and doesn't really correspond to a physical feature size of 7 nm.  I discussed this here before.  Still impressive.
  • There is a lot of talk about moving cutting-edge semiconductor fabrication plants back to the US.  Intel and parts of GlobalFoundries aside, a large fraction of high end chip volume is produced outside the US.  There have long been national security and intellectual property concerns about the overseas manufacturing of key technologies, and the US DOD has decided that bringing some of this capability back on-shore is safer and more secure.  I'm surprised it's taken this long, though the enormous capital cost in setting up a foundry explains why these things are often done by large consortia.  The pandemic has also shown that depending on overseas suppliers for just-in-time delivery of things may not be the smartest move.
  • Speaking of that, I can't help but wonder about the cycle of unintended consequences that we have in our economic choices.  I've ranted (way) before about how the way the stock market and corporate governance function these days has basically squeezed away most industrial basic research.  Those same attitudes gave us "just-in-time" manufacturing and somehow convinced generations of corporate management that simple things like warehouses and stockrooms were inherently bad.  "Why keep a stockroom around, when you can always order an M5 allen head bolt via the internet and get it shipped overnight from hundreds or thousands of miles away?" runs the argument, the same kind of bogus accounting that implies that the continued existence of a space in the Bell Labs parking lot used to cost Lucent $30K/yr.   So, companies got rid of inventory, got rid of local suppliers, and then were smacked hard by the double-whammy of a US-China trade war and a global pandemic.  Now we are being bombarded with breathless stories about how the pandemic and people working from home might mean the complete delocalization of work - a vision of people working from anywhere, especially places more financially sustainable than the Bay Area.  I'm all for telecommuting when it makes sense, and minimizing environmental impact, and affordable places to live.  That being said, it's hard not to feel like a truly extreme adoption of this idea is risky.  What if, heaven forbid, there's a big disruption to the communications grid, such as a Carrington Event?  Wouldn't that basically obliterate the ability of completely delocalized companies to function?  
  • To end on a much lighter note, these videos (1, 2, 3, 4) have been a positive product of the present circumstances, bringing enjoyment to millions.

Sunday, May 10, 2020

Brief items

Apologies for the slowed frequency of posting.  Academic and research duties have been eating a lot of bandwidth.  Here are a few items that may be of interest:

  • This article about Shoucheng Zhang is informative, but at the same time very sad.  Any geopolitics aside, he was an intense, driven person who put enormous pressure on himself.  It says something about self-perception under depression that he was concerned that he was somehow not being recognized.  
  • This paper caught my eye.  If you want to see whether there is some dependence of electronic conduction on the relative directions of a material's crystal axes and the current, it makes sense to fabricate a series of devices oriented in different directions.  These authors take a single epitaxial film of a material (in this case the unconventional superconductor Sr2RuO4) and carve it into a radial array of differently oriented strips of material with measurement electrodes.   They find that there do seem to be "easy" and "hard" directions for transport in the normal state that don't have an obvious relationship to the crystal symmetry directions.  A similar approach was taken here in a cuprate superconductor.  
  • I like the idea of making characterization tools broadly available for low cost - it's great for the developing world and potentially for use in public secondary education.  This work shows plans for a readily producible optical microscope that can have digital imaging, motorized sample positioning, and focusing for a couple of hundred dollars.  Fancier than the foldscope, but still very cool.  Time to think more about how someone could make a $100 electron microscope....
  • Here is a nice review article from the beginning of the year about spin liquids.
  • I was going to point out this article about ultralow temperature nanoelectronics back in March, but the pandemic distracted me.  From grad school I have a history in this area, and the progress is nice to see.  The technical challenges of truly getting electrons cold are formidable.

Thursday, April 30, 2020

On the flexural rigidity of a slice of pizza

Most people who eat pizza (not the deep-dish casserole style from Chicago, but normal pizza) have, without realizing it, developed an intuition for a key concept in elasticity and solid mechanics. 

I hope that all right-thinking people agree that pizza slice droop (left-hand image) is a problem to be avoided.  Cheese, sauce, and toppings are all in serious danger of sliding off the slice and into the diner's lap if the tip of the slice flops down.  Why does the slice tend to droop?   If you hold the edge of the crust and try to "cantilever" the slice out into space, the weight of the sauce/toppings exerts a downward force, and therefore a torque that tends to bend the crust downward.  

A simple way to avoid this problem is shown in the right-hand image (shamelessly stolen from here).  By bending the pizza slice, with a radius of curvature around an axis that runs from the crust to the slice tip, the same pizza slice becomes much stiffer against bending.   Why does this work?  Despite what the Perimeter Institute says here, I really don't think that differential geometry has much to do with this problem, except in the sense that there are constraints on what the crust can do if its volume is approximately conserved.  

The reason the curved pizza slice is stiffer turns out to be the same reason that an I-beam is stiffer than a square rod of the same cross-sectional area.  Imagine an I-beam with a heavy weight (its own, for example) that would tend to make it droop.  In drooping a tiny bit, the top of the I-beam would get stretched out, elongated along the \(z\) direction - it would be in tension.  The bottom of the I-beam would get squeezed, contracted along the \(z\) direction - it would be in compression.  Somewhere in the middle, the "neutral axis", the material would be neither stretched nor squeezed.  We can pick coordinates such that the line \(y=0\) is the neutral axis, and in the linear limit, the amount of stretching (strain) at a distance \(y\) away from the neutral axis would just be proportional to \(y\).  In the world of linear elasticity, the amount of restoring force per unit area ("normal stress") exhibited by the material is directly proportional to the amount of strain, so the normal stress \(\sigma_{zz} \propto y\).  If we add up all the little contributions of patches of area \(\mathrm{d}A\) to the restoring torque around the neutral axis, we get something proportional to \(\int y^2 \mathrm{d}A\).  The bottom line:  All other things being equal, "beams" with cross-sectional area far away from the neutral axis resist bending torques more than beams with area close to the neutral axis.
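That \(\int y^2 \mathrm{d}A\) quantity is the "second moment of area", and the I-beam claim is easy to check numerically.  Here's a minimal sketch; the dimensions are made up for illustration, chosen so the two cross-sections have equal area:

```python
# Second moment of area I = ∫ y^2 dA about the neutral axis (y = 0),
# comparing a square rod to an I-beam with the SAME cross-sectional area.
# All dimensions are made-up illustrative numbers, in millimeters.

def I_rect(width, height, y_offset=0.0):
    """I of a rectangle whose centroid sits y_offset above the neutral
    axis: own term w*h^3/12 plus the parallel-axis term A*y_offset^2."""
    return width * height**3 / 12.0 + (width * height) * y_offset**2

# Square rod: 30 mm x 30 mm  ->  area = 900 mm^2
I_square = I_rect(30.0, 30.0)

# I-beam with the same 900 mm^2: two 50 x 7.5 mm flanges + a 5 x 30 mm web
flange_w, flange_t = 50.0, 7.5
web_w, web_h = 5.0, 30.0
assert 2 * flange_w * flange_t + web_w * web_h == 900.0

# Each flange centroid sits (web_h/2 + flange_t/2) from the neutral axis.
y_flange = web_h / 2.0 + flange_t / 2.0
I_beam = 2.0 * I_rect(flange_w, flange_t, y_flange) + I_rect(web_w, web_h)

print(f"square rod: I = {I_square:.0f} mm^4")
print(f"I-beam:     I = {I_beam:.0f} mm^4  ({I_beam / I_square:.2f}x stiffer)")
```

The parallel-axis term \(A y^2\) does essentially all the work here: the flanges' own \(w t^3/12\) contributions are tiny, but their area sits far from the neutral axis.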

Now think of the pizza slice as a beam.  (We will approximate the pizza crust as a homogeneous elastic solid - not crazy, though really it's some kind of mechanical metamaterial carbohydrate foam.)  When the pizza slice is flat, the farthest that some constituent bit of crust can be from the neutral axis is half the thickness of the crust.  When the pizza slice is curved, however, much more of its area is farther from the neutral axis - the curved slice will then resist bending much better, even made from the same thickness of starchy goodness as the flat slice.
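The same integral can be done numerically for the curved crust, treating it as a thin strip of uniform thickness curled into a circular arc.  A rough sketch, with made-up dimensions:

```python
import math

# Compare ∫ (y - y_c)^2 dA for a flat thin strip versus the same strip
# curled into a circular arc.  Width, thickness, and bend radius below
# are made-up illustrative numbers (meters).

def I_flat_strip(width, thick):
    """Flat strip bending about its own mid-plane: I = w t^3 / 12."""
    return width * thick**3 / 12.0

def I_curved_strip(width, thick, radius, n=200_000):
    """Midpoint-rule integration of I = ∫ (y - y_c)^2 dA over a thin
    circular arc of arc length `width` and thickness `thick`."""
    alpha = width / (2.0 * radius)            # half-angle of the arc
    y_c = radius * math.sin(alpha) / alpha    # centroid height of the arc
    dphi = 2.0 * alpha / n
    total = 0.0
    for i in range(n):
        phi = -alpha + (i + 0.5) * dphi
        y = radius * math.cos(phi)
        total += (y - y_c) ** 2 * thick * radius * dphi   # dA = t R dphi
    return total

w, t, R = 0.10, 0.003, 0.12   # 10 cm wide, 3 mm thick, 12 cm bend radius
ratio = I_curved_strip(w, t, R) / I_flat_strip(w, t)
print(f"curved slice is ~{ratio:.0f}x stiffer than the flat one")
```

Even a gentle curl moves most of the cross-sectional area away from the neutral axis, which is why the trick works so well at the dinner table.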

(Behold the benefits of my engineering education.) 

Wednesday, April 22, 2020

Brief items

A few more links to help fill the time:
  • Steve Simon at Oxford has put his graduate solid state course lectures online on youtube, linked from here.  I'd previously linked to his undergrad solid state lectures.   Good stuff, and often containing fun historical anecdotes that I hadn't known before.
  • Nature last week had this paper demonstrating operations of Si quantum dot-based qubits at 1.5 K with some decent fidelity.  Neat, showing that long electron spin coherence times are indeed realizable in these structures at comparatively torrid conditions.
  • Speaking of quantum computing, it was reported that John Martinis is stepping down as lead of google's superconducting quantum computation effort (these folks).  I've always thought of him as an absolutely fearless experimentalist, and while no one is indispensable, his departure leads me to lower my expectations about google's progress.  Update:  Forbes has a detailed interview with Martinis about this.  It's a very interesting inside look.  
  • I'd never heard of "the Poynting effect" before, and I thought this write-up was very nice.

Sunday, April 19, 2020

This week in the arxiv - magnons

Ages ago I wrote a description of magnons, which I really should revise.  The ultra-short version:  Magnetically ordered materials are classified by long-ranged patterns of how electronic spins are arranged.  For example, in a (single domain) ferromagnet, the spins all point in the same direction, and it costs energy to perturb that arrangement by tipping a spin.  Classically one can define spin waves, where there is some spatially periodic perturbation of the spin orientation (described by some wave vector \(\mathbf{k}\)), and that perturbation then oscillates in time with frequency \(\omega(\mathbf{k})\), like any of a large number of wave-like phenomena.  In the quantum limit, one can talk about the energy of exciting a single magnon, \(\hbar \omega\).  One can use this language to talk about making wavepackets and propagating magnons to transport angular momentum.
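For a concrete example of such an \(\omega(\mathbf{k})\): the textbook result for a 1D Heisenberg ferromagnetic chain (exchange \(J\), spin \(S\), lattice spacing \(a\)) is \(\hbar\omega(k) = 4JS(1 - \cos ka)\), quadratic at small \(k\).  A quick numerical sketch, with made-up parameter values:

```python
import math

# Hedged sketch: spin-wave dispersion for a 1D Heisenberg ferromagnetic
# chain, hbar*omega(k) = 4*J*S*(1 - cos(k*a)).
# J, S, and a below are made-up illustrative values.
J = 1.0e-21          # exchange energy (joules)
S = 0.5              # spin per site
a = 3.0e-10          # lattice spacing (meters)

def magnon_energy(k):
    """Energy hbar*omega of one magnon at wavevector k (1/m)."""
    return 4.0 * J * S * (1.0 - math.cos(k * a))

# Long-wavelength limit: expanding the cosine gives E ≈ 2*J*S*(k*a)^2,
# the quadratic dispersion characteristic of a ferromagnet.
k = 1.0e8            # k*a = 0.03, deep inside the Brillouin zone
exact = magnon_energy(k)
quadratic = 2.0 * J * S * (k * a) ** 2
print(exact, quadratic)
```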

Two papers appeared on the arxiv this week, back-to-back, taking this to the next level.  In condensed matter physics some of the most powerful techniques, in terms of learning about material properties and how they emerge, involve scattering.  That is, taking some probe (say visible light, x-rays, electrons, or neutrons) with a well-defined energy and momentum, firing it at a target of interest, and studying the scattered waves to learn about the target.  A related approach involves interferometry, where the propagation of waves (detected through changes in amplitude and phase) is sensitive to the local environment.  

The two preprints (this one and this one) establish that it is now possible to use magnons in both approaches.  This will likely open up a new route for characterizing and understanding micro- and nanoscale magnetic materials, which will be extremely useful (since, as I had to explain to a referee on a paper several years ago, it's actually not possible to use neutron scattering to probe a few-micron wide, few nm thick piece of material.)  In the former paper, magnons in yttrium iron garnet (a magnetic insulator called YIG, not to be confused with Yig, the Father of Serpents) are launched toward and scattered from a patch of permalloy film, and the scattered waves are detected and imaged sensitively.  In the latter, propagation and interference of magnons in YIG waveguides is imaged.  The great enabling technology for both of these impressive experiments has been the development over the last decade or so in the use of nitrogen-vacancy centers in diamond as incredibly sensitive magnetometers.   Very pretty stuff.


Sunday, April 12, 2020

What are anyons?

Because of the time lag associated with scientific publishing, there are a number of cool condensed matter results coming out now in the midst of the coronavirus impact.  One in particular prompted me to try writing up something brief about anyons aimed at non-experts.  The wikipedia article is pretty good, but what the heck.

One of the subtlest concepts in physics is the idea of "indistinguishable particles".  The basic idea seems simple.  Two electrons, for example, are supposed to be indistinguishable.  There is no measurement you could do on two electrons that would find different properties (say size or charge or response to magnetic fields).  For example, I should be able to pop an electron out of a hydrogen atom and replace it with any other electron, and literally no measurement you could do would be able to tell the difference between the hydrogen atoms before and after such a swap.  The consequences of true indistinguishability are far-reaching even in classical physics.  In statistical mechanics, whether or not a collection of particles and that same collection with two particles swapped are really the same microscopic state is a big deal, with testable consequences.

In quantum mechanics, the situation is richer.  Let's imagine that the only parameter that matters is position.  (We are going to use position as shorthand to represent all of the quantum numbers associated with some particle.)  We can describe a two-particle system by some "state vector" (or wavefunction if you prefer) \( | \psi (\mathbf{r_{1}},\mathbf{r_{2}}) \rangle\), where the first vector is the position of particle 1 and the second is the position of particle 2.  Now imagine swapping the two particles.   After the swap, the state should be \( | \psi (\mathbf{r_{2}},\mathbf{r_{1}}) \rangle\).  The question is, how does that second state relate to the first state?  If the particles are truly indistinguishable, you'd think \( | \psi (\mathbf{r_{2}},\mathbf{r_{1}}) \rangle =  | \psi (\mathbf{r_{1}},\mathbf{r_{2}}) \rangle \). 

It turns out that that's not the only allowed situation.  One thing that must be true is that swapping the particles can't change the total normalization of the state (how much total stuff there is).  That restriction is written  \( \langle \psi (\mathbf{r_{2}},\mathbf{r_{1}}) | \psi (\mathbf{r_{2}},\mathbf{r_{1}}) \rangle = \langle \psi (\mathbf{r_{1}},\mathbf{r_{2}}) | \psi (\mathbf{r_{1}},\mathbf{r_{2}}) \rangle \).  If that's the most general restriction, then we can have other possibilities than the states before and after being identical.

For bosons, particles obeying Bose-Einstein statistics, the simple, intuitive situation does hold.  \( | \psi (\mathbf{r_{2}},\mathbf{r_{1}}) \rangle =  | \psi (\mathbf{r_{1}},\mathbf{r_{2}}) \rangle \).

For fermions, particles obeying Fermi-Dirac statistics, instead  \( | \psi (\mathbf{r_{2}},\mathbf{r_{1}}) \rangle = -  | \psi (\mathbf{r_{1}},\mathbf{r_{2}}) \rangle \).  This also preserves normalization, but has truly world-altering consequences.  This can only be satisfied for two particles at the same position if the state is identically zero.  This is what leads to the Pauli Principle and basically the existence of atoms and matter as we know them.
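It's worth seeing that "identically zero" explicitly.  Here's a toy sketch with two made-up 1D orbitals (purely illustrative, not from any particular system):

```python
import math

# Toy illustration: build an antisymmetrized two-fermion wavefunction
# from two made-up 1D "orbitals" and watch it vanish whenever the two
# positions coincide -- the Pauli principle in one line.

def phi_a(x):   # toy ground-state-like orbital
    return math.exp(-x**2 / 2.0)

def phi_b(x):   # toy excited-state-like orbital (odd in x)
    return x * math.exp(-x**2 / 2.0)

def psi_fermion(x1, x2):
    """Antisymmetrized combination: psi(x2, x1) = -psi(x1, x2)."""
    return phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)

print(psi_fermion(0.3, 1.1))   # generally nonzero
print(psi_fermion(1.1, 0.3))   # same magnitude, opposite sign
print(psi_fermion(0.7, 0.7))   # exactly zero at coincident positions
```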

In principle, you could have something more general than that.  For so-called "abelian anyons", you could have the situation  \( | \psi (\mathbf{r_{2}},\mathbf{r_{1}}) \rangle = (e^{i \alpha})  | \psi (\mathbf{r_{1}},\mathbf{r_{2}}) \rangle \), where \(\alpha\) is some phase angle.  Then bosons are the special case where \(\alpha = 0\) or some integer multiple of \(2 \pi\), and fermions are the special case where \(\alpha = \pi\) or some odd multiple of \(\pi\).   
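The bookkeeping is easy to play with: swapping twice multiplies the state by \(e^{2i\alpha}\), so only bosons and fermions come back to exactly the original amplitude after a double exchange, while the norm is preserved for any \(\alpha\).  A minimal sketch:

```python
import cmath

# Bookkeeping sketch: one exchange multiplies the two-particle amplitude
# by exp(i*alpha).  alpha = 0 -> bosons, alpha = pi -> fermions, anything
# else -> abelian anyons.  The norm is preserved for any alpha.

def swap_phase(amplitude, alpha):
    """Apply one particle exchange to a two-particle amplitude."""
    return amplitude * cmath.exp(1j * alpha)

amp = 1.0 + 0.0j
for name, alpha in [("boson", 0.0),
                    ("fermion", cmath.pi),
                    ("anyon, alpha = pi/3", cmath.pi / 3.0)]:
    once = swap_phase(amp, alpha)
    twice = swap_phase(once, alpha)
    print(f"{name:20s}  one swap: {once:.3f}   two swaps: {twice:.3f}")
```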

You might wonder, how would you ever pick up weird phase angles when particles are swapped in position?  This situation can arise for charged particles restricted to two dimensions in the presence of a magnetic field. The reason is rather technical, but it comes down to the fact that the vector potential \(\mathbf{A}\) leads to complex phase factors like the one above for charged particles.  
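For a flavor of how \(\mathbf{A}\) generates such factors, consider the Aharonov-Bohm phase: a charge \(q\) encircling a flux \(\Phi\) picks up the factor \(e^{iq\Phi/\hbar}\).  A tiny numerical sketch (the half-flux-quantum choice below is just for illustration):

```python
import cmath
import math

# Hedged sketch of the Aharonov-Bohm phase: a charge q encircling a
# magnetic flux Phi picks up the factor exp(i*q*Phi/hbar).
hbar = 1.054571817e-34    # J*s
q = 1.602176634e-19       # C (electron charge magnitude)
Phi0 = 2.0 * math.pi * hbar / q   # flux at which the phase winds by 2*pi

def ab_phase_factor(flux):
    """Phase factor for charge q encircling `flux` webers."""
    return cmath.exp(1j * q * flux / hbar)

# Half a flux quantum -> phase of pi -> a fermion-like sign flip.
print(ab_phase_factor(Phi0 / 2.0))
```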

This brings me to this paper.  Anyons have been deeply involved in describing the physics of the fractional quantum Hall effect for a long time  (see here for example).  It's tricky to get direct experimental evidence for the unusual phase factor, though.  The authors of this new paper have been basically doing a form of particle swapping via a scattering experiment, and looking at correlations in where the particles end up (via the noise, fluctuations in the relative currents).  They do indeed see what looks like nice evidence for the expected anyonic properties of a particular quantum Hall state.  

(There are also "nonabelian" anyons, but that is for another time.)