
Friday, December 20, 2024

Technological civilization and losing object permanence

In the grand tradition of physicists writing about areas outside their expertise, I wanted to put down some thoughts on a societal trend.  This isn't physics or nanoscience, so feel free to skip this post.

Object permanence is a term from developmental psychology.  A person (or animal) has object permanence if they understand that something still exists even if they can't directly see it or interact with it in the moment.  If a kid realizes that their toy still exists even though they can't see it right now, they've got the concept.  

I'm wondering if modern technological civilization has an issue with an analog of object permanence.  Let me explain what I mean, why it's a serious problem, and end on a hopeful note by pointing out that even if this is the case, we have the tools needed to overcome it.

By the standards of basically any previous era, a substantial fraction of humanity lives in a golden age.  We have a technologically advanced, globe-spanning civilization.  A lot of people (though geographically very unevenly distributed) have grown up with comparatively clean water; comparatively plentiful food available through means other than subsistence agriculture; electricity; access to radio, television, and for the last couple of decades nearly instant access to communications and a big fraction of the sum total of human factual knowledge.  

Whether it's just human nature or a consequence of relative prosperity, there seems to be some timescale on the order of a few decades over which a non-negligible fraction of even the most fortunate seem to forget the hard lessons that got us to this point.  If they haven't seen something with their own eyes or experienced it directly, they decide it must not be a real issue.  I'm not talking about Holocaust deniers or conspiracy theorists who think the moon landings were fake.  There are a bunch of privileged people who have never personally known a time when tens of thousands of their neighbors died from childhood disease (you know, like 75 years ago, when 21,000 Americans were paralyzed every year from polio (!), proportionately like 50,000 today), who now think we should get rid of vaccines, and maybe germs aren't real.  Most people alive today were not alive the last time nuclear weapons were used, so some of them argue that nuclear weapons really aren't that bad (e.g. setting off 2000 one megaton bombs spread across the US would directly destroy less than 5% of the land area, so we're good, right?).  Or, we haven't had massive bank runs in the US since the 1930s, so some people now think that insuring bank deposits is a waste of resources and should stop.  I'll stop the list here, before veering into even more politically fraught territory.  I think you get my point, though - somehow chunks of modern society seem to develop collective amnesia, as if problems that we've never personally witnessed must have been overblown before or don't exist at all.  (Interestingly, this does not seem to happen for most technological problems.  You don't see many people saying, you know, maybe building fires weren't that big a deal, let's go back to the good old days before smoke alarms and sprinklers.)  

While the internet has downsides, including the ability to spread disinformation very effectively, all the available and stored knowledge also has an enormous benefit:  It should make it much harder than ever before for people to collectively forget the achievements of our species.  Sanitation, pasteurization, antibiotics, vaccinations - these are absolutely astonishing technical capabilities that were hard-won and have saved many millions of lives.  It's unconscionable that we are literally risking mass death by voluntarily forgetting or ignoring that.  Nuclear weapons are, in fact, terrible.  Insuring bank deposits with proper supervision of risk is a key factor that has helped stabilize economies for the last century.  We need to remember historical problems and their solutions, and make sure that the people setting policy are educated about these things.  They say that those who cannot remember the past are doomed to repeat it.  As we look toward the new year, I hope that those who are familiar with the hard-earned lessons of history are able to make themselves heard over the part of the populace who simply don't believe that old problems were real and could return.



Sunday, December 15, 2024

Items for discussion, including google's latest quantum computing result

As we head toward the end of the calendar year, a few items:

  • Google published a new result in Nature a few days ago.  This made a big news splash, including this accompanying press piece from google themselves, this nice article in Quanta, and the always thoughtful blog post by Scott Aaronson.  The short version:  Physical qubits as made today in the superconducting platform favored by google don't have the low error rates that you'd really like if you want to run general quantum algorithms on a quantum computer, which could certainly require millions of steps.  The hope of the community is to get around this using quantum error correction, where some number of physical qubits are used to function as one "logical" qubit.  If physical qubit error rates are sufficiently low, and these errors can be corrected with enough efficacy, the logical qubits can function better than the physical qubits, ideally being able to undergo sequential operations indefinitely without degradation of their information.   One technique for this is called a surface code.  Google have implemented this in their most recent 105-physical-qubit chip ("Willow"), and they seem to have crossed a huge threshold:  When they increase the size of their correction scheme (going from a 3 (physical qubit) \(\times\) 3 (physical qubit) to 5 \(\times\) 5 to 7 \(\times\) 7), the error rates of the resulting logical qubits fall as hoped (a minimal numerical sketch of this scaling appears just after this list).  This is a big deal, as it implies that larger chips, if they could be implemented, should scale toward the desired performance.  This does not mean that general purpose quantum computers are just around the corner, but it's very encouraging.  There are many severe engineering challenges still in place.  For example, the present superconducting qubits must be tweaked and tuned.  The reason google only has 105 of them on the Willow chip is not that they can't fit more - it's that they have to have wires and control capacity to tune and run them.  A few thousand really good logical qubits would be needed to break RSA encryption, and there is no practical way to put millions of wires down a dilution refrigerator.  Rather, one will need cryogenic control electronics.
  • On a closely related point, google's article talks about how it would take a classical computer ten septillion years to do what its Willow chip can do.  This is based on a very particular choice of problem (as I mentioned here five years ago) called random circuit sampling, looking at the statistical properties of the outcomes of applying random gate sequences to a quantum computer.  From what I can tell, this is very different than what most people mean when they think of a problem to benchmark a quantum computer's advantage over a classical computer.  I suspect the typical tech-literate person considering quantum computing wants to know, if I ask a quantum computer and a classical computer to factor huge numbers or do some optimization problem, how much faster is the quantum computer for a given size of problem?  Random circuit sampling feels instead much more to me like comparing an experiment to a classical theory calculation.  For a purely classical analog, consider putting an airfoil in a wind tunnel and measuring turbulent flow, and comparing with a computational fluid dynamics calculation.  Yes, the wind tunnel can get you an answer very quickly, but it's not "doing" a calculation, from my perspective.  This doesn't mean random circuit sampling is a poor benchmark, just that people should understand it's rather different from the kind of quantum/classical comparison they may envision.
  • On one unrelated note:  Thanks to a timely inquiry from a reader, I have now added a search bar to the top of the blog.  (Just in time to capture the final decline of science blogging?)
  • On a second unrelated note:  I'd be curious to hear from my academic readers on how they are approaching generative AI, both on the instructional side (e.g., should we abandon traditional assignments and take-home exams?  How do we check to see if students are really learning vs. becoming dependent on tools that have dubious reliability?) and on the research side (e.g., what level of generative AI tool use is acceptable in paper or proposal writing?  What aspects of these tools are proving genuinely useful to PIs?  To students?  Clearly generative AI's ability to help with coding is very nice indeed!)
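
To make the surface-code point above a bit more concrete, here is a minimal sketch (in Python) of the usual way this scaling is parametrized, with a logical error rate per cycle falling like \(\Lambda^{-(d+1)/2}\) for code distance \(d\).  The prefactor and suppression factor \(\Lambda\) below are made-up illustrative numbers, not Google's measured values.

```python
# Minimal illustration of surface-code error suppression vs. code distance.
# epsilon_d ~ A / Lambda**((d+1)/2) is the standard parametrization; the
# numbers below (A, Lambda) are made up for illustration, not Google's data.

def logical_error_per_cycle(d, A=0.1, Lambda=2.0):
    """Logical error rate per error-correction cycle for code distance d."""
    return A / Lambda ** ((d + 1) / 2)

for d in (3, 5, 7, 9, 11):
    print(f"distance {d:2d}: logical error per cycle ~ {logical_error_per_cycle(d):.2e}")

# When Lambda > 1 (physical errors below threshold and correction working),
# each increase of d by 2 cuts the logical error rate by another factor of Lambda.
```

The whole game is getting \(\Lambda\) comfortably above 1; once it is, adding physical qubits to grow the code distance buys you exponential improvement in the logical error rate.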

Saturday, December 07, 2024

Seeing through your head - diffuse imaging

From the medical diagnostic perspective (and for many other applications), you can understand why it might be very convenient to be able to perform some kind of optical imaging of the interior of what you'd ordinarily consider opaque objects.  Even when a wavelength range is chosen so that absorption is minimized, photons can scatter many times as they make their way through dense tissue like a breast.  We now have serious computing power and extremely sensitive photodetectors, which has led to the development of techniques for imaging through media that absorb and diffuse photons.  Here is a review of this topic from 2005, and another more recent one (pdf link here).  There are many cool approaches that can be combined, including using pulsed lasers to do time-of-flight measurements (review here), and using "structured illumination" (review here).   

Sure, point that laser at my head.  (Adapted from Figure 1 of this paper.)

I mention all of this to set the stage for this fun preprint, titled "Photon transport through the entire adult human head".  Sure, you think your head is opaque, but it only attenuates photon fluxes by a factor of around \(10^{18}\).  With 1 Watt of incident power at 800 nm wavelength spread out over a 25 mm diameter circle and pulsed 80 million times a second, time-resolved single-photon detectors like photomultiplier tubes can readily detect the many-times-scattered photons that straggle their way out of your head around 2 nanoseconds later.  (The distribution of arrival times contains a bunch of information.  Note that the speed of light in free space is around 30 cm/ns; even accounting for the index of refraction of tissue, those photons have bounced around a lot before getting through.)  The point of this is that those photons have passed through parts of the brain that are usually considered inaccessible.  This shows that one could credibly use spectroscopic methods to get information out of there, like blood oxygen levels.
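
For a feel for the numbers, here's a back-of-the-envelope photon budget using only the figures quoted above (1 W average power at 800 nm, roughly \(10^{18}\) attenuation, 80 MHz pulse rate); it's an order-of-magnitude sketch, not a recreation of the preprint's analysis.

```python
# Back-of-the-envelope photon budget for light through the head, using the
# numbers quoted above (1 W average power at 800 nm, ~10^18 attenuation,
# 80 MHz pulse repetition rate).  Just an order-of-magnitude sketch.
h, c = 6.626e-34, 3.0e8           # Planck constant (J s), speed of light (m/s)
wavelength = 800e-9               # m
power = 1.0                       # W, average incident power
attenuation = 1e18                # quoted attenuation factor
rep_rate = 80e6                   # pulses per second

E_photon = h * c / wavelength                 # ~2.5e-19 J per photon
incident_rate = power / E_photon              # ~4e18 photons/s in
detected_rate = incident_rate / attenuation   # ~4 photons/s out
print(f"incident photons/s   : {incident_rate:.1e}")
print(f"transmitted photons/s: {detected_rate:.1f}")
print(f"pulses per detected photon: {rep_rate / detected_rate:.1e}")
# A few transmitted photons per second, i.e., roughly one detected photon per
# tens of millions of pulses - hence the need for time-resolved single-photon
# detectors and nanosecond-scale timing.
```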

Friday, November 29, 2024

Foams! (or, why my split pea side dish boils over every Thanksgiving)

Foams can be great examples of mechanical metamaterials.  

Adapted from TOC figure of this paper
Consider my shaving cream.  You might imagine that the (mostly water) material would just pool as a homogeneous liquid, since water molecules have a strong attraction for one another.  However, my shaving cream contains surfactant molecules.  These little beasties have a hydrophilic/polar end and a hydrophobic/nonpolar end.  The surfactant molecules can lower the overall energy of the fluid+air system by lowering the energy cost of the liquid/surfactant/air interface compared with the liquid/air interface.  There is a balancing act between air pressure, surface tension/energy, and gravity, but under the right circumstances you end up with the formation of a dense foam comprising many, many tiny bubbles.  On the macroscale (much larger than the size of individual bubbles), the foam can look like a very squishy but somewhat mechanically integral solid - it can resist shear, at least a bit, and maintain its own shape against gravity.  For a recent review about this, try this paper (apologies for the paywall) or get a taste of this in a post from last year.

What brought this to mind was my annual annoyance yesterday in preparing what has become a regular side dish at our family Thanksgiving.  That recipe begins with rinsing, soaking, and then boiling split peas in preparation for making a puree.  Every year, without fail, I try to keep a close eye on the split peas as they cook, because they tend to foam up.  A lot.  Interestingly, this happens regardless of how carefully I rinse them before soaking, and the foaming (a dense white foam of few-micron-scale bubbles) begins well before the liquid starts to boil.  I have now learned two things about this.  First, pea protein, which leaches out of the split peas, is apparently a well-known foam-inducing surfactant, as explained in this paper (which taught me that there is a journal called Food Hydrocolloids).  Second, next time I need to use a bigger pot and try adding a few drops of oil to see if that suppresses the foam formation.

Sunday, November 24, 2024

Nanopasta, no, really

Fig. 1 from the linked paper
Here is a light-hearted bit of research that touches on some fun physics.  As you might readily imagine, there is a good deal of interdisciplinary and industrial interest in wanting to create fine fibers out of solution-based materials.  One approach, which has historical roots that go back even two hundred years before this 1887 paper, is electrospinning.  Take a material of interest, dissolve it in a solvent, and feed a drop of that solution onto the tip of an extremely sharp metal needle.  Then apply a big voltage (say a few to tens of kV) between that tip and a nearby grounded substrate.  If the solution has some amount of conductivity, the liquid will form a cone on the tip, and at sufficiently large voltages and small target distances, the droplet will become unstable and form a jet off into the tip-target space.  With the right range of fluid properties (viscosity, conductivity, density, concentration) and the right evaporation rate for the solvent, the result is a continuously forming, drying fiber that flows off the end of the tip.  A further instability amplifies any curves in the fiber path, so that you get a spiraling fiber spinning off onto the substrate.   There are many uses for such fibers, which can be very thin.
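
If you want a sense of the fields involved, here's a crude scaling sketch: the droplet surface destabilizes roughly when the electrostatic (Maxwell) stress becomes comparable to the capillary (Laplace) pressure at the tip.  This is not the full Taylor cone analysis, and the surface tension and tip radius below are assumed, illustrative values.

```python
# Crude scaling sketch (not the full Taylor cone analysis): the liquid surface
# destabilizes roughly when the electrostatic (Maxwell) stress ~ eps0*E^2/2
# becomes comparable to the capillary (Laplace) pressure ~ 2*gamma/R of the
# droplet at the needle tip.  Numbers below are illustrative assumptions.
import math

eps0 = 8.854e-12      # F/m
gamma = 0.05          # N/m, assumed surface tension of the polymer solution
R = 0.25e-3           # m, assumed droplet/tip radius (~0.5 mm needle)

E_crit = math.sqrt(4 * gamma / (eps0 * R))   # field where the two stresses balance
print(f"critical field ~ {E_crit:.1e} V/m")
# ~1e7 V/m: consistent with needing tens of kV concentrated near a sub-mm tip
# (the sharp geometry enhances the field well above the nominal V / distance).
```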

The authors of the paper in question wanted to make fibers from starch, which is nicely biocompatible for medical applications.  So, starting from wheat flour and formic acid, they worked out viable parameters and were able to electrospin fibers of wheat starch (including some gluten - sorry, for those of you with gluten intolerances) into nanofibers 300-400 nm in diameter.  The underlying material is amorphous (so, no appreciable starch crystallization).  The authors had fun with this and called the result "nanopasta", but it may actually be useful for certain applications.


Friday, November 22, 2024

Brief items

 A few tidbits that I encountered recently:

  • The saga of Ranga Dias at Rochester draws to a close, as described by the Wall Street Journal.  It took quite some time for this to propagate through their system.  This is after multiple internal investigations that somehow were ineffective, an external investigation, and a lengthy path through university procedures (presumably because universities have to be careful not to shortcut any of their processes, or they open themselves up to lawsuits).
  • At around the same time, Mikhail Eremets passed away.  He was a pioneer in high pressure measurements of material properties and in superconductivity in hydrides.
  • Also coincident, this preprint appeared on the arXiv, a brief statement summarizing some of the evidence for relatively high temperature superconductivity in hydrides at high pressure.
  • Last week Carl Bender gave a very nice colloquium at Rice, where he spoke about a surprising result.  When we teach undergrad quantum mechanics, we tell students that the Hamiltonian (the expression with operators that gives the total energy of a quantum system) has to be Hermitian, because this guarantees that the energy eigenvalues have to be real numbers.  Generically, non-Hermitian Hamiltonians would imply complex energies, which would imply non-conservation of total probability. That is one way of treating open quantum systems, when particles can come and go, but for closed quantum systems, we like real energies.  Anyway, it turns out that one can write an explicitly complex Hamiltonian that nonetheless has a completely real energy spectrum, and this has deep connections to unbroken PT (parity-time) symmetry (a small numerical example follows this list).  Here is a nice treatment of this.
  • Just tossing this out:  The entire annual budget for the state of Arkansas is $6.5B.  The annual budget for Stanford University is $9.5B.  
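
Regarding the Bender item above, here is a small numerical example, using a standard 2\(\times\)2 toy model from the PT-symmetry literature (not the specific Hamiltonian from the colloquium): the matrix is manifestly non-Hermitian, yet its spectrum is entirely real in the PT-unbroken regime.

```python
# Minimal numerical example (a standard 2x2 toy model from the PT-symmetry
# literature): H is manifestly non-Hermitian, yet its eigenvalues are purely
# real as long as s**2 >= (r*sin(theta))**2.
import numpy as np

def pt_hamiltonian(r, s, theta):
    return np.array([[r * np.exp(1j * theta), s],
                     [s, r * np.exp(-1j * theta)]])

for s in (1.5, 0.3):   # unbroken vs. broken PT symmetry (r=1, theta=0.5 assumed)
    H = pt_hamiltonian(r=1.0, s=s, theta=0.5)
    evals = np.linalg.eigvals(H)
    print(f"s = {s}: Hermitian? {np.allclose(H, H.conj().T)}, eigenvalues = {np.round(evals, 4)}")

# For s = 1.5 the spectrum is real (PT-unbroken phase); for s = 0.3 the
# eigenvalues come in a complex-conjugate pair (PT-broken phase).
```
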
More soon.

Sunday, November 17, 2024

Really doing mechanics at the quantum level

A helpful ad from Science Made Stupid.
Since before the development of micro- and nanoelectromechanical techniques, there has been an interest in making actual mechanical widgets that show quantum behavior.  There is no reason that we should not be able to make a mechanical resonator, like a guitar string or a cantilevered beam, with a high enough resonance frequency so that when it is placed at low temperatures ( \(\hbar \omega \gg k_{\mathrm{B}}T\)), the resonator can sit in its quantum mechanical ground state.  Indeed, achieving this was Science's breakthrough of the year in 2010.  
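
To see why "high enough resonance frequency" matters, here's a quick estimate of the thermal occupation \(\bar{n} = 1/(\exp(\hbar \omega / k_{\mathrm{B}}T) - 1)\) of a harmonic mode; the 5 GHz and 10 mK numbers are typical-sounding assumptions, not values taken from any particular experiment.

```python
# Quick estimate of why "high enough resonance frequency" matters: the thermal
# occupation of a harmonic mode is n_bar = 1/(exp(hbar*omega/(kB*T)) - 1).
# The frequencies and temperatures below are illustrative assumptions.
import math

hbar = 1.0546e-34   # J s
kB = 1.3807e-23     # J/K

def n_thermal(f_Hz, T_K):
    x = hbar * 2 * math.pi * f_Hz / (kB * T_K)
    return 1.0 / math.expm1(x)

for f, T in [(5e9, 0.010), (5e9, 0.100), (10e6, 0.010)]:
    print(f"f = {f:.0e} Hz, T = {T*1e3:.0f} mK: n_bar ~ {n_thermal(f, T):.2e}")

# A 5 GHz mode at 10 mK has n_bar << 1 (essentially always in its ground state),
# while a 10 MHz mode at the same temperature is still highly thermally excited.
```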

This past week, a paper was published from ETH Zurich in which an aluminum nitride mechanical resonator was actually used as a qubit, where the ground and first excited states of this quantum (an)harmonic oscillator represented \(|0 \rangle\) and \(|1 \rangle\).  They demonstrate actual quantum gate operations on this mechanical system (which is coupled to a more traditional transmon qubit - the setup is explained in this earlier paper).  

One key trick to being able to make a qubit out of a mechanical oscillator is to have sufficiently large anharmonicity.  An ideal, perfectly harmonic quantum oscillator has an energy spectrum given by \((n + 1/2)\hbar \omega\), where \(n\) is the number of quanta of excitations in the resonator.  In that situation, the energy difference between adjacent levels is always \(\hbar \omega\).  The problem with this from the qubit perspective is, you want to have a quantum two-level system, and how can you controllably drive transitions just between a particular pair of levels when all of the adjacent level transitions cost the same energy?  The authors of this recent paper have achieved a strong anharmonicity, basically making the "spring" of the mechanical resonator softer in one displacement direction than the other.  The result is that the energy difference between levels \(|0\rangle\) and \(|1\rangle\) is very different than the energy difference between levels \(|1\rangle\) and \(|2\rangle\), etc.  (In typical superconducting qubits, the resonance is not mechanical but an electrical \(LC\)-type, and a Josephson junction acts like a non-linear inductor, giving the desired anharmonic properties.)  This kind of mechanical anharmonicity means that you can effectively have interactions between vibrational excitations ("phonon-phonon"), analogous to what the circuit QED folks can do.  Neat stuff.
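
As a toy illustration of why anharmonicity buys you a qubit, here's a short sketch that diagonalizes a Kerr-type oscillator in the number basis and compares adjacent level spacings.  The Kerr form is a generic stand-in, not the specific "asymmetric spring" mechanism of the ETH device, and the anharmonicity value is made up.

```python
# Toy illustration of why anharmonicity enables a qubit: diagonalize a
# Kerr-type oscillator H = w*n + (K/2)*n*(n-1) in the number basis and compare
# adjacent level spacings.  (A generic stand-in for anharmonicity, not the
# specific asymmetric-spring mechanism of the device discussed above.)
import numpy as np

N = 10                      # truncated Hilbert space dimension
w = 1.0                     # bare resonance (arbitrary units)
K = -0.05                   # Kerr anharmonicity (assumed value)

n = np.arange(N)
H = np.diag(w * n + 0.5 * K * n * (n - 1))   # already diagonal in this basis
E = np.sort(np.linalg.eigvalsh(H))

spacings = np.diff(E)
print("E1-E0, E2-E1, E3-E2:", np.round(spacings[:3], 4))

# The 0->1 transition (w) is detuned from 1->2 (w + K), so a drive resonant
# with 0->1 does not climb the ladder - the bottom two levels act as a qubit.
```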


Tuesday, November 05, 2024

Recent papers to distract....

Time for blogging has continued to be scarce, but here are a few papers to distract (and for readers who are US citizens:  vote if you have not already done so!).

  • Reaching back, this preprint by Aharonov, Collins, Popescu talks about a thought experiment in which angular momentum can seemingly be transferred from one region to another even though the probability of detecting spin-carrying particles between the two regions can be made arbitrarily low.  I've always found these kinds of discussions to be fun, even when the upshot for me is usually, "I must not really understand the subtleties of weak measurements in quantum mechanics."  This is a specific development based on the quantum Cheshire cat idea.  I know enough to understand that when one is talking about post-selection in quantum experiments, some questions are just not well-posed.  If we send a wavepacket of photons at a barrier, and we detect with a click a photon that (if it was in the middle of the incident wavepacket) seems to have therefore traversed the barrier faster than c, that doesn't mean much, since the italicized parenthetical clause above is uncheckable in principle.  
  • Much more recently, this paper out last week in Nature reports the observation of superconductivity below 200 mK in a twisted bilayer of WSe2.  I believe that this is the first observation of superconductivity in a twisted bilayer of an otherwise nonsuperconducting 2D semiconductor other than graphene.  As in the graphene case, the superconductivity shows up at a particular filling of the moiré lattice, and interestingly seems to happen around zero applied vertical electric field (displacement field) in the device.  I don't have much to say here beyond that it's good to see interesting results in a broader class of materials - that suggests that there is a more general principle at work than "graphene is special".
  • This preprint from last week from Klein et al. is pretty impressive.  It's been known for over 25 years (see here) that it is possible to use a single-electron transistor (SET) as a scannable charge sensor and potentiometer.  Historically, making these devices and operating them has been a real art.  They are fragile, static-sensitive, and fabricating them from evaporated metal on the tips of drawn optical fibers is touchy.  There have been advances in recent years from multiple quarters, and this paper demonstrates a particularly interesting idea: Use a single charge trap in a layer of WSe2 as the SET, and effectively put the sample of interest on the scannable tip.  This is an outgrowth of the quantum twisting microscope.

Sunday, October 20, 2024

Guide to faculty searches, 2024 edition

As you can tell from my posting frequency lately, I have been unusually busy.  I hope to be writing about more condensed matter and nano science soon.   In the meantime, I realized that I have not re-posted or updated my primer on how tenure-track faculty searches work in physics since 2015.  Academia hasn't changed much since then, but even though the previous posts can be found via search engines, it's probably a good idea to put this out there again.  Interestingly, here is a link to a Physics Today article from 2001 about this topic, and here is a link to the same author's 2020 updated version.

Here are the steps in the typical tenure-track faculty search process.  Non-tenure-track hiring can be very similar depending on the institution.  (Just to define the terminology:  "Teaching professor" usually = non-tenure-track, expected to teach several courses per semester, usually no expectations of research except perhaps education research, no lab space.  "Research professor" usually = non-tenure-track, research responsibilities and usually not expected to teach; often entirely paid on research grant funds, either their own or those of a tenure-track PI.)
  • The search gets authorized. This is a big step - it determines what the position is, exactly: junior only vs. open to junior or senior; a new faculty line vs. a replacement vs. a bridging position (i.e. we'll hire now, and when X retires in three years, we won't look for a replacement then). The main challenges are two-fold: (1) Ideally the department has some strategic plan in place to determine the area that they'd like to fill. Note that not all departments do this - occasionally you'll see a very general ad out there that basically says, "ABC University Dept. of Physics is authorized to search for a tenure-track position in, umm, physics. We want to hire the smartest person that we can, regardless of subject area." The challenge with this is that there may actually be divisions within the department about where the position should go, and these divisions can play out in a process where different factions within the department veto each other. This is pretty rare, but not unheard of. (2) The university needs to have the resources in place to make a hire.  In tight financial times, this can become more challenging. I know of public universities having to cancel searches in 2008/2009 even after the authorization when budget cuts got too severe. A well-run university will be able to make these judgments with some lead time and not have to back-track.
  • Note that some universities and colleges/schools within universities have other processes outside the traditional "department argues for and gets a faculty line to fill" method.  "Cluster hiring", for example, is when, say, the university decides to hire several faculty members whose research is all thematically related to "energy and sustainability", a broad topic that could clearly involve chemistry, physics, materials science, chemical engineering, electrical engineering, etc.  The logistics of cluster hiring can vary quite a bit from place to place.  I have opinions about the best ways to do this; one aspect that my own institution does well is to recognize that anyone hired has to have an actual primary departmental home - that way the tenure process and the teaching responsibilities are unambiguous.
  • The search committee gets put together. In my dept., the chair asks people to serve. If the search is in condensed matter, for example, there will be several condensed matter people on the committee, as well as representation from the other major groups in the department, and one knowledgeable person from outside the department (in chemistry or ECE, for example). The chairperson or chairpeople of the committee meet with the committee or at least those in the focus area, and come up with draft text for the ad.  In cross-departmental searches (as in the cluster hiring described above), a dean or equivalent would likely put together the committee.
  • The ad gets placed, and canvassing begins of lots of people who might know promising candidates. A committed effort is made to make sure that all qualified women and underrepresented minority candidates know about the position and are asked to apply (reaching out through relevant professional societies, social media, society mailing lists - this is in the search plan). Generally, the ad really does list what the department is interested in. It's a huge waste of everyone's time to have an ad that draws a large number of inappropriate (i.e. don't fit the dept.'s needs) applicants. The exception to this is the generic ad like the type I mentioned above. Back when I was applying for jobs, MIT and Berkeley ran the same ad every year, grazing for talent. They seem to do just fine. The other exception is when a university already knows who they want to get for a senior position, and writes an ad so narrow that only one person is really qualified. I've never seen this personally, but I've heard anecdotes.
  • In the meantime, a search plan is formulated and approved by the dean. The plan details how the search will work, what the timeline is, etc. This plan is largely a checklist to make sure that we follow all the right procedures and don't screw anything up. It also brings to the fore the importance of "beating the bushes" - see above. A couple of people on the search committee will be particularly in charge of oversight on affirmative action/equal opportunity issues.
  • The dean usually meets with the committee and we go over the plan, including a refresher for everyone on what is or is not appropriate for discussion in an interview (for an obvious example, you can't ask about someone's religion, or their marital status).
  • Applications come in.  This is all done electronically, thank goodness.  The fact that I feel this way tells you about how old I am.  Some online systems can be clunky, since occasionally universities try to use the same software to hire faculty as they do to hire groundskeepers, but generally things go smoothly.  The two most common software systems out there in the US are Interfolio and Academic Jobs Online.  Each has its own idiosyncrasies.  Every year when I post this, someone argues that it's ridiculous to make references write letters, and that the committee should do a sort first and ask for letters later.  I understand this perspective, but I tend to disagree. Letters can contain an enormous amount of information, and sometimes it is possible to identify outstanding candidates due to input from the letters that might otherwise be missed. (For example, suppose someone's got an incredible piece of postdoctoral work about to come out that hasn't been published yet. It carries more weight for letters to highlight this, since the candidate isn't exactly unbiased about their own forthcoming publications.)  
  • The committee begins to review the applications. Generally the members of the committee who are from the target discipline do a first pass, to at least weed out the inevitable applications from people who are not qualified according to the ad (i.e. no PhD; senior people wanting a senior position even though the ad is explicitly for a junior slot; people with research interests or expertise in the wrong area). Applications are roughly rated by everyone into a top, middle, and bottom category. Each committee member comes up with their own ratings, so there is naturally some variability from person to person. Some people are "harsh graders". Some value high impact publications more than numbers of papers. Others place more of an emphasis on the research plan, the teaching statement, or the rec letters. Yes, people do value the teaching statement - we wouldn't waste everyone's time with it if we didn't care. Interestingly, often (not always) the people who are the strongest researchers also have very good ideas and actually care about teaching. This shouldn't be that surprising. Creative people can want to express their creativity in the classroom as well as the lab.  "Type A" organized people often bring that intensity to teaching as well.
  • Once all the folders have been reviewed and rated, a relatively short list (say 20-25 or so out of 120 applications) is formed, and the committee meets to hash that down to, in the end, four or five to invite for interviews. In my experience, this happens by consensus, with the target discipline members having a bit more sway in practice since they know the area and can appreciate subtleties - the feasibility and originality of the proposed research, the calibration of the letter writers (are they first-rate folks? Do they always claim every candidate is the best postdoc they've ever seen?). I'm not kidding about consensus; I can't recall a case where there really was a big, hard argument within a committee on which I've served. I know I've been lucky in this respect, and that other institutions can be much more feisty. The best, meaning most useful, letters, by the way, are the ones that say things like "This candidate is very much like CCC and DDD were at this stage in their careers." Real comparisons like that are much more helpful than "The candidate is bright, creative, and a good communicator." Regarding research plans, the best ones (for me, anyway) give a good sense of near-term plans, medium-term ideas, and the long-term big picture, all while being relatively brief and written so that a general committee member can understand much of it (why the work is important, what is new) without being an expert in the target field. It's also good to know that, at least at my university, if we come across an applicant that doesn't really fit our needs, but meshes well with an open search in another department, we send over the file. This, like the consensus stuff above, is a benefit of good, nonpathological communication within the department and between departments.
That's pretty much it up to the interview stage. No big secrets. No automated ranking schemes based exclusively on h numbers or citation counts.  

Update:  As pointed out by a commenter, a relatively recent wrinkle is the use of zoom interviews.  Rather than inviting 5-ish candidates to campus for interviews, many places are now doing some zoom interviews with a larger pool (more like 10 candidates) and then down-selecting to a smaller number to invite to campus.   Making sure that the interview formats are identical across all the candidates (e.g., having scripts to make sure that the same questions are always asked in the same order) is one way to mitigate unintentional biases that can otherwise be present.

Tips for candidates:

  • Don't wrap your self-worth up in this any more than is unavoidable. It's a game of small numbers, and who gets interviewed where can easily be dominated by factors extrinsic to the candidates - what a department's pressing needs are, what the demographics of a subdiscipline are like, etc. Every candidate takes job searches personally to some degree because of our culture and human nature, but don't feel like this is some evaluation of you as a human being.
  • Don't automatically limit your job search because of geography unless you have some overwhelming personal reasons.  I almost didn't apply to Rice because neither my wife nor I were particularly thrilled about Texas, despite the fact that neither of us had ever actually visited the place. Limiting my search that way would've been a really poor decision - I've now been here 24+ years, and we've enjoyed ourselves (my occasional Texas politics blog posts aside).
  • Really read the ads carefully and make sure that you don't leave anything out. If a place asks for a teaching statement or a statement about mentoring or inclusion, put some real thought into what you say - they want to see that you have actually given this some thought, or they wouldn't have asked for it.
  • Proof-read cover letters and other documents.  Saying that you're very excited about the possibilities at University A when you sent that application to University B is a bit awkward.
  • Research statements are challenging because you need to appeal to both the specialists on the committee and the people who are way outside your area. My own research statement back in the day was around three pages. If you want to write a lot more, I recommend having a brief (2-3 page) summary at the beginning followed by more details for the specialists. It's good to identify near-term, mid-range, and long-term goals - you need to think about those timescales anyway. Don't get bogged down in specific technique details unless they're essential. You need committee members to come away from the proposal knowing "These are the Scientific Questions I'm trying to answer", not just "These are the kinds of techniques I know". I know that some people may think that research statements are more of an issue for experimentalists, since the statements indicate a lot about lab and equipment needs. Believe me - research statements are important for all candidates. Committee members need to know where you're coming from and what you want to do - what kinds of problems interest you and why. The committee also wants to see that you actually plan ahead. These days it's extremely hard to be successful in academia by "winging it" in terms of your research program.  I would steer clear of any use of AI help in writing any of the materials, unless it's purely at the "please check this for grammatical mistakes and typographical errors" level. 
  • Be realistic about what undergrads, grad students, and postdocs are each capable of doing. If you're applying for a job at a four-year college, don't propose to do work that would require $1.5M in startup and an experienced grad student putting in 60 hours a week.
  • Even if they don't ask for it explicitly, you need to think about what resources you'll need to accomplish your research goals. This includes equipment for your lab as well as space and shared facilities. Talk to colleagues and get a sense of what the going rate is for start-up in your area. Remember that four-year colleges do not have the resources of major research universities. Start-up packages at a four-year college are likely to be 1/4 of what they would be at a big research school (though there are occasional exceptions). Don't shave pennies - this is the one prime chance you get to ask for stuff! On the other hand, don't make unreasonable requests. No one is going to give a junior person a start-up package comparable to that of a mid-career scientist.
  • Pick letter-writers intelligently. Actually check with them that they're willing to write you a nice letter - it's polite and it's common sense. (I should point out that truly negative letters are very rare.) Beyond the obvious two (thesis advisor, postdoctoral mentor), it can sometimes be tough finding an additional person who can really say something about your research or teaching abilities. Sometimes you can ask those two for advice about this. Make sure your letter-writers know the deadlines and the addresses. The more you can do to make life easier for your letter writers, the better.
As always, more feedback in the comments is appreciated.

Tuesday, October 01, 2024

CHIPS and Science - the reality vs the aspiration

I already wrote about this issue here back in August, but I wanted to highlight a policy statement that I wrote with colleagues as part of Rice's Baker Institute's Election 2024: Policy Playbook, which "delivers nonpartisan, expert insights into key issues at stake on the 2024 campaign trail and beyond. Presented by Rice University and the Baker Institute for Public Policy, the series offers critical context, analysis, and recommendations to inform policymaking in the United States and Texas."

The situation is summarized in this graph.  It will be very difficult to achieve the desired policy goals of the CHIPS and Science Act if Congress doesn't come remotely close to appropriations that match the targets in the Act.  What is not shown in this plot are the cuts to STEM education pieces of NSF and other agencies, despite the fact that a main goal of the Act is supposed to be education and workforce development to support the semiconductor industry.

Anyway, please take a look.  It's a very brief document.

Sunday, September 29, 2024

Annual Nobel speculation thread

Not that prizes are the be-all and end-all, but this has become an annual tradition.  Who are your speculative laureates this year for physics and chemistry?  As I did last year and for several years before, I will put forward my usual thought that the physics prize could be Aharonov and Berry for geometric phases in physics (even though Pancharatnam is intellectually in there and died in 1969).  This is a long shot, as always. Given that attosecond experiments were last year, and AMO/quantum info foundations were in 2022, and climate + spin glasses/complexity were 2021, it seems like astro is "due".   

Sunday, September 22, 2024

Lots to read, including fab for quantum and "Immaterial Science"

Sometimes there are upticks in the rate of fun reading material.  In the last few days:

  • A Nature paper has been published by a group of authors predominantly from IMEC in Belgium, in which they demonstrate CMOS-compatible manufacturing of superconducting qubit hardware (Josephson junctions, transmon qubits, based on aluminum) across 300 mm diameter wafers.  This is a pretty big deal - their method for making the Al/AlOx/Al tunnel junctions is different than the shadow evaporation method routinely used in small-scale fab.  They find quite good performance of the individual qubits with strong uniformity across the whole wafer, testing representative random devices.  They did not actually do multi-qubit operations, but what they have shown is certainly a necessary step if there is ever going to be truly large-scale quantum information processing based on this kind of superconducting approach.
  • Interestingly, Friday on the arXiv, a group led by researchers at Karlsruhe demonstrated spin-based quantum dot qubits in Si/SiGe, made on 300 mm substrates.  This fab process comes complete with an integrated Co micromagnet for help in conducting electric dipole spin resonance.  They demonstrate impressive performance in terms of single-qubit properties and operations, with the promise that the coherence times would be at least an order of magnitude longer if they had used isotopically purified 28Si material.  (The nuclear spins of the stray 29Si atoms in the ordinary Si used here are a source of decoherence.)  
So, while tremendous progress has been made with atomic physics approaches to quantum computing (tweezer systems like this, ion trapping), it's not wise to count out the solid-state approaches.  The engineering challenges are formidable, but solid-state platforms are based on fab approaches that can make billions of transistors per chip, with complex 3D integration.

  • On the arXiv this evening is also this review about "quantum geometry", which seems like a pretty readable overview of how the underlying structure of the wavefunctions in crystalline solids (the part historically neglected for decades, but now appreciated through its relevance to topology and a variety of measurable consequences) affects electronic and optical response.  I just glanced at it, but I want to make time to look it over in detail.
  • Almost 30 years ago, Igor Dolgachev at Michigan did a great service by writing up a brief book entitled "A Brief Introduction to Physics for Mathematicians".  That link is to the pdf version hosted on his website.  Interesting to see how this is presented, especially since a number of approaches routinely shown to undergrad physics majors (e.g., almost anything we do with Dirac delta functions) generally horrify rigorous mathematics students.
  • Also fun (big pdf link here) is the first fully pretty and typeset issue of the amusing Journal of Immaterial Science, shown at right.  There is a definite chemistry slant to the content, and I encourage you to read their (satirical) papers as they come out on their website.


Monday, September 16, 2024

Fiber optics + a different approach to fab

 Two very brief items of interest:

  • This article is a nice popular discussion of the history of fiber optics and the remarkable progress it's made for telecommunications.  If you're interested in a more expansive but very accessible take on this, I highly recommend City of Light by Jeff Hecht (not to be confused with Eugene Hecht, author of the famous optics textbook).
  • I stumbled upon an interesting effort by Yokogawa, the Japanese electronics manufacturer, to provide an alternative path for semiconductor device prototyping that they call minimal fab.  The idea is, instead of prototyping circuits on 200 mm wafers or larger (the industry standard for large-scale production is 200 mm or 300 mm; efforts to go up to 450 mm wafers have been shelved for now), there are times when it makes sense to work on 12.5 mm substrates.  Their setup uses maskless photolithography and is intended to be used without needing a cleanroom.  Admittedly, this strongly limits feature sizes to 1970s-era micron scales (presumably this could be pushed to 1-2 micron with a fancier litho tool), and it's designed for single-layer processing (not many-layer alignments with vias).  Still, this could be very useful for startup efforts, and apparently it's so simple that a child could use it.

Saturday, September 07, 2024

Seeing through tissue and Kramers-Kronig

There is a paper in Science this week that is just a great piece of work.  The authors find that by dyeing living tissue with a particular biocompatible dye molecule, they can make that tissue effectively transparent, so you can see through it.  The paper includes images (and videos) that are impressive. 
Seeing into a living mouse, adapted from here.

How does this work?  There are a couple of layers to the answer.  

Light scatters at the interface between materials with dissimilar optical properties (summarized mathematically as the frequency-dependent index of refraction, \(n\), related to the complex dielectric function \(\tilde{\epsilon}\).   Light within a material travels with a phase velocity of \(c/n\).).  Water and fatty molecules have different indices, for example, so little droplets of fat in suspension scatter light strongly, which is why milk is, well, milky.  This kind of scattering is mostly why visible light doesn't make it through your skin very far.  Lower the mismatch between indices, and you turn down scattering at the interfaces.  Here is a cute demo of this that I pointed out about 15 (!) years ago:


Frosted glass scatters visible light well because it has surface bumpiness on the scale of the wavelength of visible light, and the index of refraction of glass is about 1.5 for visible light, while air has an index close to 1.  Fill in those bumps with something closer to the index of glass, like clear plastic packing tape, and suddenly you can see through frosted glass.  

In the dyed tissue, the index of refraction of the water-with-dye becomes closer to that of the fatty molecules that make up cell membranes, making that layer of tissue have much-reduced scattering, and voilà, you can see a mouse's internal organs.  Amazingly, this index matching idea is the plot device in HG Wells' The Invisible Man!

The physics question is then, how and why does the dye, which looks yellow and absorbs strongly in the blue/purple, change the index of refraction of the water in the visible?  The answer lies with a concept that very often seems completely abstract to students, the Kramers-Kronig relations.  

We describe how an electric field (from the light) polarizes a material using the frequency-dependent complex permittivity \(\tilde{\epsilon}(\omega) = \epsilon'(\omega) + i \epsilon''(\omega)\), where \(\omega\) is the frequency.  What this means is that there is a polarization that happens in-phase with the driving electric field (proportional to the real part of \(\tilde{\epsilon}(\omega)\)) and a polarization that lags or leads the phase of the driving electric field (the imaginary part, which leads to dissipation and absorption).   

The functions \(\epsilon'(\omega)\) and \(\epsilon''(\omega)\) can't be anything you want, though. Thanks to causality, the response of a material now can only depend on what the electric field has done in the past.  That restriction means that, when we decide to work in the frequency domain by Fourier transforming, there are relationships, the K-K relations, that must be obeyed between integrals of \(\epsilon'(\omega)\) and \(\epsilon''(\omega)\).  The wikipedia page has both a traditional (and to many students, obscure) derivation, as well as a time-domain picture.  

So, the dye molecules, with their very strong absorption in the blue/purple, make \(\epsilon''(\omega)\) really large in that frequency range.  The K-K relations require some compensating changes in \(\epsilon'(\omega)\) at lower frequencies to make up for this, and the result is the index matching described above.  
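
If you want to see this at work numerically, here's a small sketch: take a model absorption peak for \(\epsilon''(\omega)\) (a Lorentzian standing in for the dye's blue absorption band) and evaluate the K-K integral for \(\epsilon'(\omega) - 1\) at lower frequencies.  The line parameters are illustrative, and the principal-value handling is deliberately crude.

```python
# Numerical sketch of the Kramers-Kronig idea: take a model absorption peak
# eps''(w) (a Lorentzian centered at high frequency, standing in for the dye's
# blue absorption band) and compute eps'(w) - 1 via the K-K integral
#   eps'(w) - 1 = (2/pi) P-int[ w' eps''(w') / (w'^2 - w^2) dw' ].
# Illustrative numbers only; crude principal-value handling on a grid.
import numpy as np

w = np.linspace(0.01, 3.0, 3000)          # frequency grid (arbitrary units)
w0, gamma, A = 2.0, 0.1, 0.5              # assumed absorption line parameters
eps_imag = A * gamma * w / ((w0**2 - w**2)**2 + (gamma * w)**2)

def kk_real(w_eval, w_grid, eps2):
    """eps'(w_eval) - 1 from eps'' via a crude principal-value sum."""
    dw = w_grid[1] - w_grid[0]
    integrand = w_grid * eps2 / (w_grid**2 - w_eval**2)
    integrand[np.abs(w_grid - w_eval) < dw] = 0.0    # crude P.V.: drop the pole
    return (2.0 / np.pi) * np.sum(integrand) * dw

for w_eval in (0.8, 1.2, 1.6):            # frequencies below the absorption peak
    print(f"w = {w_eval}: eps' - 1 ~ {kk_real(w_eval, w, eps_imag):+.4f}")

# Below the absorption line, eps' (and hence the refractive index) is pulled up
# by the strong eps'' at w0 - the same effect the dye exploits for index matching.
```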

This work seems like it should have important applications in medical imaging, and it's striking to me that this had not been done before.  The K-K relations have been known in their present form for about 100 years.  It's inspiring that new, creative insights can still come out of basic waves and optics.

Saturday, August 31, 2024

Items of interest

The start of the semester has been very busy, but here are some items that seem interesting:

  • As many know, there has been a lot of controversy in recent years about high pressure measurements of superconductivity.  Here is a first-hand take by one of the people who helped bring the Dias scandal into the light.  It's a fascinating if depressing read.
  • Related, a major challenge in the whole diamond anvil cell search for superconductivity is trying to apply techniques more robust and definitive than 4-point resistance measurements and optical spectroscopy.  Back in March I had pointed out a Nature paper incorporating nitrogen-vacancy centers into the diamond anvils themselves to attempt in situ magnetometry of the Meissner effect.  Earlier this month, I saw this Phys Rev Lett paper, in which the authors have incorporated a tunnel junction directly onto the diamond anvil facet.  In addition to the usual Au leads for conduction measurements, they also have Ta leads that are coated with a native Ta2O5 oxide layer that functions as a tunnel barrier.  They've demonstrated clean-looking tunneling spectroscopy on sulphur at 160 GPa, which is pretty impressive.  Hopefully this will eventually be applied to the higher pressures and more dramatic systems of, e.g., H2S, reported to show 203 K superconductivity.  I do wonder if they will have problems applying this to hydrides, as one could imagine that having lots of hydrogen around might not be good for the oxide tunnel barriers. 
  • Saw a talk this week by Dr. Dev Shenoy, head of the US DoD's microelectronics effort.  It was very interesting and led me down the rabbit hole of learning more about the extreme ultraviolet lithography machines that are part of the state of the art.  The most advanced of these are made by ASML, are as big as a freight car, and cost almost $400M apiece.  Intel put up a video about taking delivery of one.  The engineering is pretty ridiculous.  Working with 13.5 nm light, you have to use mirrors rather than lenses, and the flatness/precision requirements on the optics are absurd.  It would really be transformative if someone could pull a SpaceX and come up with an approach that works as well but only costs $50M per machine, say.  (Of course, if it were easy, someone would have done it.  I'm also old enough to remember Bell Labs' effort at a competing approach, projective electron beam lithography.)
  • Lastly, Dan Ralph from Cornell has again performed a real pedagogical service to the community.  A few years ago, he put on the arXiv a set of lecture notes about the modern topics of Berry curvature and electronic topology meant to slot into an Ashcroft and Mermin solid state course.  Now he has uploaded another set of notes, this time on electron-electron interactions, the underpinnings of magnetism, and superconductivity, that again are at the right level to modernize and complement that kind of a course.  Highly recommended.

Saturday, August 17, 2024

Experimental techniques: bridge measurements

When we teach undergraduates about materials and measuring electrical resistance, we tend to gloss over the fact that there are specialized techniques for this - it's more than just hooking up a battery and an ammeter.  If you want to get high precision results, such as measuring the magnetoresistance \(\Delta R(B)\), where \(B\) is a magnetic field, to a part in \(10^{5}\) or better, more sophisticated tools are needed.  Bridge techniques comprise one class of these, where instead of, say, measuring the voltage drop across a sample with a known current, you measure the difference between that voltage drop and the voltage drop across a known reference resistor.   

Why is this good?  Well, imagine that your sample resistance is something like 1 kOhm, and you want to look for changes in that resistance on the order of 10 milliOhms.  Often we need to use relatively low currents because in condensed matter physics we are doing low temperature measurements and don't want to heat up the sample.  If you used 1 microAmp of current, then the voltage drop across the sample would be about 1 mV and the changes you're looking for would be 10 nV, which is very tough to measure on top of a 1 mV background.  If you had a circuit where you were able to subtract off that 1 mV and only look at the changes, this is much more do-able.
Wheatstone bridge, from wikipedia

Sometimes in undergrad circuits, we teach the Wheatstone bridge, shown at right.  The idea is, you dial around the variable resistor \(R_{2}\) until the voltage \(V_{G} = 0\).  When the bridge is balanced like this, that means that \(R_{2}/R_{1} = R_{x}/R_{3}\), where \(R_{x}\) is the sample you care about and \(R_{1}\) and \(R_{3}\) are reference resistors that you know.  Now you can turn up the sensitivity of your voltage measurement to be very high, since you're looking at deviations away from \(V_{G} = 0\).   
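
Here's a minimal numerical sketch of why the bridge helps, using the numbers from above (a 1 kOhm sample, roughly 1 \(\mu\)A of excitation, 10 mOhm changes); the resistor labels follow the code, not necessarily the figure.

```python
# Minimal sketch of the payoff of a bridge measurement, using the numbers from
# the paragraphs above (1 kOhm sample, ~1 uA excitation, 10 mOhm changes).
# Resistor labels follow this code, not necessarily the figure: R1/R2 form the
# reference divider and R3/Rx the sample divider, balanced when R2/R1 = Rx/R3.

def bridge_output(Vs, R1, R2, R3, Rx):
    """Voltage between the two divider midpoints of a Wheatstone bridge."""
    return Vs * (Rx / (R3 + Rx) - R2 / (R1 + R2))

R1 = R2 = R3 = 1000.0          # ohms, reference arms
Rx0 = 1000.0                   # ohms, nominal sample resistance
Vs = 2e-3                      # volts, chosen so the sample current is ~1 uA

print(f"balanced output : {bridge_output(Vs, R1, R2, R3, Rx0):.2e} V")
print(f"with +10 mOhm   : {bridge_output(Vs, R1, R2, R3, Rx0 + 0.01):.2e} V")

# The raw voltage across the sample is ~1 mV, but the bridge output moves only
# ~5 nV for a 10 mOhm change - and that tiny signal now sits on top of ~zero
# background, which is exactly what a sensitive detector can dig out.
```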

You can do better in sensitivity by using an AC voltage source instead of the battery shown, and then use a lock-in amplifier for the voltage detection across the bridge.  That helps avoid some slow, drift-like confounding effects or thermoelectric voltages. 

Less well-known:  Often in condensed matter and nanoscale physics, the contact resistances where the measurement leads are attached aren't negligible.  If we are fortunate we can set up a four-terminal measurement that mitigates this concern, so that the voltage measured on the sample is ideally not influenced by the contacts where current is injected or collected.  
A Kelvin bridge, from wikipedia

Is there a way to do a four-terminal bridge measurement?  Yes, it's called a Kelvin bridge, shown at right in its DC version.  When done properly, you can use variable resistors to null out the contact resistances.  This was originally developed back in the late 19th/early 20th century to measure resistances smaller than an Ohm or so (and so even small contact resistances can be relevant).  In many solid state systems, e.g., 2D materials, contact resistances can be considerably larger, so this comes in handy even for larger sample resistances.  

There are also capacitance bridges and inductance bridges - see here for something of an overview.  A big chunk of my PhD involved capacitance bridge measurements to look at changes in the dielectric response with \(10^{-7}\) levels of sensitivity.

One funny story to leave you:  When I was trying to understand all about the Kelvin bridge while I was a postdoc, I grabbed a book out of the Bell Labs library about AC bridge techniques that went back to the 1920s.  The author kept mentioning something cautionary about looking out for "the head effect".  I had no idea what this was; the author was English, and I wondered whether this was some British/American language issue, like how we talk about electrical "ground" in the US, but in the UK they say "earth".  Eventually I realized what this was really about.  Back before lock-ins and other high sensitivity AC voltmeters were readily available, it was common to run an AC bridge at a frequency of something like 1 kHz, and to use a pair of headphones as the detector.  The human ear is very sensitive, so you could listen to the headphones and balance the bridge until you couldn't hear the 1 kHz tone anymore (meaning the AC \(V_{G}\) signal on the bridge was very small).  The "head effect" is when you haven't designed your bridge correctly, so that the impedance of your body screws up the balance of the bridge when you put the headphones on.  The "head effect" = bridge imbalance because of the capacitance or inductance of your head.  See here.

Sunday, August 04, 2024

CHIP and Science, NSF support, and hypocrisy

Note: this post is a semi-rant about US funding for science education; if this isn't your cup of tea, read no further.


Two years ago, the CHIPS and Science Act (link goes to the full text of the bill, via the excellent congress.gov service of the Library of Congress) was signed into law.  This has gotten a lot of activity going in the US related to the semiconductor industry, as briefly reviewed in this recent discussion on Marketplace.  There are enormous investments by industry in semiconductor development and manufacturing in the US (as well as funding through US agencies such as DARPA).  It was recognized in the act that the long-term impact of all of this will be contingent in part upon "workforce development" - having ongoing training and education of cohorts of people who can actually support all of this.  The word "workforce" shows up 222 times in the actual bill.   Likewise, there is appreciation that basic research is needed to set up sustained success and competitiveness - that's one reason why the act authorizes $81B over five years for the National Science Foundation, which would have roughly doubled the NSF budget over that period.

The reality has been sharply different.  Authorizations are not the same thing as appropriations, and the actual appropriation last year fell far short of the aspirational target.  NSF's budget for FY24 was $9.085B (see here) compared with $9.899B for FY23; the STEM education piece was $1.172B in FY24 (down from $1.371B in FY23), a roughly 14.5% year-over-year reduction.  That's even worse than the House version of the budget, which had proposed cutting STEM education by 12.8%.  In the current budget negotiations (see here), the House is now proposing an additional 14.7% cut specifically to STEM education.  Just to be clear, that is the part of NSF's budget that is supposed to oversee the workforce development parts of CHIPS and Science.  Specifically, the bill says that the NSF is supposed to support "undergraduate scholarships, including at community colleges, graduate fellowships and traineeships, postdoctoral awards, and, as appropriate, other awards, to address STEM workforce gaps, including for programs that recruit, retain, and advance students to a bachelor's degree in a STEM discipline concurrent with a secondary school diploma, such as through existing and new partnerships with State educational agencies."  This is also the part of NSF that runs programs like Research Experiences for Undergraduates and Research Experiences for Teachers, as well as postdoctoral fellowships.  

Congressional budgeting in the US is insanely complicated and fraught for many reasons.  Honest, well-motivated people can have disagreements about priorities and appropriate levels of government spending.  That said, I think it is foolish not to support the educational foundations needed for the large investments in high tech manufacturing and infrastructure.  The people who oppose this kind of STEM education support tend to be the same people who also oppose allowing foreign talent into the country in high tech sectors.  If the US is serious about this kind of investment for future tech competitiveness, half-measures and failing to follow through are decidedly not helpful.

Sunday, July 28, 2024

Items of interest

 A couple of interesting papers that I came across this week:

  • There has long been an interest in purely electronic cooling techniques (no moving parts!) that would work at cryogenic temperatures.  You're familiar with ordinary evaporative cooling - that's what helps cool down your tea or coffee when you blow across the top of your steaming mug, and it's what makes you feel cold when you step out of the shower.  In evaporative cooling, the most energetic molecules escape from the liquid into the gas phase, and the molecules left behind reestablish thermal equilibrium at a lower temperature (a toy numerical illustration of this is sketched just after this list).  One can make a tunnel junction between a normal metal and a superconductor, and under the right circumstances, the hottest (thermally excited) electrons in the normal metal can be driven into the superconductor, leading to net cooling of the electrons remaining in the normal metal.  This is pretty neat, but it's had somewhat limited utility due to relatively small cooling power - here is a non-paywalled review that includes discussion of these approaches.  This week, the updated version of this paper went on the arXiv, demonstrating that in Al/AlOx/Nb junctions it is possible to cool from about 2.4 K to about 1.6 K, purely via electronic means.  This seems like a nice advance, especially as quantum information efforts have pushed hard on improving wafer-level Nb electronics.
  • I've written before about chirality-induced spin selectivity (see the first bullet here).  This is a still poorly understood phenomenon in which electrons passing through a chiral material acquire a net spin polarization, depending on the handedness of the chirality and the direction of the current.  This new paper in Nature is a great demonstration.  Add a layer of chiral perovskite to the charge injection path of a typical III-V multiple quantum well semiconductor LED, and the outgoing light acquires a net circular polarization, the sign of which depends on the sign of the chirality.  This works at room temperature, by the way.  
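As promised in the first bullet, here's a toy numerical illustration of evaporative cooling (a classical ideal-gas caricature in arbitrary units, not a model of the actual normal-metal/superconductor junction): remove the most energetic particles from a thermal distribution and the mean energy of what's left drops.

```python
import numpy as np

# Toy model of evaporative cooling: start with a thermal (Maxwell-Boltzmann)
# energy distribution, remove the most energetic particles, and see that the
# mean energy per remaining particle (a proxy for temperature) drops.
# Arbitrary units; illustration only.
rng = np.random.default_rng(0)

kT = 1.0                     # initial temperature (energy units, k_B = 1)
N = 1_000_000
# For a 3D classical ideal gas, kinetic energies follow a gamma distribution
# with shape parameter 3/2 and scale k_B T.
energies = rng.gamma(shape=1.5, scale=kT, size=N)

T_initial = energies.mean() / 1.5          # <E> = (3/2) k_B T
print("initial T:", T_initial)             # ~1.0

# "Evaporate" everything above a cutoff (the hottest particles escape).
cutoff = 3.0 * kT
remaining = energies[energies < cutoff]

# If the remaining gas rethermalizes, its temperature is set by its mean energy.
T_final = remaining.mean() / 1.5
print("fraction removed:", 1 - remaining.size / N)
print("final T:", T_final)                 # noticeably below 1.0
```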

Saturday, July 20, 2024

The physics of squeaky shoes

In these unsettling and trying times, I wanted to write about the physics of a challenge I'm facing in my professional life: super squeaky shoes.  When I wear a particularly comfortable pair of shoes at work and walk down some hallways in my building (but not others), my shoes squeak very loudly with every step.  How and why does this happen, physically?  

The shoes in question.

To understand this, we need to talk a bit about friction, the sideways interfacial force between two surfaces when one surface is sheared (or someone tries to shear it) with respect to the other.  (Tribology is the study of friction, btw.)  In introductory physics we teach some (empirical) "laws" of friction, described in detail on the wikipedia page linked above as well as here:

  1.  For static friction (no actual sliding of the surfaces relative to each other), the frictional force \(F_{f} \le \mu_{s}N\), where \(\mu_{s}\) is the "coefficient of static friction" and \(N\) is the normal force (pushing the two surfaces together).  The force is directed in the plane and takes on the magnitude needed so that no sliding happens, up to its maximum value, at which point the surfaces start slipping relative to each other.
  2. For sliding or kinetic friction, \(F_{f} = \mu_{k}N\), where \(\mu_{k}\) is the coefficient of kinetic or sliding friction, and the force is directed in the plane to oppose the relative sliding motion.  The friction coefficients depend on the particular materials and their surface conditions.
  3. The friction forces are independent of the apparent contact area between the surfaces.  
  4. The kinetic friction force is independent of the relative sliding speed between the surfaces.
These "laws", especially (3) and (4), are truly weird once we know a bit more about physics, and I discuss this a little in my textbook.  The macroscopic friction force is emergent, meaning that it is a consequence of the materials being made up of many constituent particles interacting.  It's not a conservative force, in that energy dissipated through the sliding friction force doing work is "lost" from the macroscopic movement of the sliding objects and ends up in the microscopic vibrational motion (and electronic distributions, if the objects are metals).  See here for more discussion of friction laws.

Shoe squeaking happens because of what is called "stick-slip" motion.  When I put my weight on my right shoe, the rubber sole of the shoe deforms and elastic forces (like a compressed spring) push the rubber to spread out, favoring sliding of the rubber at the rubber-floor interface.  At some point the local maximum static friction force is exceeded and the rubber begins to slide relative to the floor.  That lets the rubber "uncompress" some, so that the spring-like elastic forces are reduced, and if they fall back below \(\mu_{s}N\), that bit of sole will stick on the surface again.  A similar situation is shown in this model from Wolfram, looking at a mass (attached to an anchored spring) interacting with a conveyor belt.   If this start/stop cyclic motion happens at acoustic frequencies (around a kHz, say), it sounds like a squeak, because the start-stop motion excites sound waves in the air (and the solid surfaces).  This stick-slip phenomenon is also why brakes on cars and bikes squeal, why hinges on doors in spooky houses creak, and why that one board in your floor makes that weird noise.  It's also used in various piezoelectric actuators.
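Here's a minimal stick-slip simulation in the spirit of that Wolfram conveyor-belt model (all parameter values invented; this is a caricature, not a model of an actual shoe sole): a block resting on a moving belt, tethered to a wall by a spring, alternately sticks to the belt while the spring loads up and then slips back when static friction gives way.

```python
import numpy as np

# Stick-slip caricature: a block on a belt moving at constant speed, tied to a
# wall by a spring, with static/kinetic friction between block and belt.
# All parameter values are invented for illustration.
m, k = 0.1, 400.0                # block mass (kg), spring constant (N/m)
mu_s, mu_k, N = 0.9, 0.6, 1.0    # friction coefficients, normal force (N)
v_belt = 0.05                    # belt speed (m/s)
dt, steps = 1e-4, 20000          # time step (s), number of steps

x, v = 0.0, v_belt               # start stuck, moving with the belt
sticking, slips = True, 0
trace = []                       # kept in case you want to plot x(t)

for _ in range(steps):
    F_spring = -k * x
    if sticking:
        if abs(F_spring) > mu_s * N:     # spring finally beats static friction
            sticking = False
            slips += 1
        else:
            v = v_belt                   # stuck: carried along by the belt
    if not sticking:
        # Kinetic friction acts along the belt's motion relative to the block.
        F_fric = mu_k * N * np.sign(v_belt - v)
        v += (F_spring + F_fric) / m * dt
        # Re-stick once the block catches back up with the belt and the spring
        # force is below the static threshold.
        if v >= v_belt and abs(F_spring) <= mu_s * N:
            sticking, v = True, v_belt
    x += v * dt
    trace.append(x)

# x(t) is a sawtooth: slow loading while stuck, fast snap-back while slipping.
print(f"{slips} stick-slip cycles in {steps*dt:.1f} s "
      f"(~{slips/(steps*dt):.0f} Hz repetition)")
# With much stiffer contacts and faster loading, the repetition rate moves up
# into the kHz range, and you hear it as a squeak.
```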

Macroscopic friction emerges from a zillion microscopic interactions and is affected by the chemical makeup of the surfaces, their morphology and roughness, any adsorbed layers of moisture or contaminants (remember: every surface around you right now is coated in a few molecular layers of water and hydrocarbon contamination), and van der Waals forces, among other things.  The reason my shoes squeak in some hallways but not others has to do with how the floors have been cleaned.  I could stop the squeaking by altering the bottom surface of my soles, though I wouldn't want to use a lubricant so effective that it seriously lowers \(\mu_{s}\) and makes me slip.  

Friction is another example of an emergent phenomenon that is everywhere around us, is of enormous technological and practical importance, and shows some remarkable universality of response.  This kind of emergence is at the heart of the physics of materials, and trying to predict friction and squeaky shoes starting from elementary particle physics is just not doable. 


Sunday, July 14, 2024

Brief items - light-driven diamagnetism, nuclear recoil, spin transport in VO2

Real life continues to make itself felt in various ways this summer (and that's not even an allusion to political madness), but here are three papers (two from others and a self-indulgent plug for our work) you might find interesting.

  • There has been a lot of work in recent years, particularly by the group of Andrea Cavalleri, in which infrared light is used to pump particular vibrational modes in copper oxide superconductors (and other materials) (e.g. here).  There are long-standing correlations between the critical temperature for superconductivity, \(T_{c}\), and certain bond angles in the cuprates.  Broadly speaking, time-resolved spectroscopy of these pumped systems shows an optical conductivity with superconductor-like forms as a function of energy even well above the equilibrium \(T_{c}\), making it tempting to argue that the driven systems are showing nonequilibrium superconductivity.  At the same time, there has been a lot of interest in looking for other signatures, such as signs of the way superconductors expel magnetic flux through the famous Meissner effect.  In this recent result (arXiv here, Nature here), magneto-optic measurements in this same driven regime show signs of field build-up around the perimeter of the driven cuprate material in a magnetic field, as would be expected from Meissner-like flux expulsion.  I haven't had time to read this in detail, but it looks quite exciting.  
  • Optical trapping of nanoparticles is a very useful tool, and with modern techniques it is possible to measure the position and response of individual trapped particles to high precision (see here and here).  In this recent paper, the group of David Moore at Yale has been able to observe the recoil of such a particle due to the decay of a single atomic nucleus (which spits out an energetic alpha particle).  As an experimentalist, I find this extremely impressive, in that they are measuring the kick given to a nanoparticle roughly a trillion times more massive than the ejected helium nucleus (a rough momentum-conservation estimate of that kick is sketched just after this list).  
  • From our group, we have published a lengthy study (arXiv here, Phys Rev B here) of local/longitudinal spin Seebeck response in VO2, a material with an insulating state that is thought to be magnetically inert.  This corroborates our earlier work, discussed here.  In brief, in ideal low-T VO2, the vanadium atoms are paired up into dimers, and the expectation is that the unpaired 3d electrons on those atoms form singlets with zero net angular momentum.  The resulting material would then not be magnetically interesting (though it could support triplet excitations called triplons).  Surprisingly, at low temperatures we find a robust spin Seebeck response, comparable to what is observed in ordered insulating magnets like yttrium iron garnet.  It seems to have the wrong sign to be from triplons, and it doesn't seem possible to explain the details using a purely interfacial model.  I think this is intriguing, and I hope other people take notice.
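As referenced in the second bullet above, here's a rough momentum-conservation estimate of the size of that recoil kick (the alpha energy and particle mass are order-of-magnitude guesses chosen for illustration, not values taken from the paper):

```python
import math

# Rough scale of the recoil from a single alpha decay inside an optically
# trapped nanoparticle.  All numbers are order-of-magnitude guesses for
# illustration, not values from the paper.
MeV = 1.602e-13                 # joules per MeV
m_alpha = 6.64e-27              # kg, helium-4 nucleus
E_alpha = 6.0 * MeV             # a typical alpha decay energy
M_particle = 5e-15              # kg, a micron-scale silica sphere (very rough)

p = math.sqrt(2 * m_alpha * E_alpha)   # nonrelativistic momentum of the alpha
v_recoil = p / M_particle              # momentum conservation: the particle gets the same |p|

print(f"mass ratio (particle/alpha): {M_particle / m_alpha:.1e}")   # ~1e12
print(f"alpha momentum:              {p:.2e} kg m/s")
print(f"nanoparticle recoil speed:   {v_recoil * 1e6:.0f} micrometers per second")
```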
Hoping for more time to write as the summer progresses.  Suggestions for topics are always welcome, though I may not be able to get to everything.