Sunday, November 17, 2024

Really doing mechanics at the quantum level

A helpful ad from Science Made Stupid.
Since the development of micro- and nanoelectromechanical techniques, there has been interest in making actual mechanical widgets that show quantum behavior.  There is no reason that we should not be able to make a mechanical resonator, like a guitar string or a cantilevered beam, with a high enough resonance frequency so that when it is placed at low temperatures (\(\hbar \omega \gg k_{\mathrm{B}}T\)), the resonator can sit in its quantum mechanical ground state.  Indeed, achieving this was Science's breakthrough of the year in 2010.  
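To get a feel for the numbers, here's a quick back-of-the-envelope sketch (the 6 GHz mode frequency is just a typical value for this kind of experiment, not from any particular paper):

```python
# How cold does a resonator need to be for hbar*omega >> k_B*T?
# Thermal occupancy of a harmonic mode: n_th = 1/(exp(hbar*omega/(k_B*T)) - 1).
import numpy as np

hbar = 1.054571817e-34  # J*s
kB = 1.380649e-23       # J/K

def n_thermal(f_Hz, T_K):
    """Bose-Einstein occupancy of a mode at frequency f and temperature T."""
    x = hbar * 2 * np.pi * f_Hz / (kB * T_K)
    return 1.0 / np.expm1(x)

print(n_thermal(6e9, 0.010))  # ~3e-13: a 6 GHz mode at 10 mK is essentially always in its ground state
print(n_thermal(6e9, 4.2))    # ~14 quanta at liquid helium temperature - not good enough
```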

This past week, a paper was published from ETH Zurich in which an aluminum nitride mechanical resonator was actually used as a qubit, where the ground and first excited states of this quantum (an)harmonic oscillator represented \(|0 \rangle\) and \(|1 \rangle\).  They demonstrate actual quantum gate operations on this mechanical system (which is coupled to a more traditional transmon qubit - the setup is explained in this earlier paper).  

One key trick to being able to make a qubit out of a mechanical oscillator is to have sufficiently large anharmonicity.  An ideal, perfectly harmonic quantum oscillator has an energy spectrum given by \((n + 1/2)\hbar \omega\), where \(n\) is the number of quanta of excitation in the resonator.  In that situation, the energy difference between adjacent levels is always \(\hbar \omega\).  The problem with this from the qubit perspective: you want a quantum two-level system, and how can you controllably drive transitions between just one particular pair of levels when all of the adjacent-level transitions cost the same energy?  The authors of this recent paper have achieved a strong anharmonicity, basically making the "spring" of the mechanical resonator softer in one displacement direction than the other.  The result is that the energy difference between levels \(|0\rangle\) and \(|1\rangle\) is very different from the energy difference between levels \(|1\rangle\) and \(|2\rangle\), etc.  (In typical superconducting qubits, the resonance is not mechanical but an electrical \(LC\)-type, and a Josephson junction acts like a non-linear inductor, giving the desired anharmonic properties.)  This kind of mechanical anharmonicity means that you can effectively have interactions between vibrational excitations ("phonon-phonon"), analogous to what the circuit QED folks can do.  Neat stuff.
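Here's a minimal sketch of why anharmonicity makes selective driving possible; the spectrum and all the numbers below are made-up illustrative parameters, not the paper's:

```python
# Illustrative anharmonic spectrum: E_n/h = f0*(n + 1/2) + (alpha/2)*n*(n - 1).
# alpha != 0 makes each successive transition shift in frequency.
f0 = 3.0e9      # bare mode frequency in Hz (made up)
alpha = -20e6   # anharmonicity in Hz (made up; transmons are typically ~ -200 MHz)

def E(n):
    """Energy of level n, in units of h (so the numbers are frequencies in Hz)."""
    return f0 * (n + 0.5) + 0.5 * alpha * n * (n - 1)

f01 = E(1) - E(0)   # = f0
f12 = E(2) - E(1)   # = f0 + alpha
print(f01, f12, f01 - f12)
# A drive resonant with f01 is detuned from f12 by |alpha|, so as long as the
# drive is spectrally narrower than |alpha|, the dynamics stay in {|0>, |1>}.
```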


Tuesday, November 05, 2024

Recent papers to distract....

Time for blogging has continued to be scarce, but here are a few papers to distract (and for readers who are US citizens:  vote if you have not already done so!).

  • Reaching back, this preprint by Aharonov, Collins, and Popescu talks about a thought experiment in which angular momentum can seemingly be transferred from one region to another even though the probability of detecting spin-carrying particles between the two regions can be made arbitrarily low.  I've always found these kinds of discussions to be fun, even when the upshot for me is usually, "I must not really understand the subtleties of weak measurements in quantum mechanics."  This is a specific development based on the quantum Cheshire cat idea.  I know enough to understand that when one is talking about post-selection in quantum experiments, some questions are just not well-posed.  If we send a wavepacket of photons at a barrier, and we detect with a click a photon that (if it was in the middle of the incident wavepacket) seems to have therefore traversed the barrier faster than c, that doesn't mean much, since the parenthetical clause above is uncheckable in principle.  
  • Much more recently, this paper out last week in Nature reports the observation of superconductivity below 200 mK in a twisted bilayer of WSe2.  I believe that this is the first observation of superconductivity in a twisted bilayer of an otherwise nonsuperconducting 2D material other than graphene.  As in the graphene case, the superconductivity shows up at a particular filling of the moiré lattice, and interestingly it seems to happen around zero applied vertical electric field (displacement field) in the device.  I don't have much to say here beyond that it's good to see interesting results in a broader class of materials - that suggests that there is a more general principle at work than "graphene is special".
  • This preprint from last week from Klein et al. is pretty impressive.  It's been known for over 25 years (see here) that it is possible to use a single-electron transistor (SET) as a scannable charge sensor and potentiometer.  Historically, making these devices and operating them has been a real art.  They are fragile, static-sensitive, and fabricating them from evaporated metal on the tips of drawn optical fibers is touchy.  There have been advances in recent years from multiple quarters, and this paper demonstrates a particularly interesting idea: Use a single charge trap in a layer of WSe2 as the SET, and effectively put the sample of interest on the scannable tip.  This is an outgrowth of the quantum twisting microscope.

Sunday, October 20, 2024

Guide to faculty searches, 2024 edition

As you can tell from my posting frequency lately, I have been unusually busy.  I hope to be writing about more condensed matter and nano science soon.   In the meantime, I realized that I have not re-posted or updated my primer on how tenure-track faculty searches work in physics since 2015.  Academia hasn't changed much since then, but even though the previous posts can be found via search engines, it's probably a good idea to put this out there again.  Interestingly, here is a link to a Physics Today article from 2001 about this topic, and here is a link to the same author's 2020 updated version.

Here are the steps in the typical tenure-track faculty search process.  Non-tenure-track hiring can be very similar depending on the institution.  (Just to define the terminology:  "Teaching professor" usually = non-tenure-track, expected to teach several courses per semester, usually no expectations of research except perhaps education research, no lab space.  "Research professor" usually = non-tenure-track, research responsibilities and usually not expected to teach; often entirely paid on research grant funds, either their own or those of a tenure-track PI.)
  • The search gets authorized. This is a big step - it determines what the position is, exactly: junior-only vs. open to junior or senior candidates; a new faculty line vs. a replacement vs. a bridging position (i.e. we'll hire now, and when X retires in three years, we won't look for a replacement then). The main challenges are twofold: (1) Ideally the department has some strategic plan in place to determine the area that they'd like to fill. Note that not all departments do this - occasionally you'll see a very general ad out there that basically says, "ABC University Dept. of Physics is authorized to search for a tenure-track position in, umm, physics. We want to hire the smartest person that we can, regardless of subject area." The challenge with this is that there may actually be divisions within the department about where the position should go, and these divisions can play out in a process where different factions within the department veto each other. This is pretty rare, but not unheard of. (2) The university needs to have the resources in place to make a hire.  In tight financial times, this can become more challenging. I know of public universities that had to cancel searches in 2008/2009, even after authorization, when budget cuts got too severe. A well-run university will be able to make these judgments with some lead time and not have to back-track.
  • Note that some universities and colleges/schools within universities have other processes outside the traditional "department argues for and gets a faculty line to fill" method.  "Cluster hiring", for example, is when, say, the university decides to hire several faculty members whose research is all thematically related to "energy and sustainability", a broad topic that could clearly involve chemistry, physics, materials science, chemical engineering, electrical engineering, etc.  The logistics of cluster hiring can vary quite a bit from place to place.  I have opinions about the best ways to do this; one aspect that my own institution does well is to recognize that anyone hired has to have an actual primary departmental home - that way the tenure process and the teaching responsibilities are unambiguous.
  • The search committee gets put together. In my dept., the chair asks people to serve. If the search is in condensed matter, for example, there will be several condensed matter people on the committee, as well as representation from the other major groups in the department, and one knowledgeable person from outside the department (in chemistry or ECE, for example). The chairperson or chairpeople of the committee meet with the committee or at least those in the focus area, and come up with draft text for the ad.  In cross-departmental searches (as in the cluster hiring described above), a dean or equivalent would likely put together the committee.
  • The ad gets placed, and canvassing of lots of people who might know promising candidates begins. A committed effort is made to make sure that all qualified women and underrepresented minority candidates know about the position and are asked to apply (reaching out through relevant professional societies, social media, society mailing lists - this is in the search plan). Generally, the ad really does list what the department is interested in. It's a huge waste of everyone's time to have an ad that draws a large number of inappropriate (i.e. don't fit the dept.'s needs) applicants. The exception to this is the generic ad like the type I mentioned above. Back when I was applying for jobs, MIT and Berkeley ran the same ad every year, grazing for talent. They seem to do just fine. The other exception is when a university already knows who they want to get for a senior position, and writes an ad so narrow that only one person is really qualified. I've never seen this personally, but I've heard anecdotes.
  • In the meantime, a search plan is formulated and approved by the dean. The plan details how the search will work, what the timeline is, etc. This plan is largely a checklist to make sure that we follow all the right procedures and don't screw anything up. It also brings to the fore the importance of "beating the bushes" - see above. A couple of people on the search committee will be particularly in charge of oversight on affirmative action/equal opportunity issues.
  • The dean usually meets with the committee and we go over the plan, including a refresher for everyone on what is or is not appropriate for discussion in an interview (for an obvious example, you can't ask about someone's religion, or their marital status).
  • Applications come in.  This is all done electronically, thank goodness.  The fact that I feel this way tells you how old I am.  Some online systems can be clunky, since occasionally universities try to use the same software to hire faculty as they do to hire groundskeepers, but generally things go smoothly.  The two most common software systems out there in the US are Interfolio and Academic Jobs Online.  Each has its own idiosyncrasies.  Every year when I post this, someone argues that it's ridiculous to make references write letters, and that the committee should do a sort first and ask for letters later.  I understand this perspective, but I tend to disagree. Letters can contain an enormous amount of information, and sometimes it is possible to identify outstanding candidates due to input from the letters that might otherwise be missed. (For example, suppose someone's got an incredible piece of postdoctoral work about to come out that hasn't been published yet. It carries more weight for letters to highlight this, since the candidate isn't exactly unbiased about their own forthcoming publications.)  
  • The committee begins to review the applications. Generally the members of the committee who are from the target discipline do a first pass, to at least weed out the inevitable applications from people who are not qualified according to the ad (i.e. no PhD; senior people wanting a senior position even though the ad is explicitly for a junior slot; people with research interests or expertise in the wrong area). Applications are roughly rated by everyone into a top, middle, and bottom category. Each committee member comes up with their own ratings, so there is naturally some variability from person to person. Some people are "harsh graders". Some value high impact publications more than numbers of papers. Others place more of an emphasis on the research plan, the teaching statement, or the rec letters. Yes, people do value the teaching statement - we wouldn't waste everyone's time with it if we didn't care. Interestingly, often (not always) the people who are the strongest researchers also have very good ideas and actually care about teaching. This shouldn't be that surprising. Creative people can want to express their creativity in the classroom as well as the lab.  "Type A" organized people often bring that intensity to teaching as well.
  • Once all the folders have been reviewed and rated, a relatively short list (say 20-25 or so out of 120 applications) is formed, and the committee meets to hash that down to, in the end, four or five to invite for interviews. In my experience, this happens by consensus, with the target discipline members having a bit more sway in practice since they know the area and can appreciate subtleties - the feasibility and originality of the proposed research, the calibration of the letter writers (are they first-rate folks? Do they always claim every candidate is the best postdoc they've ever seen?). I'm not kidding about consensus; I can't recall a case where there really was a big, hard argument within a committee on which I've served. I know I've been lucky in this respect, and that other institutions can be much more feisty. The best, meaning most useful, letters, by the way, are the ones that say things like "This candidate is very much like CCC and DDD were at this stage in their careers." Real comparisons like that are much more helpful than "The candidate is bright, creative, and a good communicator." Regarding research plans, the best ones (for me, anyway) give a good sense of near-term plans, medium-term ideas, and the long-term big picture, all while being relatively brief and written so that a general committee member can understand much of it (why the work is important, what is new) without being an expert in the target field. It's also good to know that, at least at my university, if we come across an applicant that doesn't really fit our needs, but meshes well with an open search in another department, we send over the file. This, like the consensus stuff above, is a benefit of good, nonpathological communication within the department and between departments.
That's pretty much it up to the interview stage. No big secrets. No automated ranking schemes based exclusively on h numbers or citation counts.  

Update:  As pointed out by a commenter, a relatively recent wrinkle is the use of Zoom interviews.  Rather than inviting 5-ish candidates to campus for interviews, many places are now doing some Zoom interviews with a larger pool (more like 10 candidates) and then down-selecting to a smaller number to invite to campus.   Making sure that the interview formats are identical across all the candidates (e.g., having scripts to make sure that the same questions are always asked in the same order) is one way to mitigate unintentional biases that can otherwise be present.

Tips for candidates:

  • Don't wrap your self-worth up in this any more than is unavoidable. It's a game of small numbers, and who gets interviewed where can easily be dominated by factors extrinsic to the candidates - what a department's pressing needs are, what the demographics of a subdiscipline are like, etc. Every candidate takes job searches personally to some degree because of our culture and human nature, but don't feel like this is some evaluation of you as a human being.
  • Don't automatically limit your job search because of geography unless you have some overwhelming personal reasons.  I almost didn't apply to Rice because neither my wife nor I were particularly thrilled about Texas, despite the fact that neither of us had ever actually visited the place. Limiting my search that way would've been a really poor decision - I've now been here 24+ years, and we've enjoyed ourselves (my occasional Texas politics blog posts aside).
  • Really read the ads carefully and make sure that you don't leave anything out. If a place asks for a teaching statement or a statement about mentoring or inclusion, put some real thought into what you say - they want to see that you have actually given this some thought, or they wouldn't have asked for it.
  • Proof-read cover letters and other documents.  Saying that you're very excited about the possibilities at University A when you sent that application to University B is a bit awkward.
  • Research statements are challenging because you need to appeal to both the specialists on the committee and the people who are way outside your area. My own research statement back in the day was around three pages. If you want to write a lot more, I recommend having a brief (2-3 page) summary at the beginning followed by more details for the specialists. It's good to identify near-term, mid-range, and long-term goals - you need to think about those timescales anyway. Don't get bogged down in specific technique details unless they're essential. You need committee members to come away from the proposal knowing "These are the Scientific Questions I'm trying to answer", not just "These are the kinds of techniques I know". I know that some people may think that research statements are more of an issue for experimentalists, since the statements indicate a lot about lab and equipment needs. Believe me - research statements are important for all candidates. Committee members need to know where you're coming from and what you want to do - what kinds of problems interest you and why. The committee also wants to see that you actually plan ahead. These days it's extremely hard to be successful in academia by "winging it" in terms of your research program.  I would steer clear of any use of AI help in writing any of the materials, unless it's purely at the "please check this for grammatical mistakes and typographical errors" level. 
  • Be realistic about what undergrads, grad students, and postdocs are each capable of doing. If you're applying for a job at a four-year college, don't propose to do work that would require $1.5M in startup and an experienced grad student putting in 60 hours a week.
  • Even if they don't ask for it explicitly, you need to think about what resources you'll need to accomplish your research goals. This includes equipment for your lab as well as space and shared facilities. Talk to colleagues and get a sense of what the going rate is for start-up in your area. Remember that four-year colleges do not have the resources of major research universities. Start-up packages at a four-year college are likely to be 1/4 of what they would be at a big research school (though there are occasional exceptions). Don't shave pennies - this is the one prime chance you get to ask for stuff! On the other hand, don't make unreasonable requests. No one is going to give a junior person a start-up package comparable to that of a mid-career scientist.
  • Pick letter-writers intelligently. Actually check with them that they're willing to write you a nice letter - it's polite and it's common sense. (I should point out that truly negative letters are very rare.) Beyond the obvious two (thesis advisor, postdoctoral mentor), it can sometimes be tough finding an additional person who can really say something about your research or teaching abilities. Sometimes you can ask those two for advice about this. Make sure your letter-writers know the deadlines and the addresses. The more you can do to make life easier for your letter writers, the better.
As always, more feedback in the comments is appreciated.

Tuesday, October 01, 2024

CHIPS and Science - the reality vs the aspiration

I already wrote about this issue here back in August, but I wanted to highlight a policy statement that I wrote with colleagues as part of Rice's Baker Institute's Election 2024: Policy Playbook, which "delivers nonpartisan, expert insights into key issues at stake on the 2024 campaign trail and beyond. Presented by Rice University and the Baker Institute for Public Policy, the series offers critical context, analysis, and recommendations to inform policymaking in the United States and Texas."

The situation is summarized in this graph.  It will be very difficult to achieve the desired policy goals of the CHIPS and Science Act if congressional appropriations don't come remotely close to the targets in the Act.  What is not shown in this plot are the cuts to the STEM education pieces of NSF and other agencies, despite the fact that a main goal of the Act is supposed to be education and workforce development to support the semiconductor industry.

Anyway, please take a look.  It's a very brief document.

Sunday, September 29, 2024

Annual Nobel speculation thread

Not that prizes are the be-all and end-all, but this has become an annual tradition.  Who are your speculative laureates this year for physics and chemistry?  As I did last year and for several years before, I will put forward my usual thought that the physics prize could be Aharonov and Berry for geometric phases in physics (even though Pancharatnam is intellectually in there and died in 1969).  This is a long shot, as always. Given that attosecond experiments were last year, and AMO/quantum info foundations were in 2022, and climate + spin glasses/complexity were 2021, it seems like astro is "due".   

Sunday, September 22, 2024

Lots to read, including fab for quantum and "Immaterial Science"

Sometimes there are upticks in the rate of fun reading material.  In the last few days:

  • A Nature paper has been published by a group of authors predominantly from IMEC in Belgium, in which they demonstrate CMOS-compatible manufacturing of superconducting qubit hardware (Josephson junctions, transmon qubits, based on aluminum) across 300 mm diameter wafers.  This is a pretty big deal - their method for making the Al/AlOx/Al tunnel junctions is different than the shadow evaporation method routinely used in small-scale fab.  They find quite good performance of the individual qubits with strong uniformity across the whole wafer, testing representative random devices.  They did not actually do multi-qubit operations, but what they have shown is certainly a necessary step if there is ever going to be truly large-scale quantum information processing based on this kind of superconducting approach.
  • Interestingly, Friday on the arXiv, a group led by researchers at Karlsruhe demonstrated spin-based quantum dot qubits in Si/SiGe, made on 300 mm substrates.  This fab process comes complete with an integrated Co micromagnet for help in conducting electric dipole spin resonance.  They demonstrate impressive performance in terms of single-qubit properties and operations, with the promise that the coherence times would be at least an order of magnitude longer if they had used isotopically purified 28Si material.  (The nuclear spins of the stray 29Si atoms in the ordinary Si used here are a source of decoherence.)  
So, while tremendous progress has been made with atomic physics approaches to quantum computing (tweezer systems like this, ion trapping), it's not wise to count out the solid-state approaches.  The engineering challenges are formidable, but solid-state platforms are based on fab approaches that can make billions of transistors per chip, with complex 3D integration.

  • On the arXiv this evening is also this review about "quantum geometry", which seems like a pretty readable overview of how the underlying structure of the wavefunctions in crystalline solids (the part historically neglected for decades, but now appreciated through its relevance to topology and a variety of measurable consequences) affects electronic and optical response.  I just glanced at it, but I want to make time to look it over in detail.
  • Almost 30 years ago, Igor Dolgachev at Michigan did a great service by writing up a brief book entitled "A Brief Introduction to Physics for Mathematicians".  That link is to the pdf version hosted on his website.  Interesting to see how this is presented, especially since a number of approaches routinely shown to undergrad physics majors (e.g., almost anything we do with Dirac delta functions) generally horrify rigorous mathematics students.
  • Also fun (big pdf link here) is the first fully pretty and typeset issue of the amusing Journal of Immaterial Science, shown at right.  There is a definite chemistry slant to the content, and I encourage you to read their (satirical) papers as they come out on their website.


Monday, September 16, 2024

Fiber optics + a different approach to fab

 Two very brief items of interest:

  • This article is a nice popular discussion of the history of fiber optics and the remarkable progress it's made for telecommunications.  If you're interested in a more expansive but very accessible take on this, I highly recommend City of Light by Jeff Hecht (not to be confused with Eugene Hecht, author of the famous optics textbook).
  • I stumbled upon an interesting effort by Yokogawa, the Japanese electronics manufacturer, to provide an alternative path for semiconductor device prototyping that they call minimal fab.  The idea is, instead of prototyping circuits on 200 mm wafers or larger (the industry standard for large-scale production is 200 mm or 300 mm; efforts to go up to 450 mm wafers have been shelved for now), there are times when it makes sense to work on 12.5 mm substrates.  Their setup uses maskless photolithography and is intended to be used without needing a cleanroom.  Admittedly, this limits it strongly in terms of device size to 1970s-era micron scales (presumably this could be pushed to 1-2 microns with a fancier litho tool), and it's designed for single-layer processing (not many-layer alignments with vias).  Still, this could be very useful for startup efforts, and apparently it's so simple that a child could use it.

Saturday, September 07, 2024

Seeing through tissue and Kramers-Kronig

There is a paper in Science this week that is just a great piece of work.  The authors find that by dyeing living tissue with a particular biocompatible dye molecule, they can make that tissue effectively transparent, so you can see through it.  The paper includes images (and videos) that are impressive. 
Seeing into a living mouse, adapted from here.

How does this work?  There are a couple of layers to the answer.  

Light scatters at the interface between materials with dissimilar optical properties (summarized mathematically in the frequency-dependent index of refraction \(n\), related to the complex dielectric function \(\tilde{\epsilon}\); light within a material travels with a phase velocity \(c/n\)).  Water and fatty molecules have different indices, for example, so little droplets of fat in suspension scatter light strongly, which is why milk is, well, milky.  This kind of scattering is mostly why visible light doesn't make it through your skin very far.  Lower the mismatch between indices, and you turn down scattering at the interfaces.  Here is a cute demo of this that I pointed out about 15 (!) years ago:


Frosted glass scatters visible light well because it has surface bumpiness on the scale of the wavelength of visible light, and the index of refraction of glass is about 1.5 for visible light, while air has an index close to 1.  Fill in those bumps with something closer to the index of glass, like clear plastic packing tape, and suddenly you can see through frosted glass.  

In the dyed tissue, the index of refraction of the water-with-dye becomes closer to that of the fatty molecules that make up cell membranes, making that layer of tissue have much-reduced scattering, and voilà, you can see a mouse's internal organs.  Amazingly, this index matching idea is the plot device in H. G. Wells' The Invisible Man!
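To see why a modest index shift matters so much, consider the normal-incidence Fresnel reflectance at a single interface (a minimal sketch; the index values below are rough illustrative numbers, not taken from the paper):

```python
# Normal-incidence Fresnel reflectance at an interface between indices n1, n2.
def R(n1, n2):
    """Fraction of light intensity reflected at a single flat interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Rough illustrative indices (not from the paper):
print(R(1.33, 1.46))   # water vs. lipid: ~2e-3 per interface
print(R(1.42, 1.46))   # dye-raised water index vs. lipid: ~2e-4, ~10x less
# Tissue presents an enormous number of such interfaces, so even a 10x
# reduction per interface compounds into a dramatic gain in transparency.
```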

The physics question is then, how and why does the dye, which looks yellow and absorbs strongly in the blue/purple, change the index of refraction of the water in the visible?  The answer lies with a concept that very often seems completely abstract to students, the Kramers-Kronig relations.  

We describe how an electric field (from the light) polarizes a material using the frequency-dependent complex permittivity \(\tilde{\epsilon}(\omega) = \epsilon'(\omega) + i \epsilon''(\omega)\), where \(\omega\) is the frequency.  What this means is that there is a polarization that happens in-phase with the driving electric field (proportional to the real part of \(\tilde{\epsilon}(\omega)\)) and a polarization that lags or leads the phase of the driving electric field (the imaginary part, which leads to dissipation and absorption).   

The functions \(\epsilon'(\omega)\) and \(\epsilon''(\omega)\) can't be anything you want, though. Thanks to causality, the response of a material now can only depend on what the electric field has done in the past.  That restriction means that, when we decide to work in the frequency domain by Fourier transforming, there are relationships, the K-K relations, that must be obeyed between integrals of \(\epsilon'(\omega)\) and \(\epsilon''(\omega)\).  The wikipedia page has both a traditional (and to many students, obscure) derivation and a time-domain picture.  

So, the dye molecules, with their very strong absorption in the blue/purple, make \(\epsilon''(\omega)\) really large in that frequency range.  The K-K relations require some compensating changes in \(\epsilon'(\omega)\) at lower frequencies to make up for this, and the result is the index matching described above.  
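For the curious, here's a numerical illustration of the K-K machinery (a sketch with a made-up absorption line, not the paper's analysis): take a Lorentzian peak in \(\epsilon''(\omega)\) and reconstruct \(\epsilon'(\omega)\) from it.

```python
# Sketch: reconstruct eps'(w) from a made-up Lorentzian absorption eps''(w)
# via the Kramers-Kronig relation
#   eps'(w) = 1 + (2/pi) P.V. Integral_0^inf  w' eps''(w') / (w'^2 - w^2) dw'
import numpy as np

w = np.linspace(0.01, 10.0, 4000)        # frequency grid (arbitrary units)
w0, gamma, A = 5.0, 0.3, 1.0             # line center, width, strength (made up)
eps2 = A * gamma * w / ((w0**2 - w**2)**2 + (gamma * w)**2)

def kk_real(w, eps2):
    """Crude principal-value evaluation of the K-K integral on a uniform grid."""
    dw = w[1] - w[0]
    eps1 = np.ones_like(w)
    for i, wi in enumerate(w):
        integrand = w * eps2 / (w**2 - wi**2)
        integrand[i] = 0.0               # drop the singular point (P.V. handling)
        eps1[i] += (2.0 / np.pi) * np.sum(integrand) * dw
    return eps1

eps1 = kk_real(w, eps2)
# eps' (and hence the index n ~ sqrt(eps')) is pushed *up* at frequencies below
# the absorption peak - the same physics that lets a blue/purple-absorbing dye
# raise the index of water across the visible.
print(eps1[w < 4.0].max(), eps1[0])
```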

This work seems like it should have important applications in medical imaging, and it's striking to me that this had not been done before.  The K-K relations have been known in their present form for about 100 years.  It's inspiring that new, creative insights can still come out of basic waves and optics.

Saturday, August 31, 2024

Items of interest

The start of the semester has been very busy, but here are some items that seem interesting:

  • As many know, there has been a lot of controversy in recent years about high pressure measurements of superconductivity.  Here is a first-hand take by one of the people who helped bring the Dias scandal into the light.  It's a fascinating if depressing read.
  • Related: a major challenge in the whole diamond anvil cell search for superconductivity is bringing to bear techniques more robust and determinative than 4-point resistance measurements and optical spectroscopy.  Back in March I had pointed out a Nature paper incorporating nitrogen-vacancy centers into the diamond anvils themselves to attempt in situ magnetometry of the Meissner effect.  Earlier this month, I saw this Phys Rev Lett paper, in which the authors have incorporated a tunnel junction directly onto the diamond anvil facet.  In addition to the usual Au leads for conduction measurements, they also have Ta leads that are coated with a native Ta2O5 oxide layer that functions as a tunnel barrier.  They've demonstrated clean-looking tunneling spectroscopy on sulphur at 160 GPa, which is pretty impressive.  Hopefully this will eventually be applied to the higher pressures and more dramatic systems of, e.g., H2S, reported to show 203 K superconductivity.  I do wonder if they will have problems applying this to hydrides, as one could imagine that having lots of hydrogen around might not be good for the oxide tunnel barriers. 
  • Saw a talk this week by Dr. Dev Shenoy, head of the US DoD's microelectronics effort.  It was very interesting and led me down the rabbit hole of learning more about the extreme ultraviolet lithography machines that are part of the state of the art.  The most advanced of these are made by ASML, are as big as a freight car, and cost almost $400M apiece.  Intel put up a video about taking delivery of one.  The engineering is pretty ridiculous.  Working with 13.5 nm light, you have to use mirrors rather than lenses, and the flatness/precision requirements on the optics are absurd.  It would really be transformative if someone could pull a SpaceX and come up with an approach that works as well but only costs $50M per machine, say.  (Of course, if it were easy, someone would have done it.  I'm also old enough to remember Bell Labs' effort at a competing approach, projective electron beam lithography.)
  • Lastly, Dan Ralph from Cornell has again performed a real pedagogical service to the community.  A few years ago, he put on the arXiv a set of lecture notes about the modern topics of Berry curvature and electronic topology meant to slot into an Ashcroft and Mermin solid state course.  Now he has uploaded another set of notes, this time on electron-electron interactions, the underpinnings of magnetism, and superconductivity, that again are at the right level to modernize and complement that kind of a course.  Highly recommended.

Saturday, August 17, 2024

Experimental techniques: bridge measurements

When we teach undergraduates about materials and measuring electrical resistance, we tend to gloss over the fact that there are specialized techniques for this - it's more than just hooking up a battery and an ammeter.  If you want to get high precision results, such as measuring the magnetoresistance \(\Delta R(B)\), where \(B\) is a magnetic field, to a part in \(10^{5}\) or better, more sophisticated tools are needed.  Bridge techniques constitute a class of these, where instead of, say, measuring the voltage drop across a sample with a known current, you measure the difference between that voltage drop and the voltage drop across a known reference resistor.   

Why is this good?  Well, imagine that your sample resistance is something like 1 kOhm, and you want to look for changes in that resistance on the order of 10 milliOhms.  Often we need to use relatively low currents because in condensed matter physics we are doing low temperature measurements and don't want to heat up the sample.  If you used 1 microAmp of current, then the voltage drop across the sample would be about 1 mV and the changes you're looking for would be 10 nV, which is very tough to measure on top of a 1 mV background.  If you had a circuit where you were able to subtract off that 1 mV and only look at the changes, this is much more do-able.
Wheatstone bridge, from wikipedia

Sometimes in undergrad circuits, we teach the Wheatstone bridge, shown at right.  The idea is, you dial around the variable resistor \(R_{2}\) until the voltage \(V_{G} = 0\).  When the bridge is balanced like this, that means that \(R_{2}/R_{1} = R_{x}/R_{3}\), where \(R_{x}\) is the sample you care about and \(R_{1}\) and \(R_{3}\) are reference resistors that you know.  Now you can turn up the sensitivity of your voltage measurement to be very high, since you're looking at deviations away from \(V_{G} = 0\).   
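For concreteness, here's a little sketch of the bridge arithmetic using the example numbers from above (the component values are illustrative, and I'm assuming an ideal high-impedance detector):

```python
# Wheatstone bridge arithmetic (illustrative component values).
def v_bridge(V_in, R1, R2, R3, Rx):
    """Detector voltage V_G between the two bridge midpoints."""
    return V_in * (R2 / (R1 + R2) - Rx / (R3 + Rx))

V_in = 2e-3           # volts: puts ~1 uA through each 2 kOhm arm
R1 = R3 = 1000.0      # known reference resistors (Ohms)
R2 = 1000.0           # variable resistor, dialed to balance

print(v_bridge(V_in, R1, R2, R3, Rx=1000.0))    # 0.0 - balanced
print(v_bridge(V_in, R1, R2, R3, Rx=1000.01))   # ~ -5e-9 V for a 10 mOhm change
# The 10 mOhm signal now sits on top of zero rather than on top of ~1 mV,
# so the detector gain can be turned way up.
```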

You can do better in sensitivity by using an AC voltage source instead of the battery shown, and then use a lock-in amplifier for the voltage detection across the bridge.  That helps avoid some slow, drift-like confounding effects or thermoelectric voltages. 

Less well-known:  Often in condensed matter and nanoscale physics, the contact resistances where the measurement leads are attached aren't negligible.  If we are fortunate we can set up a four-terminal measurement that mitigates this concern, so that the voltage measured on the sample is ideally not influenced by the contacts where current is injected or collected.  
A Kelvin bridge, from wikipedia

Is there a way to do a four-terminal bridge measurement?  Yes, it's called a Kelvin bridge, shown at right in its DC version.  When done properly, you can use variable resistors to null out the contact resistances.  This was originally developed back in the late 19th/early 20th century to measure resistances smaller than an Ohm or so (and so even small contact resistances can be relevant).  In many solid state systems, e.g., 2D materials, contact resistances can be considerably larger, so this comes in handy even for larger sample resistances.  
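Here's a toy numerical comparison (made-up illustrative resistances) of why the four-terminal geometry helps; the Kelvin bridge builds this same idea into a balanced-bridge measurement:

```python
# Toy comparison of two- vs. four-terminal measurement (made-up numbers).
R_sample = 0.05     # Ohms: what you actually want to measure
R_contact = 2.0     # Ohms per contact (leads, wirebonds, interfaces)
I = 1e-3            # amps of measurement current

V_two = I * (R_sample + 2 * R_contact)  # current and voltage share contacts
V_four = I * R_sample                   # separate voltage probes carry ~no
                                        # current, so ~no drop at their contacts
print(V_two / I)    # 4.05 Ohms - hopelessly dominated by the contacts
print(V_four / I)   # 0.05 Ohms - the sample itself
```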

There are also capacitance bridges and inductance bridges - see here for something of an overview.  A big chunk of my PhD involved capacitance bridge measurements to look at changes in the dielectric response with \(10^{-7}\) levels of sensitivity.

One funny story to leave you:  When I was trying to understand all about the Kelvin bridge while I was a postdoc, I grabbed a book out of the Bell Labs library about AC bridge techniques that went back to the 1920s.  The author kept mentioning something cautionary about looking out for "the head effect".  I had no idea what this was; the author was English, and I wondered whether this was some British/American language issue, like how we talk about electrical "ground" in the US, but in the UK they say "earth".  Eventually I realized what this was really about.  Back before lock-ins and other high sensitivity AC voltmeters were readily available, it was common to run an AC bridge at a frequency of something like 1 kHz, and to use a pair of headphones as the detector.  The human ear is very sensitive, so you could listen to the headphones and balance the bridge until you couldn't hear the 1 kHz tone anymore (meaning the AC \(V_{G}\) signal on the bridge was very small).  The "head effect" is when you haven't designed your bridge correctly, so that the impedance of your body screws up the balance of the bridge when you put the headphones on.  The "head effect" = bridge imbalance because of the capacitance or inductance of your head.  See here.

Sunday, August 04, 2024

CHIPS and Science - the reality vs the aspiration

Note: this post is a semi-rant about US funding for science education; if this isn't your cup of tea, read no further.


Two years ago, the CHIPS and Science Act (link goes to the full text of the bill, via the excellent congress.gov service of the Library of Congress) was signed into law.  This has gotten a lot of activity going in the US related to the semiconductor industry, as briefly reviewed in this recent discussion on Marketplace.  There are enormous investments by industry in semiconductor development and manufacturing in the US (as well as funding through US agencies such as DARPA, e.g.).  It was recognized in the act that the long-term impact of all of this will be contingent in part upon "workforce development" - having ongoing training and education of cohorts of people who can actually support all of this.  The word "workforce" shows up 222 times in the actual bill.   Likewise, there is appreciation that basic research is needed to set up sustained success and competitiveness - that's one reason why the act authorizes $81B over five years for the National Science Foundation, which would have roughly doubled the NSF budget over that period.

The reality has been sharply different.  Authorizations are not the same thing as appropriations, and the actual appropriation last year fell far short of the aspirational target.  NSF's budget for FY24 was $9.085B (see here) compared with $9.899B for FY23; the STEM education piece was $1.172B in FY24 (compared to $1.371B in FY23), a roughly 14.5% year-over-year reduction.  That's even worse than the House version of the budget, which had proposed to cut STEM education by 12.8%.  In the current budget negotiations (see here), the House is now proposing an additional 14.7% cut specifically to STEM education.  Just to be clear, that is the part of NSF's budget that is supposed to oversee the workforce development parts of CHIPS and Science.  Specifically, the bill says that the NSF is supposed to support "undergraduate scholarships, including at community colleges, graduate fellowships and traineeships, postdoctoral awards, and, as appropriate, other awards, to address STEM workforce gaps, including for programs that recruit, retain, and advance students to a bachelor's degree in a STEM discipline concurrent with a secondary school diploma, such as through existing and new partnerships with State educational agencies."  This is also the part of NSF that does things like Research Experience for Undergraduates and Research Experience for Teachers programs, and postdoctoral fellowships.  

Congressional budgeting in the US is insanely complicated and fraught for many reasons.  Honest, well-motivated people can have disagreements about priorities and appropriate levels of government spending.  That said, I think it is foolish not to support the educational foundations needed for the large investments in high tech manufacturing and infrastructure.  The people who oppose this kind of STEM education support tend to be the same people who also oppose allowing foreign talent into the country in high tech sectors.  If the US is serious about this kind of investment for future tech competitiveness, half-measures and failing to follow through are decidedly not helpful.

Sunday, July 28, 2024

Items of interest

 A couple of interesting papers that I came across this week:

  • There has long been an interest in purely electronic cooling techniques (no moving parts!) that would work at cryogenic temperatures.  You're familiar with ordinary evaporative cooling - that's what helps cool down your tea or coffee when you blow across the top of your steaming mug, and it's what makes you feel cold when you step out of the shower.  In evaporative cooling, the most energetic molecules can escape from the liquid into the gas phase, and the remaining molecules left behind reestablish thermal equilibrium at a lower temperature.  One can make a tunnel junction between a normal metal and a superconductor, and under the right circumstances, the hottest (thermally excited) electrons in the normal metal can be driven into the superconductor, leading to net cooling of the remaining electrons in the normal metal.  This is pretty neat, but it's had somewhat limited utility due to relatively small cooling power - here is a non-paywalled review that includes discussion of these approaches.  This week, the updated version of this paper went on the arXiv, demonstrating that in Al/AlOx/Nb junctions it is possible to cool from about 2.4 K to about 1.6 K, purely via electronic means.  This seems like a nice advance, especially as the quantum info trends have pushed hard on improving wafer-level Nb electronics.
  • I've written before about chirality-induced spin selectivity (see the first bullet here).  This is a still poorly understood phenomenon in which electrons passing through a chiral material acquire a net spin polarization, depending on the handedness of the chirality and the direction of the current.  This new paper in Nature is a great demonstration.  Add a layer of chiral perovskite to the charge injection path of a typical III-V multiple quantum well semiconductor LED, and the outgoing light acquires a net circular polarization, the sign of which depends on the sign of the chirality.  This works at room temperature, by the way.  

Saturday, July 20, 2024

The physics of squeaky shoes

In these unsettling and trying times, I wanted to write about the physics of a challenge I'm facing in my professional life: super squeaky shoes.  When I wear a particularly comfortable pair of shoes at work and walk down certain hallways in my building (but not all of them), my shoes squeak very loudly with every step.  How and why does this happen, physically?  

The shoes in question.

To understand this, we need to talk a bit about friction, the sideways interfacial force between two surfaces when one surface is sheared (or attempted to be sheared) with respect to the other.  (Tribology is the study of friction, btw.)  In introductory physics we teach some (empirical) "laws" of friction, described in detail on the wikipedia page linked above as well as here:

  1.  For static friction (no actual sliding of the surfaces relative to each other), the frictional force \(F_{f} \le \mu_{s}N\), where \(\mu_{s}\) is the "coefficient of static friction" and \(N\) is the normal force (pushing the two surfaces together).  The force is directed in the plane and takes on the magnitude needed so that no sliding happens, up to its maximum value, at which point the surfaces start slipping relative to each other.
  2. For sliding or kinetic friction, \(F_{f} = \mu_{k}N\), where \(\mu_{k}\) is the coefficient of kinetic or sliding friction, and the force is directed in the plane to oppose the relative sliding motion.  The friction coefficients depend on the particular materials and their surface conditions.
  3. The friction forces are independent of the apparent contact area between the surfaces.  
  4. The kinetic friction force is independent of the relative sliding speed between the surfaces.
These "laws", especially (3) and (4), are truly weird once we know a bit more about physics, and I discuss this a little in my textbook.  The macroscopic friction force is emergent, meaning that it is a consequence of the materials being made up of many constituent particles interacting.  It's not a conservative force, in that energy dissipated through the sliding friction force doing work is "lost" from the macroscopic movement of the sliding objects and ends up in the microscopic vibrational motion (and electronic distributions, if the objects are metals).  See here for more discussion of friction laws.

Shoe squeaking happens because of what is called "stick-slip" motion.  When I put my weight on my right shoe, the rubber sole of the shoe deforms and elastic forces (like a compressed spring) push the rubber to spread out, favoring sliding of the rubber at the rubber-floor interface.  At some point, the local maximum static friction force is exceeded and the rubber begins to slide relative to the floor.  That lets the rubber "uncompress" some, so that the spring-like elastic forces are reduced, and if they fall back below \(\mu_{s}N\), that bit of sole will stick on the surface again.  A similar situation is shown in this model from Wolfram, looking at a mass (attached to an anchored spring) interacting with a conveyor belt.   If this start/stop cyclic motion happens at acoustic frequencies, in the kHz range, it sounds like a squeak, because the start-stop motion excites sound waves in the air (and the solid surfaces).  This stick-slip phenomenon is also why brakes on cars and bikes squeal, why hinges on doors in spooky houses creak, and why that one board in your floor makes that weird noise.  It's also used in various piezoelectric actuators.
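If you want to play with this, here's a minimal stick-slip toy model along the lines of that Wolfram demo - a mass tethered by a spring, dragged by a moving surface, with \(\mu_{s} > \mu_{k}\).  All the parameter values are made up for illustration:

```python
# Toy stick-slip model: a mass tethered to a wall by a spring, resting on a
# surface that moves at constant speed v_belt (all parameter values made up).
m, k, N = 0.1, 200.0, 1.0          # mass (kg), spring constant (N/m), normal force (N)
mu_s, mu_k = 0.6, 0.4              # static > kinetic: the essential ingredient
v_belt, dt = 0.05, 1e-5            # belt speed (m/s), time step (s)

x, v, stuck, t = 0.0, v_belt, True, 0.0
slip_events = []
for _ in range(int(1.0 / dt)):     # simulate one second
    t += dt
    if stuck:
        v = v_belt                 # riding with the belt; the spring stretches
        x += v * dt
        if k * x > mu_s * N:       # static friction gives way
            stuck = False
            slip_events.append(t)
    else:
        a = (-k * x + mu_k * N) / m   # kinetic friction drags along the belt motion
        v += a * dt
        x += v * dt
        if abs(v - v_belt) < 1e-4 and abs(k * x) <= mu_s * N:
            stuck = True           # relative sliding stops: re-stick

print(len(slip_events), "slip events per second")
```

With these numbers the slip cycle repeats at roughly 10 Hz; a real sole is effectively many much stiffer, lighter patches of rubber acting in parallel, which is how the repetition rate gets pushed up into the audible kHz range.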

Macroscopic friction emerges from a zillion microscopic interactions and is affected by the chemical makeup of the surfaces, their morphology and roughness, any adsorbed layers of moisture or contaminants (remember: every surface around you right now is coated in a few molecular layers of water and hydrocarbon contamination), and van der Waals forces, among other things.  The reason my shoes squeak in some hallways but not others has to do with how the floors have been cleaned.  I could stop the squeaking by altering the bottom surface of my soles, though I wouldn't want to use a lubricant that is so effective that it seriously lowers \(\mu_{s}N\) and makes me slip.  

Friction is another example of an emergent phenomenon that is everywhere around us, of enormous technological and practical importance, and has some remarkable universality of response.  This kind of emergence is at the heart of the physics of materials, and trying to predict friction and squeaky shoes starting from elementary particle physics is just not do-able. 


Sunday, July 14, 2024

Brief items - light-driven diamagnetism, nuclear recoil, spin transport in VO2

Real life continues to make itself felt in various ways this summer (and that's not even an allusion to political madness), but here are three papers (two from others and a self-indulgent plug for our work) you might find interesting.

  • There has been a lot of work in recent years, particularly by the group of Andrea Cavalleri, in which they use infrared light to pump particular vibrational modes in copper oxide superconductors (and other materials) (e.g. here).  There are long-standing correlations between the critical temperature for superconductivity, \(T_{c}\), and certain bond angles in the cuprates.  Broadly speaking, using time-resolved spectroscopy, measurements of the optical conductivity in these pumped systems show superconductor-like forms as a function of energy even well above the equilibrium \(T_{c}\), making it tempting to argue that the driven systems are showing nonequilibrium superconductivity.  At the same time, there has been a lot of interest in looking for other signatures, such as signs of the way superconductors expel magnetic flux through the famous Meissner effect.  In this recent result (arXiv here, Nature here), magneto-optic measurements in this same driven regime show signs of field build-up around the perimeter of the driven cuprate material in a magnetic field, as would be expected from Meissner-like flux expulsion.  I haven't had time to read this in detail, but it looks quite exciting.  
  • Optical trapping of nanoparticles is a very useful tool, and with modern techniques it is possible to measure the position and response of individual trapped particles to high precision (see here and here).  In this recent paper, the group of David Moore at Yale has been able to observe the recoil of such a particle due to the decay of a single atomic nucleus (which spits out an energetic alpha particle).  As an experimentalist, I find this extremely impressive, in that they are measuring the kick given to a nanoparticle a trillion times more massive than the ejected helium nucleus.  
  • From our group, we have published a lengthy study (arXiv here, Phys Rev B here) of local/longitudinal spin Seebeck response in VO2, a material with an insulating state that is thought to be magnetically inert.  This corroborates our earlier work, discussed here.  In brief, in ideal low-T VO2, the vanadium atoms are paired up into dimers, and the expectation is that the unpaired 3d electrons on those atoms form singlets with zero net angular momentum.  The resulting material would then not be magnetically interesting (though it could support triplet excitations called triplons).  Surprisingly, at low temperatures we find a robust spin Seebeck response, comparable to what is observed in ordered insulating magnets like yttrium iron garnet.  It seems to have the wrong sign to be from triplons, and it doesn't seem possible to explain the details using a purely interfacial model.  I think this is intriguing, and I hope other people take notice.
Hoping for more time to write as the summer progresses.  Suggestions for topics are always welcome, though I may not be able to get to everything.

Saturday, July 06, 2024

What is a Wigner crystal?

Last week I was at the every-2-years Gordon Research Conference on Correlated Electron Systems at lovely Mt. Holyoke.  It was very fun, but one key aspect of the culture of the GRCs is that attendees are not supposed to post about them on social media, thus encouraging presenters to show results that have not yet been published.  So, no round up from me, except to say that I think I learned a lot.

The topic of Wigner crystals came up, and I realized that (at least according to google) I have not really written about these, and now seems to be a good time.

First, let's talk about crystals in general.  If you bring together an ensemble of objects (let's assume they're identical for now) and throw in either some long-range attraction or an overall confining constraint, plus a repulsive interaction that is effective at short range, you tend to get formation of a crystal, if an object's kinetic energy is sufficiently small compared to the interactions.  A couple of my favorite examples of this are crystals from drought balls and bubble rafts.  As the kinetic energy (usually parametrized by a temperature when we're talking about atoms and molecules as the objects) is reduced, the system crystallizes, spontaneously breaking continuous translational and rotational symmetry, leading to configurations with discrete translational and rotational symmetry.  With charged colloidal particles as building blocks, the attractive interaction is electrostatic, because the particles have different charges, and they have the usual "hard core repulsion".  The result can be all kinds of cool colloidal crystal structures.

In 1934, Eugene Wigner considered whether electrons themselves could form a crystal, if the electron-electron repulsion is sufficiently large compared to their kinetic energy.  For a cold quantum mechanical electron gas, where the kinetic energy is related to the Fermi energy of the electrons, the essential dimensionless parameter here is \(r_{s}\), the Wigner-Seitz radius, basically the average inter-electron spacing measured in units of the effective Bohr radius.  Serious calculations have shown that you should get a Wigner crystal for electrons in 2D if \(r_{s} \gtrsim 31\).  (You can also have a "classical" Wigner crystal, when the electron kinetic energy is set by the temperature rather than quantum degeneracy; an example of this situation is electrons floating on the surface of liquid helium.)
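For a sense of scale, here's the standard textbook estimate of \(r_{s}\) for a 2D electron gas (the GaAs material parameters are standard values; the densities are just illustrative):

```python
# Standard 2D estimate: pi * (r_s * a_B)^2 = 1/n, with a_B the effective
# Bohr radius. GaAs parameters are textbook values; densities are illustrative.
import numpy as np

def r_s_2d(n_cm2, eps_r, m_star):
    """Wigner-Seitz r_s for a 2D electron gas; n in cm^-2."""
    a_B = 0.529e-10 * eps_r / m_star   # effective Bohr radius in meters
    n = n_cm2 * 1e4                    # areal density in m^-2
    return 1.0 / (a_B * np.sqrt(np.pi * n))

print(r_s_2d(1e10, 12.9, 0.067))   # ~5.5: a typical GaAs 2DEG is still a liquid
print(r_s_2d(3e8, 12.9, 0.067))    # ~32: dilute enough for Wigner crystallization
```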

Historically, observing Wigner crystals in experiments has been very challenging.  In ultraclean 2D electron gases in GaAs/AlGaAs structures, signatures include "pinning" of the insulating 2D electronic crystal on residual disorder, leading to nonlinear conduction at the onset of "sliding"; features in microwave absorption corresponding to melting of the crystal; changes in capacitance/screening, etc.  Large magnetic fields can be helpful in bringing about Wigner crystallization (tending to confine electronic wavefunctions, and quenching the kinetic energy by forcing the electrons into Landau levels).  

In recent years, 2D materials and advances in scanning tunneling microscopy (STM) have led to a lot of progress in imaging Wigner crystals.  One representative paper is this, in which the moiré potential in a bilayer system helps by flattening the bands and therefore reducing the kinetic energy.  Another example is this paper from April, looking at Wigner crystals at high magnetic field in Bernal-stacked bilayer graphene.   One aspect of these experiments that I find amazing is that the STM doesn't melt the crystals, since it's either injecting or removing charge throughout the imaging process.  The crystals are somehow stable enough that any removed electron gets rapidly replaced without screwing up the spatial order.  Very cool.


Saturday, June 22, 2024

What is turbulence? (And why are helicopters never quiet?)

Fluid mechanics is very often left out of the undergraduate physics curriculum.  This is a shame, as it's very interesting and directly relevant to many broad topics (atmospheric science, climate, plasma physics, parts of astrophysics).  Fluid mechanics is a great example of how it is possible to have comparatively simple underlying equations and absurdly complex solutions, and that's probably part of the issue.  The space of solutions can be mapped out using dimensionless ratios, and two of the most important are the Mach number (\(\mathrm{Ma} \equiv u/c_{s}\), where \(u\) is the speed of some flow or object, and \(c_{s}\) is the speed of sound) and the Reynolds number (\(\mathrm{Re} \equiv \rho u d/\mu\), where \(\rho\) is the fluid's mass density, \(d\) is some length scale, and \(\mu\) is the viscosity of the fluid). 

From Laurence Kedward, wikimedia commons

There is a nice physical interpretation of the Reynolds number.  It can be rewritten as \(\mathrm{Re} = (\rho u^{2})/(\mu u/d)\).  The numerator is the "dynamic pressure" of a fluid, the force per unit area that would be transferred to some object if a fluid of density \(\rho\) moving at speed \(u\) ran into the object and was brought to a halt.  This is in a sense the consequence of the inertia of the moving fluid, so this is sometimes called an inertial force.  The denominator, the viscosity multiplied by a velocity gradient, is the viscous shear stress (force per unit area) caused by the frictional drag of the fluid.  So, the Reynolds number is a ratio of inertial forces to viscous forces.  

When \(\mathrm{Re}\ll 1\), viscous forces dominate.  That means that viscous friction between adjacent layers of fluid tends to smooth out velocity gradients, and the velocity field \(\mathbf{u}(\mathbf{r},t)\) tends to be simple and often analytically solvable.  This regime is called laminar flow.  Since \(d\) is just some characteristic size scale, small devices at modest speeds have small \(\mathrm{Re}\); for reasonable values of the density and viscosity of, say, water, microfluidic devices live deep in the laminar regime.
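As a quick sanity check on those regimes, here are my numbers plugged straight into the definition:

```python
def reynolds(rho, u, d, mu):
    """Re = inertial stress / viscous stress = rho*u*d/mu (SI units)."""
    return rho * u * d / mu

rho_w, mu_w = 1.0e3, 1.0e-3   # water: kg/m^3, Pa*s

# 50 micron microfluidic channel at 1 mm/s: deep in the laminar regime
print(reynolds(rho_w, 1e-3, 50e-6, mu_w))   # Re = 0.05

# A person swimming (~1 m scale, ~1 m/s): strongly turbulent
print(reynolds(rho_w, 1.0, 1.0, mu_w))      # Re = 1e6
```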

When \(\mathrm{Re}\gg 1\), frictional effects are comparatively unimportant, and the fluid "pushes" its way along.  The result is a situation where the velocity field is unstable to small perturbations, and there is a transition to turbulent flow.  The local velocity field has big, chaotic variations as a function of space and time.  While the microscopic details of \(\mathbf{u}(\mathbf{r},t)\) are often not predictable, on a statistical level we can get pretty far since mass conservation and momentum conservation can be applied to a region of space (the control volume or Eulerian approach).

Turbulent flow involves a cascade of energy down through eddies at ever smaller length scales, terminating at the (Kolmogorov) dissipation scale, where viscosity converts the kinetic energy of the flow into heat.   This right here is why helicopters are never quiet.  Even if you started with a completely uniform downward flow of air below the rotor (enough momentum flux to support the weight of the helicopter), the air would quickly transition to turbulence, and there would be pressure fluctuations over a huge range of timescales that would translate into acoustic noise.  You might not be able to hear the turbine engine directly from a thousand feet away, but you can hear the sound of the resulting turbulent airflow.
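To put a rough scale on where that cascade bottoms out, here's a back-of-the-envelope estimate of mine using the standard Kolmogorov scaling \(\eta \sim (\nu^{3}/\epsilon)^{1/4}\), with the dissipation rate per unit mass estimated as \(\epsilon \sim u^{3}/d\); the downwash speed and rotor size below are just plausible guesses:

```python
nu_air = 1.5e-5    # kinematic viscosity of air, m^2/s

# Hypothetical rotor downwash - guesses chosen only to set a scale
u, d = 25.0, 10.0  # downwash speed (m/s), rotor diameter (m)
eps = u**3 / d     # energy dissipation rate per unit mass, m^2/s^3

eta = (nu_air**3 / eps) ** 0.25   # Kolmogorov dissipation scale
print(f"eta ~ {eta * 1e6:.0f} microns")   # a few tens of microns
```

Tens of microns is small, but still about three orders of magnitude above the ~70 nm mean free path of air molecules: the cascade ends where viscosity takes over, well before molecular scales.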

If you're interested in fluid mechanics, this site is fantastic, and their links page has some great stuff.

Friday, June 14, 2024

Artificial intelligence, extrapolation, and physical constraints

Disclaimer and disclosure:  The "arrogant physicist declaims about some topic far outside their domain expertise (like climate change or epidemiology or economics or geopolitics or....) like everyone actually in the field is clueless" trope is very overplayed at this point, and I've generally tried to avoid doing this.  Still, I read something related to AI earlier this week, and I wanted to write about it.  So, fair warning: I am not an expert about AI, machine learning, or computer science, but I wanted to pass this along and share some thoughts.  Feel even more free than usual to skip this and/or dismiss my views.

This is the series of essays, and here is a link to the whole thing in one pdf file.  The author works for OpenAI.  I learned about this from Scott Aaronson's blog (this post), which is always informative.

In a nutshell, the author basically says that he is one of a quite small group of people who really know the status of AI development; that we are within a couple of years of the development of artificial general intelligence; that this will lead essentially to an AI singularity as AGI writes ever-smarter versions of AGI; that the world at large is sleepwalking toward this and its inherent risks; and that it's essential that western democracies have the lead here, because it would be an unmitigated disaster if authoritarians in general and the Chinese government in particular should take the lead - if one believes in extrapolating exponential progressions, then losing the initiative rapidly translates into being hopelessly behind forever.

I am deeply skeptical of many aspects of this (in part because of the dangers of extrapolating exponentials), but it is certainly thought-provoking.
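Here's a toy example of the extrapolation problem (mine, not the author's): assume "capability" grows exponentially and compare two doubling times that look nearly indistinguishable over a single year.

```python
def extrapolate(doubling_time_yrs, horizon_yrs):
    """Relative growth after horizon_yrs of pure exponential growth."""
    return 2.0 ** (horizon_yrs / doubling_time_yrs)

# Doubling every 6 months vs. every 9 months: similar over one year...
for T in (0.5, 0.75):
    print(f"T = {T}: x{extrapolate(T, 1):.1f} in 1 yr, "
          f"x{extrapolate(T, 5):.0f} in 5 yrs")
# ...but the five-year forecasts differ by an order of magnitude, and
# neither accounts for whether the chips, power, and capital actually exist.
```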

I doubt that we are two years away from AGI.  Indeed, I wonder if our current approaches are somewhat analogous to Ptolemaic epicycles.  It is possible in principle to construct extraordinarily complex epicyclic systems that can reproduce the motions of the planets to high precision, but actual Newtonian orbital mechanics is radically more compact, efficient, and conceptually unified.  Current implementations of AI systems use enormous numbers of circuit elements that consume tens to hundreds of MW of electricity.  In contrast, your brain hosts a human-level intelligence, consumes about 20 W, and masses about 1.4 kg.  I just wonder whether our current architectural approach is really the optimal route toward AGI.  (Of course, a lot of people are researching neuromorphic computing, so maybe that will resolve itself.)

The author also seems to assume that whatever physical resources are needed for rapid exponential progress in AI will become available: huge numbers of GPUs will be made, and the electrical generating capacity and all associated resources will be there.  That's not obvious to me at all.  You can't just declare that vastly more generating capacity will be available in three years - siting and constructing GW-scale power plants alone takes years.  TSMC is about as highly motivated as possible to build its new facilities in Arizona, and the first one has taken three years so far, with the second one likely delayed until 2028.  Actual construction and manufacturing at scale cannot be trivially waved away.

I do think that AI research has the potential to be enormously disruptive.  It also seems that if a big corporation or nation-state thought that they could gain a commanding advantage by deploying something even if it's half-baked and the long-term consequences are unknown, they will 100% do it.  I'd be shocked if the large financial companies aren't already doing this in some form.  I also agree that broadly speaking as a species we are unprepared for the consequences of this research, good and bad.  Hopefully we will stumble forward in a way where we don't do insanely stupid things (like putting the WOPR in charge of the missiles without humans in the loop).   

Ok, enough of my uninformed digression.  Back to physics soon.

Update:  this is a fun, contrasting view by someone who definitely disagrees with Aschenbrenner about the imminence of AGI.

Sunday, June 02, 2024

Materials families: Halide perovskites

Looking back, I realized that I haven't written much about halide perovskites, which is quite an oversight given how much research impact they're having.  I'm not an expert, and there are multiple extensive review articles out there (e.g. here, here, here, here, here), so this will only be a very broad-strokes intro, trying to give some context for why these systems are important and remarkable, and why they may have plenty of additional tricks to play.

From ACS Energy Lett. 5, 2, 604–610 (2020).

Perovskites are a class of crystals based on a structural motif (an example is ABX\(_{3}\), originally identified in the mineral CaTiO\(_{3}\), though there are others) involving octahedrally coordinated metal atoms.  As shown in the figure, each B atom sits at the center of an octahedron defined by six X atoms.  There are many flavors of purely inorganic perovskites, including the copper oxide superconductors and various piezoelectric and ferroelectric oxides.

The big excitement in recent years, though, involves the halide perovskites, in which the X atom is a halide (Cl, Br, or I) and the B atom is most often Pb or Sn.  These materials are quite ionic, in the sense that the B atom is in the 2+ oxidation state, the X atom is in the 1- oxidation state, and whatever occupies the A site is in the 1+ oxidation state (whether that's Cs\(^{+}\) or a molecular ion like methylammonium (MA = [CH\(_{3}\)NH\(_{3}\)]\(^{+}\)) or formamidinium (FA = [HC(NH\(_{2}\))\(_{2}\)]\(^{+}\))).
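(A quantitative aside of my own, since it isn't spelled out above: the standard back-of-the-envelope criterion for whether a given A/B/X combination forms a stable 3D perovskite is the Goldschmidt tolerance factor, \(t = (r_{A} + r_{X})/(\sqrt{2}\,(r_{B} + r_{X}))\), which should fall roughly between 0.8 and 1.0.  A quick sketch with commonly quoted effective ionic radii shows why Cs\(^{+}\), MA, and FA are the A-site ions of choice.)

```python
import math

def tolerance_factor(r_A, r_B, r_X):
    """Goldschmidt tolerance factor for an ABX3 perovskite (radii in pm)."""
    return (r_A + r_X) / (math.sqrt(2) * (r_B + r_X))

# Commonly quoted effective ionic radii in pm; the molecular-ion values
# are literature estimates and inherently approximate
radii_A = {"Cs+": 188, "MA+": 217, "FA+": 253}
r_Pb, r_I = 119, 220

for ion, r_A in radii_A.items():
    print(f"{ion} + PbI3: t = {tolerance_factor(r_A, r_Pb, r_I):.2f}")
# All three land in the ~0.8-1.0 window (t = 0.85, 0.91, 0.99)
```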

From Chem. Rev. 123, 13, 8154–8231 (2023).

There is an enormous zoo of materials based on these building blocks, made even richer by the capability of organic chemists to toss in various small organic, covalent ligands to alter spacings between the components (and hence electronic overlap and bandwidths), tilt or rotate the octahedra, add in chirality, etc.  Forms that are 3D, effectively 2D (layers of corner-sharing octahedra), 1D, and 0D (with isolated octahedra) all exist.  Remarkably:

  • These materials can be processed in solution form, and it's possible to cast highly crystalline films.
  • Despite the highly ionic character of much of the bonding, many of these materials are semiconductors, with bandgaps in the visible (see the quick conversion after this list).
  • Despite the differences in what chemists and semiconductor physicists usually mean by "pure", these materials can be sufficiently clean and free of the wrong kinds of defects that it is possible to make solar cells with efficiencies greater than 26% (!) (and very bright light-emitting diodes).
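As a sanity check on "bandgaps in the visible" (my own aside, using approximate literature gaps for the MAPbX\(_{3}\) series and \(\lambda \approx 1240~\mathrm{nm \cdot eV}/E_{g}\)):

```python
# Approximate room-temperature bandgaps (eV) from the literature
gaps = {"MAPbI3": 1.6, "MAPbBr3": 2.3, "MAPbCl3": 3.0}

for name, E_g in gaps.items():
    print(f"{name}: {1240 / E_g:.0f} nm")
# ~775, ~540, ~415 nm: red/near-IR through violet, tuned by the halide
```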
These features make the halide perovskites extremely attractive for possible applications, especially in photovoltaics and potentially as light sources (even quantum emitters).  They seem much more forgiving than most organic semiconductors: they have higher carrier mobilities, are less vulnerable to disorder, and have a larger dielectric polarizability (and hence a smaller exciton binding energy and easier charge extraction).  The halide perovskites do face some serious challenges (chemical stability under UV illumination and air/moisture exposure; the unpleasantness of Pb), but their promise is enormous.

Sometimes nature seems to provide materials with particularly convenient properties.  Examples include water and the fact that ordinary ice is less dense than the liquid form; silicon and its outstanding oxide; and gallium arsenide and the fact that it can be grown with great purity and stoichiometry even in an extremely As-rich environment.  I'm sure commenters can provide many more.  The halide perovskites seem to be another addition to this catalog, and as material properties continue to improve, condensed matter physicists are going to be looking for interesting things to do in these systems.