
Sunday, January 25, 2026

What is superconductivity?

A friend pointed out that, while I've written many posts that have to do with superconductivity, I've never really done a concept post about it.  Here's a try, as I attempt to distract myself from so many things happening these days.

The superconducting state is a truly remarkable phase of matter that is hosted in many metals (though ironically not readily in Au, Ag, and Cu, the pure elements that are the best ordinary conductors of electricity - see here for some references).  First, some definitional/phenomenological points:

  • The superconducting state is a distinct thermodynamic phase.  In the language of phase transitions developed by Ginzburg and Landau back in the 1950s, the superconducting state has an order parameter that is nonzero, compared to the non-superconducting metal state.   When you cool down a metal and it becomes a superconductor, this really is analogous (in some ways) to when you cool down liquid water and it becomes ice, or (a better comparison) when you cool down very hot solid iron and it becomes a magnet below 770 °C.
  • In the superconducting state, at DC, current can flow with zero electrical resistance.  Experimentally, this can be checked by setting up a superconducting current loop and monitoring the current via the magnetic field it produces.  If you find that the current will decay over somewhere between \(10^5\) and \(\infty\) years, that's pretty convincing that the resistance is darn close to zero. 
  • This is not just "perfect" conduction.  If you placed a conductor in a magnetic field, turned on perfect conduction, and then tried to change the magnetic field, currents would develop that would preserve the amount of magnetic flux through the perfect conductor.  In contrast, a key signature of superconductivity is the Meissner-Ochsenfeld effect:  if superconductivity is turned on in the presence of a (sufficiently small) magnetic field, currents will develop spontaneously at the surface of the material to exclude all magnetic flux from the bulk of the superconductor.  (That is, the magnetic field from the currents will be oppositely directed to the external field and of just the right size and distribution to give \(\mathbf{B}=0\) in the bulk of the material.)  Observation of the bulk Meissner effect is among the strongest evidence for true superconductivity, much more robust than a measurement that seems to indicate zero voltage drop.  Indeed, as a friend of mine pointed out to me, a one-phrase description of a superconductor is "a perfect diamagnet".  
  • There are two main types of superconductors, uncreatively termed "Type I" and "Type II".  In Type I superconductors, an external \(\mathbf{H} = \mathbf{B}/\mu_{0}\) fails to penetrate the bulk of the material until it reaches a critical field \(H_{c}\), at which point the superconducting state is suppressed completely.  In a Type II superconductor, above some lower critical field \(H_{c,1}\) magnetic flux begins to penetrate the material in the form of vortices, each of which has a non-superconducting ("normal") core.  Above an upper critical field \(H_{c,2}\), superconductivity is suppressed. 
  • Interestingly, a lot of this can be "explained" by the London Equations, which were introduced in the 1930s despite a complete lack of a viable microscopic theory of superconductivity.
  • Magnetic flux through a conventional superconducting ring (or through a vortex core) is quantized precisely in units of \(h/2e\), where \(h\) is Planck's constant and \(e\) is the electronic charge.  
  • (It's worth noting that in magnetic fields and with AC currents, there are still electrical losses in superconductors, due in part to the motion of vortices.)
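To put rough numbers on two of the points above, here's a quick Python sketch computing the flux quantum \(h/2e\) and the London penetration depth, the length scale over which the Meissner screening currents decay into the bulk.  The superfluid density \(n_s\) below is an illustrative order-of-magnitude value, not a fit to any particular material:

```python
import math

# CODATA constants, SI units
h = 6.62607015e-34       # Planck's constant, J s
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
mu_0 = 1.25663706212e-6  # vacuum permeability, T m / A

# Flux quantum: the 2e is a clue that the carriers are electron pairs.
phi_0 = h / (2 * e)
print(f"flux quantum h/2e = {phi_0:.3e} Wb")  # ~2.07e-15 Wb

# London penetration depth from the London equations,
# lambda_L = sqrt(m / (mu_0 n_s e^2)).
n_s = 1e28  # superfluid electron density, m^-3 (order of magnitude only)
lambda_L = math.sqrt(m_e / (mu_0 * n_s * e**2))
print(f"London penetration depth ~ {lambda_L * 1e9:.0f} nm")
```

Measured penetration depths in conventional superconductors really are tens to hundreds of nanometers, so the scale comes out sensibly.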
Physically, what is the superconducting state?  Why does it happen and why does it have the weird properties described above as well as others?  There are literally entire textbooks and semester-long courses on this, so what follows is very brief and non-authoritative.  
  • In an ordinary metal at low temperatures, neglecting e-e interactions and other complications, the electrons fill up states (because of the Pauli Principle) starting from the lowest energy up to some highest value, the Fermi energy.  (See here for some mention of this.)   Empty electronic states are available at essentially no energy cost - excitations of electrons from filled states to empty states are "gapless".  
  • Electrical conduction takes place through the flow of these electronic quasiparticles.   (For more technical readers:  We can think of these quasiparticles like little wavepackets, and as each one propagates around the wavepacket accumulates a certain amount of phase.  The phases of different quasiparticles are arbitrary, but the change in the phase going around some trajectory is well defined.)
  • In a superconductor, there is some effective attractive interaction between electrons that we have thus far neglected.  In conventional superconductors, this involves lattice vibrations (as in this wikipedia description), though other attractive interactions are possible.  At sufficiently low temperatures, the ordinary metal state is unstable, and the system will spontaneously form pairs of electrons (or holes).  Those pairs then condense into a single coherent state described by an amplitude \(|\Psi|\) and a phase, \(\phi\), shared by all the pairs.  The conventional theory of this was formulated by Bardeen, Cooper, and Schrieffer in 1957.  A couple of nice lecture note presentations of this are here (courtesy Yuval Oreg) and here (courtesy Dan Arovas), if you want the technical details.  This leads to an energy gap that characterizes how much it costs to create individual quasiparticles.  Conduction in a superconductor takes place through the flow of pairs.  (A clue to this is the appearance of the \(2e\) in the flux quantization.)
  • This taking on of a global phase for the pairs of electrons is a spontaneous breaking of gauge symmetry - this is discussed pedagogically for physics students here.  Understanding this led to figuring out the Anderson-Higgs mechanism, btw. 
  • The result is a state with a kind of rigidity; precisely how this leads to the phenomenology of superconductivity is not immediately obvious, to me anyway.  If someone has a link to a great description of this, please put it in the comments.  (Interestingly google gemini is not too bad at discussing this.)
  • The existence of this global phase is hugely important, because it's the basis for the Josephson effect(s), which in turn underpin exquisite magnetic field sensing, all the superconducting approaches to quantum information, the definition of the volt, and more.
  • The paired charge carriers are described by a pairing symmetry of their wave functions in real space.  In conventional BCS superconductors, each pair has no orbital angular momentum ("\(s\)-wave"), and the spins are in a singlet state.  In other superconductors, pairs can have \(l = 1\) orbital angular momentum ("\(p\)-wave", with spins in the triplet configuration), \(l = 2\) orbital angular momentum ("\(d\)-wave", with spins in a singlet again), etc.  The pairing state determines whether the energy gap is directionally uniform (\(s\)-wave) or whether there are directions ("nodes") along which the gap goes to zero.  
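For a sense of the energy scales involved: in the weak-coupling BCS limit, the zero-temperature gap and the transition temperature are tied together by \(2\Delta(0) \approx 3.52\, k_{B} T_{c}\).  A minimal sketch, using niobium's familiar \(T_c \approx 9.3\) K as the example:

```python
k_B = 1.380649e-23    # Boltzmann constant, J/K
eV = 1.602176634e-19  # joules per eV

def bcs_gap_meV(T_c):
    """Zero-temperature gap from the BCS weak-coupling ratio
    Delta(0) = 1.764 k_B T_c, returned in meV."""
    return 1.764 * k_B * T_c / eV * 1e3

gap_Nb = bcs_gap_meV(9.3)  # niobium, T_c ~ 9.3 K
print(f"Delta(0) for Nb ~ {gap_Nb:.2f} meV")  # ~1.4 meV
```

Real materials deviate from the weak-coupling ratio (strong-coupling superconductors like Pb have \(2\Delta(0)/k_{B}T_{c}\) noticeably above 3.52), but it's a useful rule of thumb for the meV energy scales of conventional superconductivity.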
I have necessarily left out a ton here.  Superconductivity continues to be both technologically critical and scientifically fascinating.  One major challenge in understanding the microscopic mechanisms behind particular superconductors is that the superconducting state itself is in a sense generic - many of its properties (like phase rigidity) are emergent regardless of the underlying microscopic picture, which is amazing.

One other point, added after initial posting. In quantum computing approaches, a major challenge is how to build robust effective ("logical") qubits from individual physical qubits that are not perfect (meaning that they suffer from environmental decoherence among other issues).  The phase coherence of electronic quasiparticles in ordinary metals is generally quite fragile; inelastic interactions with each other, with phonons, with impurity spins, etc. can all lead to decoherence.  However, starting from those ingredients, superconductivity shows that it is possible to construct, spontaneously, a collective state with very long-lived coherence.  I'm certain I'm not the first to wonder about whether there are lessons to be drawn here in terms of the feasibility of and approaches to quantum error correction.

Sunday, January 11, 2026

What is the Kondo effect?

The Kondo effect is a neat piece of physics, an archetype of a problem involving strong electronic correlations and entanglement, with a long and interesting history and connections to bulk materials, nanostructures, and important open problems.  

First, some stage setting.  In the late 19th century, with the development of statistical physics and the kinetic theory of gases, and the subsequent discovery of electrons by JJ Thomson, it was a natural idea to try modeling the electrons in solids as a gas, as done by Paul Drude in 1900.  Being classical, the Drude model misses a lot (If all solids contain electrons, why aren't all solids metals?  Why is the specific heat of metals orders of magnitude lower than what a classical electron gas would imply?), but it does introduce the idea of electrons as having an elastic mean free path, a typical distance traveled before scattering off something (an impurity? a defect?) into a random direction.  In the Drude picture, as \(T \rightarrow 0\), the only thing left to scatter charge carriers is disorder ("dirt"), and the resistivity of a conductor falls monotonically and approaches \(\rho_{0}\), the "residual resistivity", a constant set in part by the number of defects or impurities in the material.  In the semiclassical Sommerfeld model, and later in the nearly free electron model, this idea survives.
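As a concrete example of the Drude picture, here's a back-of-the-envelope estimate of the scattering time and elastic mean free path in copper at room temperature, using standard textbook values (note that the Fermi velocity is a quantum-mechanical input the classical model doesn't actually supply):

```python
# Drude estimate for copper at room temperature, textbook-typical values.
sigma = 5.96e7          # conductivity of Cu, S/m
n = 8.5e28              # conduction electron density of Cu, m^-3
e = 1.602176634e-19     # elementary charge, C
m_e = 9.1093837015e-31  # electron mass, kg
v_F = 1.57e6            # Fermi velocity of Cu, m/s (a quantum input)

tau = sigma * m_e / (n * e**2)  # from sigma = n e^2 tau / m
ell = v_F * tau                 # elastic mean free path
print(f"tau ~ {tau:.2e} s, mean free path ~ {ell * 1e9:.0f} nm")
```

The answer, a few tens of nanometers, is about a hundred lattice spacings, one hint that the electrons are not really scattering off every ion in a classical way.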

[Figure: resistivity growing at low \(T\) for gold with iron impurities.]
One small problem:  in the 1930s (once it was much easier to cool materials down to very low temperatures), it was noticed that in many experiments (here and here, for example) the electrical resistivity of metals did not seem to fall and then saturate at some \(\rho_{0}\).  Instead, as \(T \rightarrow 0\), \(\rho(T)\) would go through a minimum and then start increasing again, approximately like \(\delta \rho(T) \propto - \ln(T/T_{0})\), where \(T_{0}\) is some characteristic temperature scale.  This is weird and problematic, especially since the logarithm formally diverges as \(T \rightarrow 0\).   
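A toy model makes the minimum easy to see: take a residual term, a \(T^{5}\) phonon contribution, and a \(-\ln T\) upturn, i.e. \(\rho(T) = \rho_{0} + a T^{5} - c \ln(T/T_{0})\).  Setting \(d\rho/dT = 0\) gives a minimum at \(T_{\mathrm{min}} = (c/5a)^{1/5}\).  The coefficients below are purely illustrative, chosen only to put the minimum at a few kelvin:

```python
import math

# Illustrative coefficients only - not a fit to any real material.
rho_0, a, c = 1.0, 1e-4, 0.05

def rho(T):
    # residual + phonon (T^5) + Kondo upturn (-c ln T, up to a constant)
    return rho_0 + a * T**5 - c * math.log(T)

# Analytic minimum: d(rho)/dT = 5 a T^4 - c/T = 0  =>  T = (c / 5a)^(1/5)
T_min = (c / (5 * a)) ** 0.2
print(f"resistivity minimum at T ~ {T_min:.2f} (arbitrary units)")  # ~2.5
```

Note that because the competition is between a steep \(T^{5}\) rise and a gentle logarithm, the minimum is broad and shallow, which is just how it looks in the data.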

Over time, it became clear that this phenomenon was associated with magnetic impurities, atoms that have unpaired electrons typically in \(d\) orbitals, implying that somehow the spin of the electrons was playing an important role in the scattering process.  In 1964, Jun Kondo performed the definitive perturbative treatment of this problem, getting the \(\ln T\) divergence.  

[Side note: many students learning physics are at least initially deeply uncomfortable with the idea of approximations (that many problems can't be solved analytically and exactly, so we need to take limiting cases and make controlled approximations, like series expansions).  What if a series somehow doesn't converge?  This is that situation.]

The Kondo problem is a particular example of a "quantum impurity problem", and it is a particular limiting case of the Anderson impurity model.  Physically, what is going on here?  A conduction electron from the host metal could sit on the impurity atom, matching up with the unpaired impurity electron.  However (much as we can often get away with ignoring it) like charges repel, and it is energetically very expensive (modeled by some "on-site" repulsive energy \(U\)) to do that.  Parking that conduction electron long-term is not allowed, but a virtual process can take place, whereby a conduction electron with spin opposite to the localized moment can (in a sense) pop on there and back off, or swap places with the localized electron.  The Pauli principle enforces this opposed spin restriction, leading to entanglement between the local electron and the conduction electron as they form a singlet.  Moreover, this process generally involves conduction electrons at the Fermi surface of the metal, so it is a strongly interacting many-body problem.  As the temperature is reduced, this process becomes increasingly important, so that the impurity's cross section for scattering conduction electrons grows as \(T\) falls, causing the resistivity increase.  

[Figure: Top: cartoon of the Kondo scattering process. Bottom: the ground state is a many-body singlet between the local moment and the conduction electrons.]

The eventual \(T = 0\) ground state of this system is a many-body singlet, with the localized spin entangled with a "Kondo cloud" of conduction electrons.  The roughly \(\ln T\) resistivity correction rolls over and saturates.   There ends up being a sharp peak (resonance) in the electronic density of states right at the Fermi energy.  Interestingly, this problem actually can be solved exactly and analytically (!), as was done by Natan Andrei in this paper in 1980 and reviewed here.  
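The crossover scale between the \(\ln T\) growth and the low-temperature saturation is the Kondo temperature \(T_{K}\) (essentially the \(T_{0}\) in the logarithm).  One common weak-coupling estimate, with prefactors and conventions that vary from book to book, is \(k_{B} T_{K} \sim D\, e^{-1/2J\rho_{0}}\), where \(D\) is the conduction bandwidth, \(\rho_{0}\) the density of states at the Fermi energy, and \(J\) the antiferromagnetic exchange.  The numbers below are illustrative only:

```python
import math

D_over_kB = 1.0e4  # bandwidth expressed in kelvin (~1 eV), illustrative
g = 0.07           # dimensionless coupling rho_0 * J, illustrative

T_K = D_over_kB * math.exp(-1.0 / (2.0 * g))
print(f"T_K ~ {T_K:.1f} K")

# The exponential makes T_K wildly sensitive to the coupling strength,
# which is why Kondo temperatures range from millikelvin to hundreds of K:
for g_trial in (0.05, 0.07, 0.10):
    print(f"g = {g_trial}: T_K ~ {D_over_kB * math.exp(-1 / (2 * g_trial)):.2f} K")
```

That nonanalytic \(e^{-1/2J\rho_{0}}\) dependence is also why no finite-order perturbation theory in \(J\) could ever have produced \(T_{K}\) directly.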

This might seem to be the end of the story, but the Kondo problem has a long reach!  With the development of the scanning tunneling microscope, it became possible to see Kondo resonances associated with individual magnetic impurities (see here).  In semiconductor quantum dot devices, if the little dot has an odd number of electrons, then it can form a Kondo resonance that spans from the source electrode through the dot and into the drain electrode.  This leads to a peak in the conductance that grows and saturates as \(T \rightarrow 0\) because it involves forward scattering.  (See here and here).  The same can happen in single-molecule transistors (see here, here, here, and a review here).  Zero-bias peaks in the conductance from Kondo-ish physics can be a confounding effect when looking for other physics.

Of course, one can also have a material where there isn't a small sprinkling of magnetic impurities, but a regular lattice of spin-hosting atoms as well as conduction electrons.  This can lead to heavy fermion systems, or Kondo insulators, and more exotic situations.   

The depth of physics that can come out of such simple ingredients is one reason why the physics of materials is so interesting.  

Sunday, January 04, 2026

Updated: CM/nano primer - 2026 edition

This is a compilation of posts related to some basic concepts of the physics of materials and nanoscale physics.  I realized the other day that I hadn't updated this since 2019, and therefore a substantial audience may not have seen these.  Wikipedia's physics entries have improved greatly over the years, but hopefully these are a complement that's useful to students and maybe some science writers.  Please let me know if there are other topics that you think would be important to include.  

What is temperature?
What is chemical potential?
What is mass?
Fundamental units and condensed matter

What are quasiparticles?
Quasiparticles and what is "real"
What is effective mass?
What is a phonon?
What is a plasmon?
What are magnons?
What are skyrmions?
What are excitons?
What is quantum coherence?
What are universal conductance fluctuations?
What is a quantum point contact?  What is quantized conductance?
What is tunneling?

What are steric interactions?
(effectively) What is the normal force?
What is disorder, to condensed matter physicists?
What is band theory?
What is a "valley"? 
What are quantum oscillations?
What is a metal?
What is a bad metal?  What is a strange metal?
What is a Tomonaga-Luttinger liquid?

What is a crystal?

Saturday, January 03, 2026

What are dislocations?

How do crystalline materials deform?  When you try to shear or stretch a crystalline solid, in the elastic regime the atoms just slightly readjust their positions (at right).  The "spring constant" that determines the amount of deformation originates from the chemical bonds - how and to what extent the electrons are shared between the neighboring atoms.  In this elastic regime, if the applied stress is removed, the atoms return to their original positions.  Now imagine cranking up the applied stress.  In the "brittle" limit, eventually bonds rupture and the material fractures abruptly in a runaway process.  (You may never have thought about this, but crack propagation is a form of mechanochemistry, in that bonds are broken and other chemical processes then have to take place to make up for those changes.) 

In many materials, especially metals, rather than abruptly ripping apart, materials can deform plastically, so that even when the external stress is removed, the atoms remain displaced somehow.  The material has been deformed "irreversibly", meaning that the microscopic bonding of at least some of the atoms has been modified.  The mechanism here is the presence and propagation of defects in the crystal stacking called dislocations, the existence of which was deduced back in the 1930s, when people first came to appreciate that metals are generally far easier to deform than a simple calculation assuming perfect bonding would predict.    
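That gap between expectation and reality is enormous.  Frenkel's classic estimate for the ideal shear strength of a perfect crystal is \(\tau_{\mathrm{max}} \sim G/2\pi\), where \(G\) is the shear modulus.  Comparing that to a typical measured yield strength (textbook-ish numbers for annealed copper, quoted only for the orders of magnitude):

```python
import math

G = 48e9             # shear modulus of copper, Pa (textbook value)
tau_ideal = G / (2 * math.pi)  # Frenkel's perfect-lattice estimate
tau_measured = 50e6  # typical yield strength of annealed Cu, Pa (rough)

print(f"ideal shear strength ~ {tau_ideal / 1e9:.1f} GPa")    # several GPa
print(f"ideal / measured ~ {tau_ideal / tau_measured:.0f}x")  # >100x
```

Real metals yield at stresses orders of magnitude below the perfect-lattice estimate, and dislocations are what resolve the discrepancy.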

[Figure: (a) Edge dislocation, where the copper-colored spheres are an "extra" plane of atoms. (b) A (red) path enclosing the edge dislocation; the Burgers vector is shown with the black arrow. (c) A screw dislocation. (Images from …)]
Dislocations are topological line defects (as opposed to point defects like vacancies, impurities, or interstitials), characterized by a vector along the line of the defect, and a Burgers vector.  Imagine taking some number of lattice site steps going around a closed loop in a crystal plane of the material.   For example, in the \(x-y\) plane, you go 4 sites in the \(+x\) direction, 4 sites in the \(+y\) direction, 4 sites in the \(-x\) direction, and 4 sites in the \(-y\) direction.  If you ended up back where you started, then you have not enclosed a dislocation.  If you end up shifted sideways in the plane relative to your starting point, your path has enclosed an edge dislocation (see (a) and (b) to the right).  The Burgers vector connects the endpoint of the path with the beginning point of the path.  An edge dislocation is the end of an "extra" plane of atoms in a crystal (the orange atoms in (a)).  If you go around the path in the \(x-y\) plane and end up shifted out of the initial plane (so that the Burgers vector is pointing along \(z\), parallel to the dislocation line), your path enclosed a screw dislocation (see (c) in the figure).   Edge and screw dislocations are the two major classes of mobile dislocations.  There are also mixed dislocations, in which the dislocation line meanders around, so that displacements can look screw-like along some orientations of the line and edge-like along others.  (Here is some nice educational material on this, albeit dated in its web presentation.)  
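The circuit-counting construction above can be mimicked numerically.  The displacement field around a dislocation contains a multivalued piece proportional to the polar angle, \(u \sim (b/2\pi)\,\theta\); summing the displacement increments around a closed loop recovers the Burgers vector \(b\) if and only if the loop encloses the core.  This toy sketch (which keeps only that multivalued term and ignores the rest of the elastic solution) walks a square circuit around a core at the origin:

```python
import math

def wrap(d_theta):
    """Wrap an angle increment into (-pi, pi]."""
    return (d_theta + math.pi) % (2 * math.pi) - math.pi

b = 1.0  # Burgers vector magnitude, in units of the lattice spacing

# Counterclockwise square circuit around a dislocation core at the origin.
path = []
for t in [0.5 * i for i in range(12)]:
    path.append((3.0, -3.0 + t))   # right edge, going up
for t in [0.5 * i for i in range(12)]:
    path.append((3.0 - t, 3.0))    # top edge, going left
for t in [0.5 * i for i in range(12)]:
    path.append((-3.0, 3.0 - t))   # left edge, going down
for t in [0.5 * i for i in range(12)]:
    path.append((-3.0 + t, -3.0))  # bottom edge, going right

# Accumulate the multivalued part of the displacement, u = (b/2pi) theta,
# around the closed loop.  A loop not enclosing the core sums to zero;
# one that encloses it picks up exactly b.
angles = [math.atan2(y, x) for (x, y) in path]
closure = sum(wrap(angles[(i + 1) % len(angles)] - angles[i])
              for i in range(len(angles))) * b / (2 * math.pi)
print(f"closure failure (Burgers vector) = {closure:.6f}")
```

Shifting the loop so it no longer encloses the origin makes the sum vanish, which is the numerical statement that the dislocation is a topological defect: the answer depends only on whether the core is enclosed, not on the shape of the path.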

A few key points:
  • Mobile dislocations are the key to plastic deformation and the "low" yield strength of ductile materials compared to the ideal situation.  Edge dislocations propagate sideways along their Burgers vectors when shear stresses are applied to the plane in which the dislocation lies.  This is analogous to moving a rug across the floor by propagating a lump rather than trying to shift the entire rug at once.  Shearing the material by propagating an edge dislocation involves breaking and reforming bonds along the line, which is much cheaper energetically than breaking all the bonds in the shear plane at once.  To picture how a screw dislocation propagates in the presence of shear, imagine trying to tear a stack of paper.  (I was taught to picture tearing a phone book, which shows how ancient I am.)  
  • A dislocation is a great example of an emergent object.  Materials scientists and mechanical engineers interested in this talk about dislocations as entities that have positions, can move, and can interact.  One could describe everything in terms of the positions of the individual atoms in the solid, but it is often much more compact and helpful to think about dislocations as objects unto themselves. 
  • Dislocations can multiply under deformation.  Here is a low-tech but very clear video about one way this can happen, the Frank-Read source (more discussion here, and here is the original theory paper by Frank and Read).  In case you think this is just some hand-wavy theoretical idea, here is a transmission electron microscopy video showing one of these sources in action.
  • Dislocations are associated with local strain (and therefore stress). This is easiest for me to see in the end-on look at the edge dislocation in (a), where clearly there is compressive strain below where the "extra" orange plane of atoms starts, and tensile strain above there where the lattice is spreading to make room for that plane.   Because of these strain fields and the topological nature of dislocations, they can tangle with each other and hinder their propagation.  When this happens, a material becomes more difficult to deform plastically, a phenomenon called work hardening that you have seen if you've ever tried to break a paperclip by bending the metal back and forth.
  • Controlling the nucleation and pinning of dislocations is key to the engineering of tough, strong materials.  This paper is an example of this, where in a particular alloy, crystal rotation makes it possible to accommodate a lot of strain from dislocations in "kink bands". 




Friday, January 02, 2026

EUV lithography - a couple of quick links

Welcome to the new year!

I've written previously (see here, item #3) about the extreme ultraviolet lithography tools used in modern computer chip fabrication.   These machines are incredible, the size of a railway car, and cost hundreds of millions of dollars each.  Veritasium has put out a new video about these, which I will try to embed here.  Characteristically, it's excellent, and I wanted to bring it to your attention.


It remains an interesting question whether there could be a way of achieving this kind of EUV performance through an alternative path.  As I'd said a year ago, if you could do this for only $50M per machine, it would be hugely impactful.  

A related news item:  There are claims that a Chinese effort in Shenzhen has a prototype EUV machine now (that fills an entire factory floor, so not exactly compact or cheap).  It will be a fascinating industrial race if multiple players are able to make the capital investments needed to compete in this area.

Wednesday, December 24, 2025

Public comment period re US science, ends Dec 26

For those in the US:  Here is a link for submitting a public comment regarding OSTP's request for information about "Accelerating the American Scientific Enterprise".   The US government does this at some rate, requesting public feedback and input on policies under consideration or development.  Unfortunately, this public comment period ends on December 26.  

Here is the slightly longer version of what I submitted (I had to trim it back b/c of character limits).  It doesn't touch on everything in the RFI, but it's a start.  Many readers will likely disagree with what I wrote, and that's fine.  Submit your own comments.  Better that OSTP hear from a variety of perspectives.

Update:  I've had some people ask me what the point of this is, since the comments will likely not be read in a meaningful way.  I assume they will feed the comments through an AI tool and have staffers spot check ones that are flagged as interesting (by whatever criteria).  I know I'm old fashioned, but I still feel like silence is acquiescence edging toward complicity.  Saying something at least helped me organize my ideas a bit.

----

I appreciate the desire and need to improve the American scientific enterprise. Other rival nations are making enormous investments in basic and applied scientific research, and it is critical for US competitiveness that the US not flag or falter.

I have two overarching suggestions before delving into the specific questions posed in the RFI.  

First, it is of vital importance that the US scientific enterprise solicit advice and input from practicing scientists and engineers through advisory panels and committees.  In the last year, the US government research agencies have dissolved or disbanded a huge fraction of advisory bodies.  I think this is a mistake.  While there are always concerns about whether experts are too entrenched to be bold, across-the-board devaluing expertise is not the way to go.

Second, there needs to be coherence between priorities and financial allocations.  Complaining that the US is falling behind in critical research areas (e.g. advanced materials) while simultaneously slashing federal support of those research areas is both self-defeating and internally inconsistent.  There have been large scale cuts across many agencies in critical scientific areas, seemingly at odds with the stated policies about innovation and competitiveness.  

Regarding specific points from the RFI:

Encouraging public-private collaboration runs into the problem that existing economic incentives (the constant need for short-term returns) disfavor companies investing in research that doesn't have clear, nearly-immediate payoffs.  Norm Augustine has spoken publicly about how opening a research laboratory caused Lockheed Martin's stock to fall. (see here:  https://www.aaas.org/news/basic-research-needs-sustained-federal-investment-says-norman-augustine)  Moreover, since many very wealthy corporations (e.g. Tesla, 3M) already pay almost no federal taxes, it is challenging to encourage public-private partnerships through offering companies tax incentives.  The issue is less of a problem for small and startup companies, in part thanks to programs like SBIR and STTR.  Programs like ARPA-E and ARPA-H also are important to consider and expand.

The most financially well-off sectors, all of which depend critically on a spectrum of research from basic to the applied and near to long term, are the financial industry, energy, biomedical/pharma, and tech/semiconductors (especially tied into AI).  In the past, research consortia like International Sematech were very effective at guiding research of tremendous economic benefit.  Some state governments have had interesting and impactful programs (e.g., the Cancer Prevention and Research Institute of Texas, supported by state bonds).  The US could encourage the development of industrial consortia aligned to specific aspects of research; these could be supported financially by joint investment by the participating companies and special bond issues expressly for these topics, with the proviso that funds be used to support use-inspired basic and early-stage applied research.  It is critical that any such plan have actual oversight, to make sure that this does not become a giant give-away to the companies that have the best lobbyists.

In terms of supporting models for research that complement traditional university structures and enable projects that require vast resources, interdisciplinary coordination, or extended timelines:  There are already large-scale center/institute programs that have been deployed in the past by various federal agencies.  While not perfect, these are not a bad model - in the last year, these have suffered greatly due to budgetary uncertainties and outright cancellations.  You can't complain about failing to do research requiring extended timelines if funding is annualized and perpetually at risk.  One approach to dealing with this would be to endow such structures at the outset for multiyear support. Congress would have to be willing to do this, and given the level of impasse that seems very difficult, but with executive branch support it could be possible.  The short version: Don't complain about lack of long-term stickiness if the US government cannot be considered a reliable partner year over year.

In terms of identifying and developing scientific talent across the country:  the RFI mentions "leveraging digital tools", which sounds a lot like trying to use AI to identify people.  This is fraught with risk.  Programs such as the NSF graduate research fellowship, the NDSEG, the SCGSR, and others are essential for developing talent, as are many research experience for undergraduates programs and internship programs at national laboratories.  The recent cuts to these programs are short-sighted and counterproductive.  If we want the US to develop top technical talent and have people choose technical paths over "easier" trajectories, then we need to make it financially worthwhile for students to pursue these aims.  Federally supported undergraduate scholarships, graduate fellowships, policies that reward companies for providing internships, policies that, e.g., reduce endowment taxes if university resources are spent providing such support, etc. are all worth considering.  Again, an advisory task force could likely provide quantitative information about what strategies are likely to work best.

I could go on, but these are a starting point for a discussion.  I would be happy to talk further with interested parties and engage in discussions going forward, if that would be helpful.



Tuesday, December 16, 2025

An open position at Rice, Assistant or Associate Professor of Advanced Computational Materials

The Rice Advanced Materials Institute has an open search in the area of computational materials.   The full position posting is here.  Please spread the word!

The brief description:  

The Rice Advanced Materials Institute (RAMI) at Rice University, located in Houston TX, seeks applications for a tenure-track Assistant and/or Associate Professor position in the area of advanced computational and/or applied artificial intelligence (AI)/machine learning (ML)-informed materials research with an anticipated start date between July 1, 2026 and January 1, 2027. RAMI, working in conjunction with the Schools of Engineering and Computing and Natural Sciences and the Departments therein, seeks applicants from diverse backgrounds for potential tenure-track faculty positions in Departments to be determined by the best natural fit and input of the candidate (e.g., potential departments could include, but not be limited to, Chemical and Biomolecular Engineering, Chemistry, Electrical and Computer Engineering, Materials Science and NanoEngineering, Mechanical Engineering, or Physics and Astronomy). More experienced candidates nearing the transition to or having already recently transitioned to the Associate Professor level and conducting transformative research projects will be considered. 

 

We seek outstanding candidates with research interests in theoretical, computational, modeling, and/or simulation approaches to materials science (and related fields) and the application of advanced AI and ML approaches to accelerate, improve, or revolutionize the same. Research spanning all aspects of materials including soft/hard matter, inorganic/organic materials, etc. is welcomed, as long as that research maps significantly to one or more of the RAMI core research areas, including: 

 

  • Next-generation Electronics/Photonics – Developing materials to enable a new paradigm in microelectronics ranging from memory/logic to communications to sensors and beyond, with a focus on low-power/voltage function.

  • Energy Materials (Systems) – Developing materials innovations to transform energy storage and conversion/harvesting.

  • Materials for the Environment – Developing materials innovations to assure responsible use of natural resources and long-term stewardship of our air, soil, and water resources.

This effort is being initiated through RAMI, a campus-wide institute with the goal of expanding Rice’s already strong efforts in materials research across many departments in science and engineering. RAMI includes >70 current faculty members working on a wide array of research that overlaps with RAMI’s core research focus areas. Rice considers efforts to bolster these areas as a focal point of materials research in the coming years.

 

The selected candidates will be expected to teach and develop undergraduate and graduate courses within their expertise and home department; perform high-quality research in their specialized area and present findings from their research in peer-reviewed publications and conferences; establish a strong research program supported by extramural funding; be involved in service to the university and broader scientific community; and collaborate with faculty in diverse disciplines. Successful candidates will have a strong commitment to teaching, advising, and mentoring undergraduate and graduate students from diverse backgrounds. 


Applications will be reviewed in a rolling fashion but should be received by 11:59 PM (eastern time) on January 4, 2026. Applications received after this deadline will be considered as they arrive and until the position is filled.