Monday, April 15, 2024

The future of the semiconductor industry, + The Mechanical Universe

Three items of interest:

  • This article is a nice review of present semiconductor memory technology.  The electron micrographs in Fig. 1 and the scaling history in Fig. 3 are impressive.
  • This article in IEEE Spectrum is a very interesting look at how some people think we will get to chips for AI applications that contain a trillion (\(10^{12}\)) transistors.  For perspective, the processor in the laptop I used to write this has about 40 billion transistors.  (The article is nice, though the first figure commits the terrible sin of having no y-axis numbers or label; clearly it's supposed to represent exponential growth as a function of time in several different parameters.)
  • Caltech announced the passing of David Goodstein, renowned author of States of Matter and several books about the energy transition.  I'd written about my encounter with him, and I wanted to take this opportunity to pass along a working link to the YouTube playlist for The Mechanical Universe.  While the animation can look a little dated, it's worth noting that when this was made in the 1980s, the CGI was cutting-edge stuff that was presented at SIGGRAPH.

Friday, April 12, 2024

Electronic structure and a couple of fun links

Real life has been very busy recently.  Posting will hopefully pick up soon.  

One brief item.  Earlier this week, Rice hosted Gabi Kotliar for a distinguished lecture, and he gave a very nice, pedagogical talk about different approaches to electronic structure calculations.  When we teach undergraduate chemistry on the one hand and solid state physics on the other, we largely neglect electron-electron interactions (except for very particular issues, like Hund's Rules).  Trying to solve the many-electron problem fully is extremely difficult.  Often, approximating by solving the single-electron problem (e.g. finding the allowed single-electron states for a spatially periodic potential as in a crystal) and then "filling up"* those states gives decent results.   As we see in introductory courses, one can try different types of single-electron states.  We can start with atomic-like orbitals localized to each site, and end up doing tight binding / LCAO / Hückel (when applied to molecules).  Alternately, we can do the nearly-free electron approach and think about Bloch waves.  Density functional theory, discussed here, is more sophisticated but can struggle with situations when electron-electron interactions are strong.
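As a concrete illustration of the tight-binding approach mentioned above, here is a minimal sketch (my own toy example, not from the talk): numerically diagonalizing the single-electron Hamiltonian for a 1D chain with nearest-neighbor hopping \(t\) and periodic boundary conditions reproduces the textbook dispersion \(E(k) = -2t\cos(ka)\).

```python
import numpy as np

# Toy model: a 1D tight-binding chain of N sites with hopping t and
# periodic boundary conditions.  Diagonalizing the single-electron
# Hamiltonian reproduces the analytic band E(k) = -2t cos(ka).
N, t = 100, 1.0
H = np.zeros((N, N))
for i in range(N):
    H[i, (i + 1) % N] = H[(i + 1) % N, i] = -t   # nearest-neighbor hopping

evals = np.sort(np.linalg.eigvalsh(H))

# Analytic dispersion at the allowed k values, k = 2*pi*n/(N*a), with a = 1
k = 2 * np.pi * np.arange(N) / N
analytic = np.sort(-2 * t * np.cos(k))

print(np.allclose(evals, analytic))   # True: same band, bandwidth 4t
```

The "filling up" step then amounts to occupying the lowest of these states; real electronic structure codes are (much) more elaborate versions of this same diagonalize-and-fill procedure.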

One of Prof. Kotliar's big contributions is something called dynamical mean field theory, an approach to strongly interacting problems.  In a "mean field" theory, the idea is to reduce a many-particle interacting problem to an effective single-particle problem, where that single particle feels an interaction based on the averaged response of the other particles.  Arguably the most famous example is in models of magnetism.  We know how to write the energy of a spin \(\mathbf{s}_{i}\) in terms of its interactions \(J\) with other spins \(\mathbf{s}_{j}\) as \(\sum_{j} J \mathbf{s}_{i}\cdot \mathbf{s}_{j}\).  If there are \(z\) such neighbors that interact with spin \(i\), then we can try instead writing that energy as \(zJ \mathbf{s}_{i} \cdot \langle \mathbf{s}\rangle\), where the angle brackets signify the average spin (the same for every site, by translational symmetry).  From there, we can get a self-consistent equation for \(\langle \mathbf{s}\rangle\).  
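For the simplest version of this, take Ising spins \(s_{i} = \pm 1\) (an assumption made here for concreteness; the vector-spin case works the same way).  The self-consistent equation becomes \(m = \tanh(zJm/k_{\mathrm{B}}T)\), which can be solved by fixed-point iteration:

```python
import numpy as np

# Mean-field self-consistency for Ising spins s_i = +/-1 (chosen for
# simplicity): m = tanh(z*J*m / (kB*T)), solved by fixed-point iteration.
def mean_field_m(T, z=4, J=1.0, kB=1.0, m0=0.5, tol=1e-12):
    m = m0
    for _ in range(10000):
        m_new = np.tanh(z * J * m / (kB * T))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Below Tc = z*J/kB (= 4 here), a nonzero magnetization solves the
# equation; above Tc, the only solution is m = 0.
print(mean_field_m(T=2.0))   # ordered: m close to 0.96
print(mean_field_m(T=5.0))   # disordered: m iterates to 0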

Dynamical mean field theory is rather similar in spirit; there are non-perturbative ways to solve some strong-interaction "quantum impurity" problems.  DMFT is like a way of approximating a whole lattice of strongly interacting sites as a self-consistent quantum impurity problem for one site.  The solutions are not for wave functions but for the spectral function.  We still can't solve every strongly interacting problem, but Prof. Kotliar makes a good case that we have made real progress in how to think about many systems, and when the atomic details matter.

*Here, "filling up" means writing the many-electron wave function as a totally antisymmetric linear combination of single-electron states, including the spin states.

PS - two fun links:

Friday, March 29, 2024

Thoughts on undergrad solid-state content

Figuring out what to include in an undergraduate introduction to solid-state physics course is always a challenge.   Books like the present incarnation of Kittel are overstuffed with more content than can readily fit in a one-semester course, and because that book has grown organically from edition to edition, it's organizationally not the most pedagogical.  I'm a big fan of and have been teaching from my friend Steve Simon's Oxford Solid State Basics, which is great but a bit short for a (US) one-semester class.  Prof. Simon is interested in collecting opinions on what other topics would be good to include in a hypothetical second edition or second volume, and we thought that crowdsourcing it to this blog's readership could be fun.  As food for thought, some possibilities that occurred to me were:

  • A slightly longer discussion of field-effect transistors, since they're the basis for so much modern technology
  • A chapter or two on materials of reduced dimensionality (2D electron gas, 1D quantum wires, quantum point contacts, quantum dots; graphene and other 2D materials)
  • A discussion of fermiology (Shubnikov-de Haas, de Haas-van Alphen) - this is in Kittel, but it's difficult to explain in an accessible way
  • An introduction to the quantum Hall effect
  • Some mention of topology (anomalous velocity?  Berry connection?)
  • An intro to superconductivity (though without second quantization and the gap equation, this ends up being phenomenology)
  • Some discussion of Ginzburg-Landau treatment of phase transitions (though I tend to think of that as a topic for a statistical/thermal physics course)
  • An intro to Fermi liquid theory
  • Some additional discussion of electronic structure methods beyond the tight binding and nearly-free electron approaches in the present book (Wannier functions, an intro to density functional theory)
What do people think about this?

Sunday, March 24, 2024

Items of interest

The time since the APS meeting has been very busy, hence the lack of posting.  A few items of interest:

  • The present issue of Nature Physics has several articles about physics education that I really want to read. 
  • This past week we hosted N. Peter Armitage for a really fun colloquium "On Ising's Model of Magnetism" (a title that he acknowledged borrowing from Peierls).  In addition to some excellent science about spin chains, the talk included a lot of history of science about Ising that I hadn't known.  An interesting yet trivial tidbit: when he was in Germany and later Luxembourg, the pronunciation was "eeesing", while after emigrating to the US, he changed it to "eye-sing", so however you've been saying it to yourself, you're not wrong.  The fact that the Isings survived the war in Europe is amazing, given that he was a Jew in an occupied country.  Someone should write a biography....
  • When I participated in a DOD-related program 13 years ago, I had the privilege to meet General Al Gray, former commandant of the US Marine Corps.  He just passed away this week, and people had collected Grayisms (pdf), his takes on leadership and management.  I'm generally not a big fan of leadership guides and advice books, but this is good stuff, told concisely.
  • It took a while, but a Scientific American article that I wrote is now out in the April issue.
  • Integrating nitrogen-vacancy centers for magnetic field sensing directly into the diamond anvils seems like a great way to make progress on characterizing possible superconductivity in hydrides at high pressures.
  • Congratulations to Peter Woit on 20 (!!) years of blogging at Not Even Wrong.  

Thursday, March 07, 2024

APS March Meeting 2024, Day 4 and wrap-up

Because of the timing of my flight back to Houston, I really only went to one session today, in which my student spoke as did some collaborators.  It was a pretty interesting collection of contributed talks.  

  • The work that's been done on spin transport in multiferroic insulators is particularly interesting to me.  A relevant preprint is this one, in which electric fields are used to reorient \(\mathbf{P}\) in BiFeO3; this correspondingly switches the magnetization in this system (which is described by a complicated spin cycloid order) and therefore modulates the transmission of spin currents (as seen in ferromagnetic resonance).  
  • Similarly, adding a bit of La to BiFeO3 to favor single ferroelectric domain formation was a neat complement to this.
  • There were also multiple talks showing the utility of the spin Hall magnetoresistance as a way to characterize spin transport between magnetic insulators and strong spin-orbit coupled metals.
Some wrap-up thoughts:
  • This meeting venue and environment was superior in essentially every way relative to last year's mess in Las Vegas.  Nice facilities, broadly good rooms, room sizes, projectors, and climate control.  Lots of hotels.  Lots of restaurants that are not absurdly expensive.  I'd be very happy to have the meeting in Minneapolis again at some point.  There was even a puppy-visiting booth at the exhibit hall on Tuesday and Thursday.
  • Speaking of the exhibit hall, I think this is the first time I've been at a meeting where a vendor was actually running a dilution refrigerator on the premises.  
  • Only one room that I was in had what I would describe as a bad projector (poor color balance, loud fan, not really able to be focused crisply).  I also did not see any session chair this year blow it by letting speakers run past their allotted times.
  • We really lucked out on the weather.  
  • Does anyone know what happens if someone ignores the "Warning: Do Not Drive Over Plate" label on the 30 cm by 40 cm yellow floor plate in the main lobby?  Like, does it trigger a self-destruct mechanism, or the apocalypse or something?
  • Next year's combined March/April meeting in Anaheim should be interesting - hopefully the venue is up to the task, and likewise I hope there are good, close housing and food options.

Wednesday, March 06, 2024

APS March Meeting 2024, Day 3

My highlights today are a bit thin, because I was fortunate enough to spend time catching up with collaborators and old friends, but here goes:
  • Pedram Roushan from Google gave an interesting talk about noisy intermediate-scale quantum experiments for simulation.  He showed some impressive data looking at the propagation of (simulated) magnons in the 1D Heisenberg spin chain.
  • In the same session, Lieven Vandersypen from Delft presented their recent results using gate-defined Ge/SiGe quantum dot arrays to simulate a small-scale version of the Hubbard model.  Looking at exciton formation and propagation in a Hubbard ladder while being able to tune many parameters, the data are pretty neat, though I have to say it seems like scaling this up to large arrays will be extremely challenging in terms of layout and tuning.  He also showed some in-preparation work on spin propagation in similar arrays - neat.
  • In a completely different session, Jacques Prost, recipient of this year's Onsager Prize, gave an interesting talk about broken symmetries and dynamics of living tissue.  This included cell motion driven by nematicity (living tissue as liquid crystal....) and how in a cylindrical environment this can lead to rotation of growing tissue.  These sorts of interactions in "active matter" can be related to how tissue grows and differentiates in living systems.
  • My colleague Gustavo Scuseria is this year's recipient of the Aneesur Rahman Prize, and he gave a good explanation of his group's recent work on using dualities to map strongly correlated models onto more tractable (polynomial-growth rather than exponential growth in problem size) equivalent weakly correlated models.
  • In a session on quantum spin liquids, Tyrel McQueen of Johns Hopkins spoke about two examples of his group's recent work.  Chemical substitution can help tune interactions in a Kitaev spin liquid candidate, and they've also examined the controlled interplay of charge density waves and magnetic order.  The talk did a great job of conveying a taste of the breadth and depth of the space of quantum magnets.
  • Lastly, Chih-Yuan Lu, recipient of this year's George E. Pake Prize, gave a very nice historical overview of the development of semiconductor electronics from the integrated circuit to the present frontiers (of gate-all-around transistors and 3D integrated NAND memory).
Two other notes not directly germane to the APS meeting:
  • The AAAS appropriations tracker shows how outlays for the coming year are shaping up for NSF and the other agencies.  </begin rant>Can someone explain to me why the conference NSF budget allocation for research ends up -8.5%, when the House pushed +0.3% and the Senate pushed -2.9%? Also, cutting the STEM education budget (which includes GRFP) by 28% seems terrible.  Griping about US STEM competitiveness and the need for developing the next-generation technical workforce, while simultaneously cutting research training resources:  Congress in action.   Once again, they feel good about supporting the authorization of doubling the NSF budget over five years, but don't actually want to appropriate the funds to do it.  </end rant>
  • Purely by random chance (ahem), I want to point to this column.

Tuesday, March 05, 2024

APS March Meeting 2024, Day 2

A decent part of today was spent in conversation with friends and colleagues, but here are some high points of scientific talks:

More tomorrow....

Monday, March 04, 2024

APS March Meeting 2024, Day 1

There is no question that the meeting venue in Minneapolis is superior in multiple ways to last year's meeting in Las Vegas.  The convention center doesn't feel scarily confining, and it also doesn't smell like a combination of cigarettes and desperation.

Here are a few highlights from the day:

  • There was an interesting session about "polar materials", systems that have the same kind of broken inversion symmetry within a unit cell as ferroelectrics; this includes "polar metals" which host mobile charge carriers.  One polar material family involving multiferroic insulators was presented by Daniel Flavián, in which dielectric (capacitance) measurements can show magnetic quantum critical phenomena, as in here and here.  Both sets of materials examined, Rb2Cu2Mo3O12 and Cs2Cu2Mo3O12, show remarkable dielectric effects due to fluctuating electric dipoles, connected to quantum critical points at B-field driven transitions between magnetic ordered states.
  • Natalia Drichko from Johns Hopkins showed Raman spectroscopy data on an organic Mott insulator, in which melting charge order is connected to spin fluctuations.
  • Pavel Volkov from U Conn discussed doped strontium titanate (STO), an example of an incipient polar metal, and looking at how polar fluctuations might be connected with the mechanism behind the unusual superconductivity of STO. 
  • The last talk of that session that I saw was Pablo Jarillo-Herrero giving a characteristically clear presentation about sliding ferroelectricity.  Taking a material like hBN and trying to stack a bilayer with perfect A-A alignment is not energetically favored - it's lower in energy if the two layers shift relative to each other by a third of a lattice parameter, resulting in an out-of-plane electric dipole moment, pointing either up or down depending on the direction of the shift.  Applying a sufficiently large electric field perpendicular to the plane can switch the system - this works on TMDs as well.  Put a moiré bilayer in the mix, and you can get some neat charge ratcheting effects.
  • The session on transport in non-Fermi liquids was fun and informative.  I thought the discussion of possible intrinsic nonlinear transport in strange metals was intriguing.
  • I also saw a couple of interesting invited talks (here and here) about experiments that try to use electronic transport in adjacent layers to probe nontrivial magnetic properties of adjacent spin ices.  Very cool.
More tomorrow....

Sunday, March 03, 2024

APS March Meeting 2024 - coming soon

This week I'm going to be at the APS March Meeting in Minneapolis.  As I've done in past years, I will try to write up some highlights of talks that I am able to see, though it may be hit-or-miss.  If readers have suggestions for sessions or talks that they think will be particularly interesting, please put them in the comments.

Sunday, February 25, 2024

2024 version: Advice on choosing a graduate school

It's been four years since I posted the previous version of this, so it feels like the time is right for an update.

This is written on the assumption that you have already decided, after careful consideration, that you want to get an advanced degree (in physics, though much of this applies to any other science or engineering discipline).  This might mean that you are thinking about going into academia, or it might mean that you realize such a degree will help prepare you for a higher paying technical job outside academia.  Either way,  I'm not trying to argue the merits of a graduate degree - let's take it as given that this is what you want to do.

  • It's ok at the applicant stage not to know exactly what research area you want to be your focus.  While some prospective grad students are completely sure of their interests, that's more the exception than the rule.  I do think it's good to have narrowed things down a bit, though.  If a school asks for your area of interest from among some palette of choices, try to pick one (rather than going with "undecided").  We all know that this represents a best estimate, not a rigid commitment.
  • If you get the opportunity to visit a school, you should go.  A visit gives you a chance to see a place, get a subconscious sense of the environment (a "gut" reaction), and most importantly, an opportunity to talk to current graduate students.  Always talk to current graduate students if you get the chance - they're the ones who really know the score.  A professor should always be able to make their work sound interesting, but grad students can tell you what a place is really like.
  • International students may have a very challenging time being able to visit schools in the US, between the expense (many schools can help defray costs a little but cannot afford to pay for airfare for trans-oceanic travel) and visa challenges.  Trying to arrange zoom discussions with people at the school is a possibility, but that can also be challenging.  I understand that this constraint tends to push international students toward making decisions based heavily on reputation rather than up-close information.  
  • Picking an advisor and thesis area are major decisions, but it's important to realize that those decisions do not define you for the whole rest of your career.  I would guess (and if someone had real numbers on this, please post a comment) that the very large majority of science and engineering PhDs end up spending most of their careers working on topics and problems distinct from their theses.  Your eventual employer is most likely going to be paying for your ability to think critically, to structure big problems into manageable smaller ones, and to do research, rather than the particular detailed technical knowledge from your doctoral thesis.  A personal anecdote:  I did my graduate work on the ultralow temperature properties of amorphous insulators.  I no longer work at ultralow temperatures, and I don't study glasses either; nonetheless, I learned a huge amount in grad school about the process of research that I apply all the time.
  • Always go someplace where there is more than one faculty member with whom you might want to work.  Even if you are 100% certain that you want to work with Prof. Smith, and that the feeling is mutual, you never know what could happen, in terms of money, circumstances, etc.  Moreover, in grad school you will learn a lot from your fellow students and other faculty.  An institution with many interesting things happening will be a more stimulating intellectual environment, and that's not a small issue.
  • You should not go to grad school because you're not sure what else to do with yourself.  You should not go into research if you will only be satisfied by a Nobel Prize.  In both of those cases, you are likely to be unhappy during grad school.  
  • I know grad student stipends are low, believe me.  However, it's a bad idea to make a grad school decision based purely on a financial difference of a few hundred or a thousand dollars a year.  Different places have vastly different costs of living - look into this.  Stanford's stipends are profoundly affected by the cost of housing near Palo Alto and are not an expression of generosity.  Pick a place for the right reasons.
  • Likewise, while everyone wants a pleasant environment, picking a grad school largely based on the weather is silly.  
  • Pursue external fellowships if given the opportunity.  It's always nice to have your own money and not be tied strongly to the funding constraints of the faculty, if possible.  (It's been brought to my attention that at some public institutions the kind of health insurance you get can be complicated by such fellowships.  In general, I still think fellowships are very good if you can get them.)
  • Be mindful of how departments and programs are run.  Is the program well organized?  What is a reasonable timetable for progress?  How are advisors selected, and when does that happen?  Who sets the stipends?  What are TA duties and expectations like?  Are there qualifying exams?  Where have graduates of that department gone after the degree?  Are external internships possible/unusual/routine? Know what you're getting into!  Very often, information like this is available now in downloadable graduate program handbooks linked from program webpages.   
  • When talking with a potential advisor, it's good to find out where their previous students have gone and how long a degree typically takes in their group.  What is their work style and expectations?   How is the group structured, in terms of balancing between team work to accomplish goals vs. students having individual projects over which they can have some ownership? 
  • Some advice on what faculty look for in grad students:  Be organized and on-time with things.  Be someone who completes projects (as opposed to getting most of the way there and wanting to move on).  Doctoral research should be a collaboration.  If your advisor suggests trying something and it doesn't work (shocking how that happens sometimes), rather than just coming to group meeting and saying "It didn't work", it's much better all around to be able to say "It didn't work, but I think we should try this instead", or "It didn't work, but I think I might know why", even if you're not sure. 
  • It's fine to try to communicate with professors at all stages of the process.  We'd much rather have you ask questions than the alternative.  If you don't get a quick response to an email, it's almost certainly due to busy-ness, and not a deeply meaningful decision by the faculty member.  For a sense of perspective: I get 50+ emails per day of various kinds not counting all the obvious spam that gets filtered.  

There is no question that far more information is now available to would-be graduate students than at any time in the past.  Use it.  Look at departmental web pages, look at individual faculty member web pages.  Make an informed decision.  Good luck!

Tuesday, February 13, 2024

Continuing Studies course, take 2

A year and a half ago, I mentioned that I was going to teach a course through Rice's Glasscock School of Continuing Studies, trying to give a general audience introduction to some central ideas in condensed matter physics.  Starting in mid-March, I'm doing this again.  Here is a link to the course registration for this synchronous online class.  This course is also intended as a potential continuing education/professional development offering for high school teachers, community college instructors, and other educators, and thanks to the generous support of the NSF, the Glasscock School is able to offer a limited number of full scholarships for educators - apply here by February 27 for consideration.   

(I am aware that the cost of the course is not trivial; at some point in the future I will make the course materials available broadly, and I will be sure to call attention to that at the time.)

Wednesday, February 07, 2024

A couple of links + a thought experiment about spin

A couple of interesting things to read:

  • As someone interested in lost ancient literature and also science, I really liked this news article from Nature about progress in reading scrolls excavated from Herculaneum.  The area around the Bay of Naples was quite the spot for posh Roman families, and when Vesuvius erupted in 79 CE, whole villas, complete with their libraries of books on papyrus scrolls, were buried and flash-cooked under pyroclastic flows.  Those scrolls now look like lump charcoal, but with modern x-ray techniques (CT scanning using the beam from a synchrotron) plus machine learning, it is now possible to virtually unroll the scrolls and decipher the writing, because the ink has enough x-ray contrast with the carbonized papyrus to be detected.  There is reason to believe that there are more scrolls out there still buried, and there are lots of other books and scrolls out there that are too delicate or damaged to be handled and read the normal way.  It's great to see this approach starting to succeed.
  • I've written about metalenses before - using nanostructured surfaces for precise control of optical wavefronts to make ultrathin optical elements with special properties.  This extended news item from Harvard about this paper is a nice piece of writing.  With techniques now developed to make dielectric metalenses over considerably larger areas (100 mm silica wafers), these funky lenses can now start to be applied to astronomy.  Nifty.
And now the gedanken experiment that I've been noodling on for a bit.  I know what the correct answer must be, but I think this has done a good job of reminding me that what constitutes a measurement is a very subtle issue in quantum mechanics.

Suppose I have a single electron roughly localized at the origin.  It has spin-1/2, meaning that, if there are no other constraints, if I choose to make a measurement of the electron spin along some particular axis, I will find that with 50/50 probability the component of the angular momentum of the electron is \(\pm \hbar/2\) along that axis.  Suppose that I pick a \(z\) axis and do the measurement, finding that the electron is "spin-up" along \(z\).  Because the electron has a magnetic dipole moment, that means that the magnetic field at some distance \(r\) away from the origin should be the field from a magnetic dipole along \(z\).  

Now suppose I make another measurement of the spin, this time along the \(x\) axis.  I have a 50/50 chance of finding the electron spin up/down along \(x\).  After that measurement, the magnetic field at the same location \(r\) away from the origin should be the field from a magnetic dipole along \(x\).  It makes physical sense that the magnetic field at location \(r\) can only "know" that a measurement was done at the origin on a timescale \(r/c\).  (Note:  A truly correct treatment of this situation would seem to require QED, because the spin is entangled with the electromagnetic field via its magnetic moment; likewise one would really need to discuss in detail what it means to measure the spin state at the origin and what it means to measure the magnetic field locally.  Proper descriptions of detectors and measurements are really necessary.)
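The bare (non-QED) quantum mechanics of the sequential measurements is easy to check numerically.  This sketch (my own illustration, with \(\hbar = 1\)) confirms the 50/50 statistics for the \(x\) measurement after a definite spin-up outcome along \(z\):

```python
import numpy as np

# Spin-1/2 operators in the z basis, with hbar = 1 for convenience.
hbar = 1.0
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)

# State after the first measurement: spin-up along z.
up_z = np.array([1, 0], dtype=complex)

# Eigenvectors of S_x (columns of vecs): |-x> and |+x>.
_, vecs = np.linalg.eigh(Sx)

# Born-rule probabilities for the second measurement, along x.
p_each = [abs(vecs[:, i].conj() @ up_z) ** 2 for i in range(2)]
print(p_each)   # [0.5, 0.5] -- the x measurement is 50/50
```

Of course, this two-level calculation is exactly the "seemingly simple statement we make when we teach this stuff"; the subtlety in the post is everything this sketch leaves out (the electromagnetic field, the detectors, and retardation).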

To highlight how subtle the situation is, suppose the spin at the origin is initially half of an EPR pair, so that it's in a spin singlet with a second spin near Alpha Centauri, so that the total spin of the two is zero.  Now a measurement of \(s_{z}\) at the origin determines the state of \(s_{z}\) at Alpha Centauri, and the magnetic field near that dipole at Alpha Centauri should be consistent with that.  Thinking about all of the subtleties here has been a good exercise for me in remembering how the seemingly simple statements we make when we teach this stuff can be implicitly very complicated.


Saturday, February 03, 2024

Large magnetic fields as a scientific tool

When I was at Berkeley at the beginning of the week to give a seminar, I was fortunate enough to overlap with their departmental physics colloquium by Greg Boebinger, an accomplished scientist who is also an extremely engaging and funny speaker.  Since 2004 he has been the director of the National High Magnetic Field Lab in Tallahassee, Florida, the premier user facility for access to large magnetic fields for scientific research.  He gave a great talk that discussed both the challenges in creating very large magnetic fields and a sampling of the cool science that can be done using these capabilities.

Leaving aside spin for a moment, magnetic fields* in some reference frame are generated by currents of moving charges and changing electric fields, as in Ampère's law, \(\nabla \times \mathbf{B} = \mu_{0}\mathbf{J} + \epsilon_{0}\mu_{0}\partial_{t}\mathbf{E}\), where \(\mathbf{J}\) is the current density.  Because materials have collective responses to magnetic fields, generating within themselves some magnetization (magnetic dipole moment per volume \(\mathbf{M}\)), we can think of the magnetic field as a thermodynamic variable, like pressure.  Just as all kinds of interesting physics can be found by using pressure to tune materials between competing phases (because pressure tunes interatomic spacing, and thus things like the ability of electrons to move from atom to atom, and hence the magnitude of magnetic exchange), a magnetic field can tune materials across phase transitions.  

It's worth remembering some physically relevant scales.  The earth's magnetic field at the surface is around 30-50 microTesla.  The magnetic field at the surface of a rare earth magnet is around 1 Tesla.  The field in a typical MRI machine used for medical imaging is 1.5 or 3 T.  The energy levels for the spin of an electron in a magnetic field are set by the Zeeman effect and shift by an amount around \(\mu_{\mathrm{B}}B\), where \(\mu_{\mathrm{B}}\) is the Bohr magneton, \(9.27 \times 10^{-24}\) J/T.  A 10 T magnetic field, about what you can typically get in an ordinary lab, leads to a Zeeman energy comparable to the thermal energy scale at about 6.7 K, or compared to an electron moving through a voltage of 0.6 mV.   In other words, magnetic fields are weak in that it generally takes a lot of current to generate a big field, and the associated energies are small compared to room temperature (\(k_{\mathrm{B}}T\) at 300 K is equivalent to 26 mV) and the eV scales relevant to chemistry.  Still, consequences can be quite profound, and even weak fields can be very useful with the right techniques. (The magnetic field at the surface of a neutron star can be \(10^{11}\) T, a staggering number in terms of energy density.)
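The scales quoted above are easy to verify with a back-of-the-envelope calculation (using standard CODATA values for the constants):

```python
# Back-of-the-envelope check of the energy scales quoted above.
mu_B = 9.274e-24   # Bohr magneton, J/T
k_B = 1.381e-23    # Boltzmann constant, J/K
e = 1.602e-19      # elementary charge, C

B = 10.0           # a typical lab field, in tesla
E_zeeman = mu_B * B

print(E_zeeman / k_B)        # ~6.7 K: temperature equivalent
print(E_zeeman / e * 1e3)    # ~0.58 mV: voltage equivalent
print(k_B * 300 / e * 1e3)   # ~26 mV: room temperature, for comparison
```

Even at 10 T, the Zeeman energy is tiny compared to room temperature and to eV-scale chemistry, which is the quantitative sense in which magnetic fields are "weak."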

Generating large magnetic fields is a persistent technological challenge.  Superconductors can be great for driving large currents without huge dissipation, but they have their own issues of critical currents and critical fields, and the mechanical forces on the conductors can be very large (see here for a recent review).  The largest steady-state magnetic field that has been achieved with a (high-Tc) superconducting coil combined with a resistive magnet is around 45.5 T (see here as well).  At the Los Alamos outpost of the Magnet Lab, they've achieved non-destructive pulsed fields as large as 101 T (see this video).  A huge limiting factor is the challenge of making joints between superconducting wires, so that the joint itself remains superconducting at the very large currents and fields needed. 

The science that can be done with large fields extends well beyond condensed matter physics.  One example from the talk that I liked:  Remarkable resolution is possible in ion cyclotron resonance mass spectrometry, so that from a single drop of oil, it is possible to identify the contributions of the many thousands of hydrocarbon molecules present and "fingerprint" where the oil came from.  

Fun stuff, and a great example of an investment in technology that would very likely never have been made by private industry alone.


* I know that \(\mathbf{B}\) is technically the magnetic induction or magnetic flux density in SI units, but colloquially everyone calls it the magnetic field, so I'll do the same here.

Friday, January 26, 2024

Generative AI and scientific images/diagrams

Generative AI for image generation is a controversial topic for many reasons.  Still, as someone who doesn't have a staff of graphic artists on hand to help make scientific illustrations, it has certainly been tempting to see whether it might be a useful tool.  My brief experiments used Bing's integrated engine (which I believe is DALL-E 3), since Rice has a license.  The short summary:  trying to make scientific illustrations this way shows how far this technology is from being useful for that application.  While natively integrated with ChatGPT, it still cannot parse prompts well, indicating that it doesn't really "know" the meaning of a lot of basic terms and how those relate to its image training data.

Let me give you an example.  When I asked it "Make me an image of a benzene molecule showing the chemical bonds between the carbon atoms", this is the response:  

When I tried to refine this by typing "Make an image of a benzene molecule.", it responded by showing me image search results with sources, which were all correctly benzene, but not an original image.  When I responded "Please create a new image (not from an existing source) of a benzene molecule.", it responded with "Ok, I will create a new image from scratch showing a benzene molecule." and showed me this:

I then tried more specificity in the prompt:  "Create (from scratch) an image of a benzene molecule, which consists of six carbon atoms (black spheres) linked in a ring, with one hydrogen atom (a smaller red sphere) attached to each carbon atom."  The result was this:
Closer, but I'm still surprised at how poorly any of these resemble the stock images of benzene that it correctly found through image searching on the internet.  That last one is particularly close, so I tried "The fourth image is close, but it should have only one hydrogen attached to each carbon, not three."  In classic chatGPT form, the response was "I’m sorry, I made a mistake. I’ll try to create a new image of a benzene molecule with only one hydrogen attached to each carbon. Please wait a moment."  It then produced this:
Clearly it just cannot understand or follow the instructions yet. 

Similar things happen if you ask it to make an image of a lattice.  If I ask "Make me an image from scratch of a simple cubic lattice.", the verbal and image responses are quite encouraging.  "I’ll try to create that. Here is a new image of a simple cubic lattice that I generated from scratch. A simple cubic lattice is a type of crystal structure where the atoms are arranged in a regular pattern of cubes. Each atom is located at the center of a cube and has six nearest neighbors. A simple cubic lattice is one of the simplest and most symmetric types of lattices."

This is quite close, though all of these have problems (non-orthogonal angles, uneven spacings).  Things go off the rails badly, though, if you want a face-centered cubic lattice.  It can say the definition of an FCC lattice reasonably well:  "I’ll try to create that. Here is a new image of a face-centered cubic lattice that I generated from scratch. A face-centered cubic lattice is a type of crystal structure where the atoms are arranged in a regular pattern of cubes. Each atom is located at the center of a cube and at the center of each face of the cube. A face-centered cubic lattice has 12 nearest neighbors for each atom and is one of the most common and dense types of lattices."  However, the images are, ummm, artistic.
These are quite different from the simple image search of the internet for "face-centered cubic lattice", which does a good job if you don't mind using someone else's image.  The bottom line:  These kinds of image generation tools have enormous potential to save us time, but right now the parsing technology and training methods just aren't there yet.  If a simple image search turns up the right targets, but the generation tool can't make images that look like those search results, then training has failed somewhere. 

If anyone out there has found some flavor of these image generation capabilities that really does work well for helping to make scientific illustrations, please let me know.

Update:  Google just released their free image generation tool, and while it still has problems like those above, it's pretty impressive.  For example, here is what it made from the prompt "drawing of a benzene molecule, consisting of six black carbon atoms arranged in a ring, with each carbon atom connected to one red hydrogen atom":

That's certainly better than anything DALL-E made.  Of course, the other three images that came along with that one were all screwed up.  Still, progress.


Tuesday, January 16, 2024

Materials characterization techniques – a brief glossary

Suppose someone has synthesized or found what they think is a new material. How do people studying materials (condensed matter physicists, materials scientists, materials chemists) figure out what they have and understand its properties? That's the puzzle-solving aspect of working with materials: In general, solid matter involves an enormous number of interacting particles, and determining even something as basic as its structure and underlying excitations is not simple.

There are many, many materials characterization techniques available, each with its own peculiarities and limitations. (I think that the alphabet-soup collection of acronyms associated with these is part of condensed matter's general perception as complicated, obscure, and full of jargon, but the need for a variety of techniques is clear in practice.) For the class I'm teaching, I wrote up a brief glossary of these. Apologies for undoubtedly leaving out someone's favorite. Please let me know in the comments what I've missed or mis-stated. Wikipedia already does a creditable job explaining many of these, including with diagrams and citations to key literature. Hopefully sticking a lot of these in one place will be useful to some. -- DN

PS - the fact that there are so many different techniques that can be applied just to determine material structure and composition is a hint as to why trying to automate materials characterization in AI/ML-based materials synthesis and discovery has a long way to go.



Materials characterization techniques – a brief glossary

 

Microscopy

Optical microscopy: provides optical information about structure on scales > 1 μm

 

Electron microscopy and related

Scanning electron microscopy (SEM): electron beam (1-40 keV) rastered across sample; secondary electrons knocked out of the sample are detected as a function of beam position to create an image.  Sensitive to surface conditions; larger signals from high-Z materials.  Beam spot size typically nm scale; lateral resolution down to 1-2 nm possible.  Penetration depth into solid of 10s of nm, more with higher electron beam energy.  Typically requires sample in vacuum (or at least detector closer to sample than electron mean free path in background gas).  Works best with conductive samples to avoid charging.

Related

Electron backscatter diffraction (EBSD):  back-scattered electrons from the beam are used to create diffraction patterns from the surface crystal structure.

Energy (X-ray) dispersive spectroscopy (EDS):  x-ray fluorescence excited by electron beam is detected; can be used for elemental compositional analysis.

Electron microprobe analysis (EMPA): carefully calibrated cousin of EDS, allows precise elemental analysis.

Cathodoluminescence (CL): optical photons collected from e-beam excited sample as a function of beam position.  Can detect excitations of material like plasmons, excitons.

 

Transmission electron microscopy (TEM) and scanning transmission electron microscopy (STEM):  sub-nm spot size electron beam (typically 100 keV and higher) passed through a thin (~100 nm thick) sample into a detector.  Can detect atomic-scale structural information.  EDS, CL can be performed as well. “Bright field” and “dark field” imaging modes possible.  Sample in vacuum.  Special sample holders available to allow measurements as a function of temperature, strain, electrical biasing, etc.

Related

Selected-area electron diffraction (SAED) – obtain electron diffraction patterns from selected portions of the sample.

Electron energy loss spectroscopy (EELS) – measure energy loss of transmitted electrons, can infer excitations (e.g. plasmons) within the material.  Energy resolution down to sub-100 meV possible.

Lorentz electron microscopy (LEM) – can infer magnetic domain patterns from deflection of transmitted electron beam

 

Electron diffraction:

Reflection high energy electron diffraction (RHEED):  diffraction using grazing incidence electrons (10-30 keV).  Extremely sensitive to surface conditions, used for in situ characterization of thin film growth in molecular beam epitaxy (MBE) and pulsed laser deposition (PLD) systems.  Requires vacuum.

 

Low energy electron diffraction (LEED): low energy (20-300 eV) electrons diffracted in reflection off surfaces.  This is the original electron diffraction, discovered by Davisson and Germer in 1927.  Requires vacuum, very surface sensitive (nm scales), vulnerable to magnetic fields. 

Related

Auger electron spectroscopy (AES):   use keV electrons to knock out core electrons; as electron drops down to fill core hole, excess energy kicks out less bound electron, whose energy is measured.  Very surface sensitive. 

Low energy electron microscopy (LEEM) and spin-polarized LEEM (SPLEEM):  Doing electron microscopy using < 100 V electrons; extremely surface sensitive, SPLEEM good for local magnetic structure.

 

Scanned probe microscopy (SPM)

Category of microscopy methods that involves moving a sharp tip in close proximity to a material surface.  Typically involves piezoelectric transducers for sample/tip relative motion and scanning.  Examples:

Atomic force microscopy (AFM):  A sharp tip (down to a few nm in radius) at the end of a cantilever or tuning fork structure is moved relative to the sample surface.  In contact mode, changes in surface topography cause deflection of the cantilever, which is typically detected optically.  In non-contact and tapping (intermittent contact) modes, the tip is oscillated at the cantilever resonance frequency; the short-range interaction between tip and sample alters the amplitude, frequency, and phase of the cantilever motion.  Feedback on tip height above the sample is used to maintain tip-sample separation and map topography.  Can be performed in ambient conditions.  If performed in vacuum, with molecule-functionalized tips, it is possible to perform atomic-resolution imaging and “see” molecular orbitals.  Versions of AFM may be performed in fluid environments as well.

Related

Lateral force microscopy (LFM):  looks at sideways forces on tip as it is scanned over the sample surface; sensitive to changes in local friction and elastic properties.

Piezoresponse force microscopy (PFM): uses a conductive tip and an applied ac voltage to map the piezoelectric response of the sample.

Conducting probe AFM:  In contact mode, allows mapping of electronic properties of the sample, though care is required for interpretation.

Scanning capacitance microscopy (SCM): Using conductive tip as effective capacitor plate, maps capacitance of sample.  Useful for mapping carrier concentration in semiconductor materials.

Magnetic force microscopy (MFM):  Uses a ferromagnetically coated tip.  The tip scans a line in close non-contact mode to get topography, then rescans the same line with the tip elevated a fixed amount so that long-range magnetic forces are mapped.  One challenge:  magnetic field from the tip can perturb magnetic domains in the sample. 

Electrostatic force microscopy (EFM): Conductive AFM tip is used and held at a particular potential relative to the sample.  As in MFM, mapping at a fixed tip-sample distance can reveal local electric field forces between tip and sample.

Kelvin probe force microscopy (KPFM): Feedback is performed, so that the conductive AFM tip potential is adjusted to null out any long-range electric field forces between tip and sample.  This can be used to map out the local contact potential or work function difference between tip and sample.   

Magnetic resonance force microscopy (MRFM): Uses radio frequency (RF) excitation and a magnetic tip to drive magnetic resonance (either electron spin resonance or nuclear magnetic resonance) of spins in the sample, detected via the cantilever motion. 

 

Near-field scanning optical microscopy (NSOM or SNOM): Using AFM-like control, a tip is brought into close proximity (nm to tens of nm) of the sample surface.  Near-field optical interactions are then mapped as a function of tip position.  Tip can be a tapered optical fiber or can contain a hole/waveguide, so that light travels through the tip to the sample surface.  Scattered light can be detected back through the tip or in the far field.  Alternately, light can be shined in via the far field and scattered into the tip or into another far-field detector.  Key idea is that the very small tip and tip-sample distance can scatter sub-diffraction-limit information into the far field.

Scanning single-electron transistor microscopy (SSETM): A tip is prepared (e.g., on a drawn optical fiber) with a single-electron transistor (SET, a device based on “Coulomb blockade”, consisting of a metal “island” with tunnel junctions to a source and a drain electrode, sometimes with an additional “gate” electrode that is capacitively coupled to the island) at the tip apex.  The tip is positioned close to the sample using AFM-like techniques to avoid crashing into the surface.  The electronic transport through the SET is measured as a function of biasing conditions and tip position.  The surface potential of the sample acts as a “gate” that modulates conduction through the island in the Coulomb blockade regime.  By modulating the tip position and biasing conditions, can be used to measure local charge density and electronic compressibility.  Typical spatial resolution 10s of nm at best, because of diameter of island and positioning precision.  Requires cryogenic temperatures to operate.

Scanning SQUID microscopy: A superconducting quantum interference device (SQUID) is fabricated on a tip (e.g., on a drawn optical fiber).  The tip is again positioned near the sample using AFM-like techniques.  The SQUID, consisting of a superconducting loop with Josephson junction weak links, is used to detect magnetic flux from the sample.  This can be used to map current distributions in operating devices.  Requires cryogenic temperatures to operate, does not work well in large magnetic fields.

Scanning Hall probe microscopy: A 2D electronic system is patterned into a Hall configuration on some kind of tip and positioned (using AFM-like methods) close to a sample of interest, to act as a magnetic field detector. 

Scanning NV center microscopy:  A nitrogen-vacancy center in a diamond crystal has optical transitions that are highly sensitive to local magnetic fields.  Incorporating NV centers into diamond films on SPM tips enables high resolution (tens of nm) measurements of local fields including direction, and the inference of current distributions.

Microwave Impedance Microscopy (MIM): A microwave resonator is made and incorporated so that a conductive AFM-like tip is part of the resonant circuit.  Scanning the tip over a device changes the Q of the resonator, allowing mapping (with 10s of nm resolution) of the microwave frequency (say hundreds of MHz to GHz) dielectric properties of the sample. 

Scanning thermal microscopy (SThM): Scanning a special temperature-sensitive probe tip over a sample to assess local thermal conduction properties or local temperature.  Several variants depending on the type of thermally sensitive probe used (e.g. thermocouple, phase change material, optical defect center with T-dependent lifetime).

 

Scanning tunneling microscopy (STM):  Tunneling current between metallic tip (sometimes Pt, W) and conductive sample used for z-positioning feedback.  Because of the exponential distance dependence of tunneling, atomic resolution is possible.  Can be performed at ambient conditions, but by far the best results are obtained in vacuum and at low temperatures. 

Related

Scanning tunneling spectroscopy (STS):  At each tip position over the sample, z feedback is turned off and tunneling I-V curves are obtained at a nominally fixed tip height (usually including dI/dV vs. V and sometimes d2I/dV2 vs. V).   The d2I/dV2 vs. V data is used to perform inelastic electron tunneling spectroscopy (IETS), and can detect local excitations like vibrations.

Quasiparticle interference (QPI):  From STS maps, spatial Fourier transforms of the (fixed-energy) maps of conductance vs. position are performed.  For itinerant quasiparticles that can move around on the sample surface, quantum interference between trajectories that bounce off scattering sites means that the Fourier-transformed maps make it possible to infer E(k) for the surface states of the sample. 

Spin-polarized STM (SPSTM): Requires a magnetic/spin-polarized tip.  Can reveal local magnetic information due to spin-dependent tunneling between tip and sample.

 

X-ray methods

X-ray diffraction (XRD):  gives crystal structure (spatial frequencies of atomic stacking) of materials via coherent scattering of x-rays; diffraction peaks obey the Bragg condition, nλ = 2d sin θ.  Powder XRD = gives bright rings as a function of angle away from forward scattering (a superposition of many spots).  Single-crystal XRD = gives discrete spots.  A Laue single-crystal diffractometer can be used to find the crystal orientation of single crystals.
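As a quick numerical illustration of the Bragg condition (the wavelength and lattice spacing below are standard textbook values for Cu Kα radiation and the Si (111) planes, chosen purely for illustration):

```python
import math

lam = 1.5406   # Cu K-alpha wavelength, angstroms
d   = 3.1356   # Si (111) interplanar spacing, angstroms

# Bragg condition: n * lam = 2 * d * sin(theta); take first order, n = 1
theta = math.degrees(math.asin(lam / (2 * d)))
print(f"2-theta = {2 * theta:.2f} deg")  # the familiar Si (111) powder peak, ~28.4 deg
```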

X-ray reflectometry (XRR):  rather like optical ellipsometry; looking at x-ray reflections at grazing incidence with respect to a multilayered surface.  Can be used to infer layer thicknesses (assuming there is x-ray contrast between different layers).

X-ray absorption spectroscopy (XAS) and x-ray absorption fine structure (XAFS):  Using tunable x-ray sources (e.g. beam from a synchrotron), it is possible to measure x-ray absorption in detail, allowing determination of chemical structure and valence in materials.  Also related: x-ray absorption near edge structure (XANES), gives more detailed chemical information.

Inelastic x-ray scattering (IXS): Angle- and energy-resolved x-ray scattering, allowing measurement of absorption edges and detection of excitations launched in the material at some known energy and momentum transfer.

Resonant inelastic x-ray scattering (RIXS):  Angle- and energy-resolved x-ray scattering where the incident wavelength is chosen to be close to an x-ray line of an element in the target.  Needs a tunable x-ray source (e.g., a synchrotron or free-electron laser (FEL)).  Since it is sensitive to electron density, it can be used with small sample volumes, and can be used to look for dispersive excitations in the material.  There is hope that RIXS can be used to detect magnetic excitations as an alternative to neutron scattering for small amounts of sample material. 

X-ray magnetic circular dichroism (XMCD): Difference of XAS between left- and right-circularly polarized x-ray beams.  Can be used to infer magnetic moments of atoms in the sample.  Can be resonantly enhanced if x-rays are chosen to be at transitions of the core electrons of the magnetic atoms in the material.  Typically needs a synchrotron to get high brightness beams.

Related

X-ray magnetic linear dichroism (XMLD): Difference of XAS between x- and y-polarized x-ray beams.  Closely related to XMCD, useful for looking at charge order and orbital order in magnetic materials.

 

Photoemission

X-ray photoemission spectroscopy (XPS) and ultraviolet photoemission spectroscopy (UPS):  Uses x-ray or UV light to eject electrons from sample and analyzes the energy of the ejected electrons.  This gives the energies of the core levels of the constituents relative to the vacuum, which encodes the valence state of the elements.  Sample in vacuum.  Surface-sensitive, very useful for determining chemical composition.  Can be combined with etching to do depth profiling of composition. 

Related

Inverse photoemission spectroscopy (IPES): Low energy (< 20 eV) electrons interact with low-lying unoccupied electronic states, sometimes generating emitted photons.  Probes states above the Fermi level of materials. 

Photoemission electron microscopy (PEEM):  Photoemitted electrons are imaged with electron optics, making it possible to map spatial nonuniformity in photoemission.

 

Angle-resolved photoemission spectroscopy (ARPES):  Uses incident x-rays or UV at precisely known energy and momenta to eject electrons from sample; hemispherical analyzer is used to measure energy and momenta of ejected electrons with high precision (energy resolution can be as sharp as 1 meV in synchrotron facilities).  Sample in ultrahigh vacuum, typically requires surfaces cleaved in vacuo. This is the primary technique for measuring electronic band structure.  Like all photoemission techniques, it works best on conductive samples to avoid charging problems.  Variations include spin-polarized ARPES (polarization of detected electrons is found) and time-resolved ARPES (optical pump followed by time-delayed x-ray/UV pulse to do the photoemission).  There is also a related technique in terms of hardware called momentum-resolved EELS, where incident electrons of known energy and momentum are bounced off the material of interest and their final energy and momenta are measured.
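The kinematics behind ARPES can be sketched in a few lines: the in-plane momentum of the photoelectron is conserved, so k∥ = √(2 m E_kin) sin θ / ℏ.  The photon energy, work function, and emission angle below are illustrative assumptions, not values from any particular experiment:

```python
import math

m_e  = 9.109e-31    # electron mass, kg
hbar = 1.0546e-34   # J*s
eV   = 1.602e-19    # J per eV

# Assumed illustrative values: 21.2 eV He-I photon, 4.5 eV work function,
# electron emitted 30 degrees off normal from a state at the Fermi level.
E_kin = (21.2 - 4.5) * eV
theta = math.radians(30.0)

# In-plane momentum conservation gives the band-structure wavevector:
k_par = math.sqrt(2 * m_e * E_kin) / hbar * math.sin(theta)  # in 1/m
print(f"k_parallel = {k_par * 1e-10:.3f} 1/angstrom")
```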

 

Neutrons

Neutron diffraction:  Neutron scattering, requires beam of monoenergetic neutrons (prepared from a reactor via moderation + diffraction off a known crystal to act as a monochromator) (or broad-band neutrons but with time-of-flight to assess neutron energy).  Sensitive to lattice structure (nuclei).  Magnetic dipole interactions with electrons allows neutron diffraction to be sensitive to magnetic order.  Variations:  cold neutrons (prepared by scattering off cryogenic material) for higher sensitivity to magnetic systems; polarized neutrons, with polarized detection for higher sensitivity to magnetic systems.  Because neutron scattering cross-sections are generally small, neutron scattering historically requires large quantities (many milligrams) of material, and single-crystal diffraction is typical (with magnetic structure measurements requiring careful alignment of sample material via XRD first).  High brightness sources are improving the situation. Another challenge:  some elements and isotopes have large absorption cross-sections for neutrons and thus cannot readily be measured via neutron scattering. A positive flipside of this is that neutron scattering is very sensitive to hydrogen and lithium, of interest in batteries and other energy-related applications.

Related

Inelastic neutron scattering (INS):  Momentum- and energy-resolved neutron scattering, with change in neutron energy and momentum recorded.  Similar in spirit to ARPES, for mapping out dispersion relations of excitations within the sample material.  This is the primary method of tracing out phonon dispersions in solids, as well as the means of identifying and quantifying magnons.  Spin-polarized INS is possible, though any neutron scattering technique that requires preparation or detection of neutrons in particular spin states is more demanding (takes longer, requires higher initial flux) because of loss of neutrons during preparation and detection. 

Neutron reflectometry:  Reflection of neutrons at grazing incidence, rather analogous to x-ray reflectometry (XRR), though also sensitive to magnetic scattering.

Small-angle neutron scattering (SANS):  Analogous to small-angle x-ray scattering (SAXS), but with neutrons.  Strongly sensitive to light elements (because they have bigger neutron scattering cross-sections) and magnetic structure.

 

Optical spectroscopy

Note that many optical techniques can be combined with microscopy to achieve spatial resolution and mapping of responses over sample surfaces.  A good review article on some of these is this.

UV/Vis/IR absorption:  A sample is illuminated in a transmission geometry with broadband light, and by measuring the transmitted spectrum, electronic transitions can be identified and band structure can be constrained.  Selection rules constrain what transitions can be seen.

Fourier transform infrared (FTIR) spectroscopy and microscopy:  Using a broadband mid- to far-IR light source and incorporating the sample into one arm of an interferometer, it is possible to measure absorption out to longer wavelengths (10 μm, e.g.).  Good for identifying “infrared active” (e.g. involving polar displacements) low energy vibrational modes in solids.

Ellipsometry and spectroscopic ellipsometry: Incident light of known wavelength, measuring reflected light from a surface as a function of angle of incidence (and wavelength of incident light, in the spectroscopic case). Allows determination of dielectric function/index of refraction, interpretation through modeling.  Great for quantifying layer thicknesses for dielectric multilayers.

THz spectroscopy:  Using THz sources and detection, can look at transmission and reflection in the mm-wave (very far IR; not quite the microwave).  Great for identifying vibrational modes, low-energy excitations as in superconductivity and some magnetic states. CW sources now exist for THz using quantum cascade lasers. Time-resolved THz (THz time-domain spectroscopy) is often used, as broadband THz pulses can be created using pulsed lasers and photoconductive antennas. 

Optical conductivity:  By measuring real and imaginary parts of the dielectric function (through light scattering, ellipsometry, absorption measurements) and using the Kramers-Kronig relations, it is possible to infer the frequency-dependent conductivity σ(ω), which can reveal a lot about dynamics of charged excitations.
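A minimal sketch of that inference, using σ(ω) = −iωε₀(ε_r(ω) − 1) with an assumed Drude-model dielectric function (the plasma frequency and scattering rate are made up for illustration):

```python
eps0  = 8.854e-12   # vacuum permittivity, F/m
wp    = 1.0e16      # plasma frequency, rad/s (illustrative)
gamma = 1.0e14      # scattering rate, rad/s (illustrative)

def eps_drude(w):
    # Drude dielectric function for a simple metal
    return 1 - wp**2 / (w**2 + 1j * gamma * w)

def sigma(w):
    # optical conductivity from the complex dielectric function
    return -1j * w * eps0 * (eps_drude(w) - 1)

# In the low-frequency limit this recovers the dc Drude result eps0*wp^2/gamma:
w = 1.0e10
print(sigma(w).real, eps0 * wp**2 / gamma)
```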

Faraday rotation:  In transmission, the polarization of light can be rotated due to magnetization of the sample.  Provides information about magnetic structure of materials.

Magneto-optic Kerr effect (MOKE):  In reflection, the polarization of light can be rotated due to magnetization of the sample. 

Raman spectroscopy:  This is inelastic light scattering, often applied to molecules or optical phonons in solids.  An incoming photon of angular frequency ω0 scatters off the system; elastic scattering is called Rayleigh scattering.  If the photon excites a vibration or another excitation of energy ℏω, the (“Stokes”) scattered photon comes out with frequency ω0 – ω.  If the system is already excited, the (“anti-Stokes”) scattered photon can grab energy from the excitation and come out with frequency ω0 + ω. Raman scattering can take place if the polarizability tensor of the system α depends on the displacements of the atoms.  In Raman spectroscopy of solid crystalline materials, with polarization control of the incoming light and known incident angle vs. the crystallographic orientation, it is possible to gain insight into the dispersion of excitations.  Detection is usually done with a grating spectrometer + CCD or CMOS camera.  Variation: magnetoRaman, where the sample is in an applied magnetic field.
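The Stokes/anti-Stokes bookkeeping is easy to sketch numerically; the numbers below (532 nm excitation, the well-known ~520 cm⁻¹ optical phonon of silicon) are standard illustrative values:

```python
laser_nm = 532.0   # excitation wavelength, nm
shift_cm = 520.0   # Raman shift in wavenumbers (1/cm), Si optical phonon

laser_wn = 1e7 / laser_nm                      # excitation in 1/cm
stokes_nm = 1e7 / (laser_wn - shift_cm)        # omega0 - omega
anti_stokes_nm = 1e7 / (laser_wn + shift_cm)   # omega0 + omega

print(f"Stokes line: {stokes_nm:.1f} nm")            # red-shifted from the laser
print(f"anti-Stokes line: {anti_stokes_nm:.1f} nm")  # blue-shifted from the laser
```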

Brillouin light scattering: Inelastic light scattering at quite low energy transfers, better suited for looking at acoustic phonons, magnons, etc. in solids.  Energy transfers are sufficiently small that detection is usually done with an interferometer.

Photoluminescence (PL): Optical spectroscopy in which incident light electronically excites the sample, and the sample then emits photons of energies characteristic of the electronic excitations. This is a standard way to characterize excitons and related excitations in semiconductors. Variations include time-resolved PL (to look at dynamics of excitations and their lifetimes) using pulsed excitation and timed detection; and two-photon PL (TPPL), in which high intensity lower energy excitation is used to nonlinearly excite the sample. (Nonlinear optical processes depend critically on symmetries of the underlying material.) When applied to molecular systems (or semiconductor nanocrystals) in the context of chemistry, PL is often referred to as fluorescence spectroscopy.

 

Electronic transport

I-V characterization: Measuring the current as a function of voltage (or voltage as a function of current).  Depending on the material involved, considerable information may be inferred from such data.

Magnetoresistance/magnetoconductance:  Measuring electrical resistance or conductance as a function of applied magnetic field and temperature.  Conductance measurements = source a voltage, measure a current.  Resistance measurements = source a current, measure a voltage.  Best practice, if possible, is to perform a 4-terminal (or more) measurement, with current sourced via two leads and voltages measured with other leads.  Since an ideal voltage probe draws no current, contact resistances do not interfere with the voltage measurement. 

Differential conductance/differential resistance:  For differential conductance (dI/dV), the applied bias includes a small ac voltage in addition to an applied dc voltage Vdc, and an ac measurement (via a lock-in amplifier) allows the detection of the ac contribution to the current; this allows measurement of dI/dV as a function of Vdc.  Similarly, for differential resistance (dV/dI), the applied bias includes a small ac current in addition to an applied dc current Idc, and an ac measurement via lock-in allows detection of the ac contribution to the voltage; this allows measurement of dV/dI as a function of Idc.  Note that differential resistance measurements are appropriate for examining candidate superconductors, when it is possible that the sample may support nonzero current with zero voltage.
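A toy sketch of why the lock-in scheme yields dI/dV: modulate the bias with a small ac voltage and demodulate the current at the modulation frequency.  The nonlinear I-V curve below is entirely made up for illustration:

```python
import math

def I_of_V(V):
    # made-up nonlinear I-V characteristic, amps
    return 1e-6 * V + 5e-6 * V**3

V_dc, dV = 0.1, 1e-3   # volts: dc bias plus small ac modulation amplitude
N = 10000

# Demodulate the current at the modulation frequency over one full period:
acc = 0.0
for n in range(N):
    phase = 2 * math.pi * n / N
    acc += I_of_V(V_dc + dV * math.sin(phase)) * math.sin(phase)
lockin = 2 * acc / N / dV   # first-harmonic amplitude / dV approximates dI/dV

exact = 1e-6 + 3 * 5e-6 * V_dc**2   # analytic dI/dV at V_dc
print(lockin, exact)                # agree up to (dV)^2 corrections
```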

Hall effect:  By measuring longitudinal and transverse resistance (Rxx = Vxx/Ix, Rxy = Vxy/Ix) in the presence of a perpendicular magnetic field Bz, it is possible to infer the sign of the charge carriers, charge mobility, and carrier density (assuming an isotropic single-band conductor). 
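A hedged sketch of the single-band analysis: for a film of thickness t, the carrier density is n = Bz/(e Rxy t) and the mobility is μ = 1/(n e ρxx).  All numbers below are invented for illustration:

```python
e = 1.602e-19      # elementary charge, C

B_z    = 1.0       # tesla
t      = 100e-9    # film thickness, m (illustrative)
R_xy   = 2.0       # ohms, transverse (Hall) resistance at B_z (illustrative)
rho_xx = 4.0e-6    # ohm*m, longitudinal resistivity (illustrative)

n_3d = B_z / (e * R_xy * t)      # carrier density, 1/m^3
mu   = 1 / (n_3d * e * rho_xx)   # mobility, m^2/(V s)
print(f"n = {n_3d:.3g} /m^3, mobility = {mu * 1e4:.0f} cm^2/Vs")
```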

Tunneling spectroscopy:  In a tunnel junction (between a conducting sample and a normal metal probe electrode), at zero temperature the differential tunneling conductance dI/dV is proportional to the electronic density of states of the probe at its Fermi energy and the density of states of the sample at E = EF,sample − eVdc, where Vdc is the bias voltage of the probe relative to the sample.  (For a superconducting probe, the probe density of states is very sharp but is also shifted relative to the normal state EF because of the superconducting energy gap.)

Related

Inelastic electron tunneling spectroscopy (IETS): Conventionally, in tunneling spectroscopy, when the bias energy scale eVdc crosses the energy ℏω required to inelastically excite an excitation of the sample, this adds a possible path for electron transport.  The result is a kink in I-V, equivalently a step in dI/dV vs. Vdc, and therefore a peak in d2I/dV2 (at positive Vdc) at Vdc = ℏω/e. A real excitation of the sample should result in antisymmetric d2I/dV2 features at Vdc = ±ℏω/e.  This approach has been used to identify vibrations in molecules, optical phonons in solids, and also magnetic excitations in solids.  The IETS features are broadened by the finite electronic temperature (kBT), so cryogenic temperatures are best suited for this technique.
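A short numerical sketch of the IETS bookkeeping (the mode energy is illustrative, a C-H stretch near 2900 cm⁻¹; the 5.4 kBT factor is the standard thermal-smearing result for IETS line widths):

```python
hc_eVcm = 1.2398e-4   # eV*cm, converts wavenumber (1/cm) to eV
k_B     = 8.617e-5    # Boltzmann constant, eV/K

mode_wn = 2900.0              # vibrational energy in 1/cm (illustrative)
E_mode = mode_wn * hc_eVcm    # eV
print(f"peaks at V_dc = +/- {E_mode * 1e3:.0f} mV")

# Thermal broadening ~5.4 k_B T sets why cryogenic operation is needed:
for T in (300.0, 4.2):
    print(f"broadening at {T} K: {5.4 * k_B * T * 1e3:.1f} mV")
```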


Thermodynamic and thermal measurements

Specific heat:  A small amount of thermal energy is added to a sample via a heater, and the temperature rise of the sample is measured using a local thermometer.  Because of the relationship between specific heat and entropy (Cp = T(∂S/∂T)|p), the specific heat as a function of temperature may be used to infer entropy.  First-order phase transitions show up as a huge feature in specific heat vs. temperature, since the entropy is discontinuous (there is a latent heat) across a first-order transition.  Second-order phase transitions show up as a singular feature (discontinuity) in heat capacity vs. temperature because (∂S/∂T) is discontinuous across such a transition, and will show critical fluctuations approaching the transition temperature.  The specific heat of metals is linear in T at low temperatures, and its coefficient is used to infer the electronic density of states at the Fermi level.
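Inverting Cp = T(∂S/∂T) gives S(T) = ∫ (Cp/T') dT' from 0 to T, which is how entropy is extracted from measured heat capacity in practice.  A minimal numerical sketch, using the Sommerfeld form C = γT with a made-up γ so the answer is checkable analytically:

```python
import numpy as np

gamma = 1.0e-3                     # hypothetical Sommerfeld coefficient, J/(mol K^2)
T = np.linspace(1e-3, 10.0, 5001)  # start just above T = 0 to avoid dividing by zero
C = gamma * T                      # low-T electronic specific heat, C = gamma*T

# S(T) = integral_0^T C(T')/T' dT' via a cumulative trapezoid rule
integrand = C / T
S = np.concatenate(([0.0],
    np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))))

print(S[-1])                   # numerically integrated entropy at 10 K
print(gamma * (T[-1] - T[0]))  # analytic result: for C = gamma*T, S = gamma*T
```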

Related

Differential scanning calorimetry (DSC): The heat flow required to change the sample temperature (relative to a reference) is measured as the temperature is scanned.  Intended to reveal phase changes within the material.

Thermal conductivity: A known thermal energy current is applied through a sample, and the temperature drop across the sample is measured using local thermometers.  This is a measure of the transport of energy by all mobile excitations in the material.  In conductors, charge carriers are expected to transport an amount of energy proportional to their specific heat, leading in metals to the Wiedemann-Franz relation.
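The Wiedemann-Franz relation says κ/(σT) equals the Lorenz number L0 = (π²/3)(kB/e)², a combination of fundamental constants.  A quick numerical check, plus a rough comparison with copper (handbook-level conductivity; the measured thermal conductivity of Cu is about 400 W/(m K)):

```python
import math

kB = 1.380649e-23      # Boltzmann constant (J/K)
e  = 1.602176634e-19   # elementary charge (C)

# Sommerfeld value of the Lorenz number: kappa/(sigma*T) = L0 = (pi^2/3)(kB/e)^2
L0 = (math.pi ** 2 / 3.0) * (kB / e) ** 2
print(L0)                          # ~2.44e-8 W ohm / K^2

# Rough check against copper at room temperature
sigma_cu, T = 5.96e7, 300.0        # conductivity (S/m) and temperature (K)
print(L0 * sigma_cu * T)           # ~440 W/(m K), close to the measured ~400
```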

Thermal expansion: Changes in sample dimensions as a function of temperature are measured, giving insights into material structure and bonding.  Typically, thermal expansion arises from the anharmonicity of the interatomic potential, and it is therefore related to nonlinearities in the properties of phonons (see the Grüneisen parameter).

Thermopower/Seebeck coefficient:  For the absolute Seebeck response, the voltage developed across a sample is measured as a function of the temperature difference imposed across it.  Electronic excitations (and phonons) tend to diffuse away from the hot side.  The sign of the Seebeck response generally depends on the sign of the charge carriers (electron-like or hole-like).  The Seebeck response in a conductor is proportional to the energy dependence of the conductivity (and hence of the mean free path) of the carriers near the Fermi energy.

Nernst-Ettingshausen effect:  In a Hall-like geometry, the transverse voltage across a sample Vxy is proportional to the temperature gradient along the sample dT/dx and the mutually perpendicular magnetic field Bz, so that the Nernst coefficient is defined as ν = (Exy/Bz)/(dT/dx).  This gives information about the transverse scattering of heat-carrying excitations in the presence of a magnetic field.
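Extracting ν from raw measured quantities is a one-liner once the transverse voltage is converted to a field Ey = Vxy/w.  A back-of-envelope sketch with entirely hypothetical numbers:

```python
# Hypothetical measured quantities in a Nernst geometry
Vxy  = 2.0e-6   # transverse voltage (V)
w    = 2.0e-3   # sample width (m), so Ey = Vxy / w
Bz   = 1.0      # out-of-plane magnetic field (T)
dTdx = 5.0e2    # longitudinal gradient (K/m), e.g. 1 K across 2 mm

nu = (Vxy / w) / Bz / dTdx   # nu = (Ey/Bz)/(dT/dx)
print(nu)                    # Nernst coefficient, in V/(K T)
```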


Magnetic measurements

Magnetization: Measurements of M vs. H may be performed using SQUID-based and other magnetometers, though knowledge of sample dimensions and geometry is required.  Characteristic features of M are expected for certain material types.  For example, near zero field, a superconductor is expected to show perfect diamagnetism.  Often measurements are also made of M vs. T at fixed H, comparing field-cooled and zero-field-cooled responses.  Saturation of M vs. H at low temperatures and high fields can reveal the magnetic state of elements hosting local magnetic moments.
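A standard analysis of such data for local-moment systems is a Curie-Weiss fit: in the paramagnetic regime χ = C/(T − θ), so 1/χ is linear in T and a line fit recovers the Curie constant and Weiss temperature.  A sketch on synthetic data (parameters invented):

```python
import numpy as np

# Synthetic Curie-Weiss susceptibility, chi = C/(T - theta); made-up parameters
C_cw, theta = 0.5, 25.0              # Curie constant and Weiss temperature (K)
T = np.linspace(100.0, 300.0, 50)
chi = C_cw / (T - theta)

# 1/chi is linear in T: 1/chi = T/C_cw - theta/C_cw, so fit a straight line
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
print(1.0 / slope)            # recovers the Curie constant, 0.5
print(-intercept / slope)     # recovers the Weiss temperature, 25.0 K
```

The sign of the fitted θ is itself informative: positive suggests ferromagnetic correlations, negative antiferromagnetic.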

Vibrating sample magnetometry (VSM): a particular type of magnetometer that vibrates the sample back and forth through pickup coils.

AC susceptibility:  An oscillating component of H is applied and the change in M is measured.

Nuclear magnetic resonance (NMR):  performed on liquids (for molecules) or in the solid state.  An applied magnetic field provides a Zeeman splitting of the nuclear spin states; radio frequency pulse sequences (and continuous-wave methods) are used to determine nuclear spin properties (and, because of hyperfine couplings, to provide information about electronic states).  There are specific effects in superconductors (e.g., the Knight shift).  Care must be taken with conducting samples, as rf fields don't necessarily penetrate into the bulk of the material (skin depth).
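The Zeeman splitting sets the resonance (Larmor) frequency f = (γ/2π)B.  For protons this is the familiar origin of NMR magnet naming conventions:

```python
# Larmor frequency f = (gamma/2pi) * B for protons (1H)
gamma_over_2pi_H = 42.577478518e6   # 1H gyromagnetic ratio / 2pi, in Hz/T (CODATA)
B = 9.4                             # a common NMR magnet field (T)

f = gamma_over_2pi_H * B
print(f / 1e6)   # ~400 MHz: why a 9.4 T instrument is sold as a "400 MHz" NMR
```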

Electron paramagnetic resonance (EPR) or electron spin resonance (ESR):  An applied magnetic field provides a Zeeman splitting of the electron spin states, and microwave pulse sequences (and continuous-wave methods) are used for spectroscopy of these levels.  Works best in insulating materials with unpaired electrons.  Particularly handy for determining the g factors of local magnetic moments, which are affected by crystal fields (the local chemical bonding environment) at the spin-carrying atoms. 
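Extracting g from an EPR measurement is just the resonance condition hν = gμBB solved for g.  With typical (here hypothetical) X-band numbers:

```python
h   = 6.62607015e-34     # Planck constant (J s)
muB = 9.2740100783e-24   # Bohr magneton (J/T)

def g_factor(freq_hz, B_res):
    """Resonance condition h*nu = g * muB * B, solved for g."""
    return h * freq_hz / (muB * B_res)

# Typical X-band numbers (hypothetical measurement): 9.5 GHz, resonance at 0.339 T
print(g_factor(9.5e9, 0.339))   # ~2.00, close to the free-electron value
```

Deviations of g from ~2.00 are precisely the crystal-field information mentioned above.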

Ferromagnetic resonance (FMR): Conventionally, a radio frequency/microwave drive is applied to make the ferromagnetic magnetization M of a material precess around an external magnetic field.  Gives information about the magnetization dynamics and damping.  Recently, FMR in small devices has been driven via spin currents (from the spin Hall effect/spin-orbit torques or  spin transfer torques).

Mössbauer spectroscopy: This is really a nuclear physics-based technique, but given that the most famous Mössbauer material is iron, it has relevance for magnetism.  Gamma-ray spectroscopy using the Mössbauer effect (recoil-free emission and absorption, with momentum taken up by the lattice as a whole rather than by individual atoms) gives extremely precise energetic information about the nuclear environment of the particular isotopes, including hyperfine interactions.

Muon spin spectroscopy (μSR): Muons produced via an accelerator are implanted into or transmitted through a material of interest. Decay of positive muons leads to emission of positrons, with the directional asymmetry of emission related to the spin state of the muon. These measurements give information about the magnetic environment within the material. Does not require pulsed fields.


Other techniques to assess composition

Secondary ion mass spectrometry (SIMS): Material is sputtered away from the sample, and the fragments are analyzed using mass spectrometry (e.g., ionized fragments are accelerated and curved in a magnetic field for detection, to determine their charge to mass ratio).

Inductively coupled plasma mass spectrometry (ICP-MS): Using an inductively coupled plasma source to ionize sample material for MS.

Atomic emission spectroscopy (AES): Material is heated or otherwise excited, and the emission spectra of the products is measured.  Modern version of old approach of looking at the color of flame produced by a bit of material.

Rutherford backscattering spectrometry (RBS): Ions (protons, alpha particles) are fired at the sample material and back-scattered ions are detected; can give depth-dependent compositional information.

Thermogravimetric analysis (TGA): Destructive technique.  The sample is placed in a sensitive balance and heated through its decomposition, and the sample is weighed as the temperature is swept.  Different breakdown products are produced at different temperatures.  Often combined with mass spectrometry to determine the molecular weight of the evolved products.


Other surface characterization methods

Helium atom scattering (HAS):  Diffraction of helium atoms off surfaces.  Extremely surface sensitive.

Field ion microscopy (FIM): A sharp tip is biased up to a high voltage.  Gas molecules impinge on the tip, ionize due to the strong electric field, and are repelled away to a detection screen.  Amazingly, this can give atomically precise information about the configuration of atoms at the tip.