Over the years I've written quite a few posts that try to explain physics concepts relevant to condensed matter/nano topics. I've thought about compiling some edited (more likely completely rewritten) version of these as a primer for science journalists. Here are the originals, collected together in one meta-post, since many current readers likely never saw them the first time around.
What is temperature?
What is chemical potential?
What is mass?
What are quasiparticles?
What is effective mass?
What is a phonon?
What is a plasmon?
What are magnons?
What are skyrmions?
What are excitons?
What is quantum coherence?
What are universal conductance fluctuations?
What is a metal?
What is a bad metal? What is a strange metal?
What are liquid crystals?
What is a phase of matter?
About phase transitions....
(effectively) What is mean-field theory?
About reciprocal space.... About spatial periodicity.
What is band theory?
What is a crystal?
What is a time crystal?
What is spin-orbit coupling?
About graphene, and more about graphene
About noise, part one, part two (thermal noise), part three (shot noise), part four (1/f noise)
What is inelastic electron tunneling spectroscopy?
What is demagnetization cooling?
About memristors....
What is a functional? (see also this)
What is density functional theory? Part 2 Part 3
What are the Kramers-Kronig relations?
What is a metamaterial?
What is a metasurface?
What is the Casimir effect?
About exponential decay laws
About hybridization
About Fermi's Golden Rule
A blog about condensed matter and nanoscale physics. Why should high energy and astro folks have all the fun?
Tuesday, February 28, 2017
Tuesday, February 21, 2017
In memoriam: Millie Dresselhaus
Millie Dresselhaus has passed away at 86. She was a true giant, despite her diminutive stature. I don't think anything I could write would be better than the MIT write-up linked in the first sentence. It was great to have had the opportunity to interact with her on multiple occasions and in multiple roles, and both nanoscience in particular and the scientific community in general will be poorer without her enthusiasm, insights, and mentoring. (One brief anecdote to indicate her work ethic: She told me once that she liked to review on average something like one paper every couple of days.)
Metallic hydrogen?
There has been a flurry of news lately about the possibility of achieving metallic hydrogen in the lab. The quest for metallic hydrogen is a fun story with interesting characters and gadgets - it would be a great topic for an episode of Nova or Scientific American Frontiers. In brief FAQ form (because real life is very demanding right now):
Why would this be a big deal? Apart from the fact that it's been sought for a long time, there are predictions that metallic hydrogen could be a room temperature superconductor (!) and possibly even metastable once the pressure needed to get there is removed.
Isn't hydrogen a gas, and therefore an insulator? Sure, at ambient conditions. However, there is very good reason to believe that if you took hydrogen and cranked up the density sufficiently (by squeezing it), it would actually become a metal.
What do you mean by a metal? Do you mean a ductile, electrically conductive solid? Yes on the electrically conductive part, at least. From the chemistry/materials perspective, a metal often describes a system where the electrons are delocalized - shared between many, many ions/nuclei. From the physics perspective (see here), a metal is a system where the electrons have "gapless excitations" - it's possible to create excitations of the electrons (moving an electron from a filled state to an empty state of different energy and momentum) down to arbitrarily low energies. That's why the electrons in a metal can respond to an applied voltage by flowing as a current.
What is the evidence that hydrogen can become a metal at high densities? Apart from recent experiments and strong theoretical arguments, the observation that Jupiter (for example) has a whopping magnetic field is very suggestive.
How do you get from a diatomic, insulating gas to a metal? You squeeze. While it was originally hoped that you would only need around 250,000 atmospheres of pressure to get there, it now seems like around 5 million atmospheres is more likely. As the atoms are forced to be close together, it is easier for electrons to hop between the atoms (for experts, a larger tight-binding hopping matrix element and broader bands), and because of the Pauli principle the electrons are squeezed to higher and higher kinetic energies. Both trends push toward metal formation.
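The band-broadening part of that argument can be illustrated with a toy one-dimensional tight-binding model (a sketch for intuition only, not a calculation from any of the papers above): the dispersion of a 1D chain is \(E(k) = -2t\cos(ka)\), so the bandwidth is \(4t\), and squeezing the atoms closer together increases the hopping \(t\):

```python
import numpy as np

def band_energies(t, a=1.0, n_k=2001):
    """1D tight-binding dispersion E(k) = -2 t cos(k a), sampled across the Brillouin zone."""
    k = np.linspace(-np.pi / a, np.pi / a, n_k)
    return -2.0 * t * np.cos(k * a)

def bandwidth(t):
    """Full energy width of the band; for the 1D chain this is 4t."""
    e = band_energies(t)
    return e.max() - e.min()

# Squeezing the lattice increases the hopping t, which broadens the band:
for t in (0.5, 1.0, 2.0):
    print(f"t = {t:.1f} -> bandwidth = {bandwidth(t):.2f}")
```

Once bands become broad enough to overlap, there is no gap left between filled and empty states, which is the metallic situation described above.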
Yeah, but how do you squeeze that hard? Well, you could use a light gas gun to ram a piston into a cylinder full of liquid hydrogen like these folks back when I was in grad school. You could use a whopping pulsed magnetic field like a z-pinch to compress a cylinder filled with hydrogen, as suggested here (pdf) and reported here. Or, you could put hydrogen in a small, gasketed volume between two diamond facets, and very carefully turn a screw that squeezes the diamonds together. That's the approach taken by Dias and Silvera, which prompted the recent kerfuffle.
How can you tell it's become a metal? Ideally you'd like to measure the electrical conductivity by, say, applying a voltage and measuring the resulting current, but it can be very difficult to get wires into any of these approaches for such measurements. Instead, a common approach is to use optical techniques, which can be very fast. You know from looking at a (silvered or aluminized) mirror that metals are highly reflective. The ability of electrons in a metal to flow in response to an electric field is responsible for this, and the reflectivity can be analyzed to understand the conductivity.
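The connection between reflectivity and conductivity can be sketched with the simple Drude model (an illustrative toy with made-up parameters, not values for hydrogen or any real material): the dielectric function \(\epsilon(\omega) = 1 - \omega_{p}^{2}/(\omega^{2} + i\omega/\tau)\) gives a normal-incidence reflectivity near unity below the plasma frequency \(\omega_{p}\):

```python
import numpy as np

def drude_reflectivity(omega, omega_p, tau):
    """Normal-incidence reflectivity of a Drude metal.

    eps(w) = 1 - wp^2 / (w^2 + i w / tau);  n = sqrt(eps);
    R = |(1 - n) / (1 + n)|^2.
    """
    eps = 1.0 - omega_p**2 / (omega**2 + 1j * omega / tau)
    n = np.sqrt(eps)
    return float(np.abs((1.0 - n) / (1.0 + n)) ** 2)

# Made-up parameters in units of the plasma frequency:
wp, tau = 1.0, 100.0
print(drude_reflectivity(0.1 * wp, wp, tau))  # well below wp: R close to 1
print(drude_reflectivity(3.0 * wp, wp, tau))  # well above wp: nearly transparent
```

Measuring how the reflectivity turns on as a function of photon energy is one way experiments back out an estimate of the carrier density and conductivity.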
So, did they do it? Maybe. The recent result by Dias and Silvera has generated controversy - see here for example. Reproducing the result would be a big step forward. Stay tuned.
Sunday, February 12, 2017
What is a time crystal?
Recall a (conventional, real-space) crystal involves a physical system with a large number of constituents spontaneously arranging itself in a way that "breaks" the symmetry of the surrounding space. By periodically arranging themselves, the atoms in an ordinary crystal "pick out" particular length scales (like the spatial period of the lattice) and particular directions.
Back in 2012, Frank Wilczek proposed the idea of time crystals, here and here, for classical and quantum versions, respectively. The original idea in a time crystal is that a system with many dynamical degrees of freedom can, in its ground state, spontaneously break the smooth time translation symmetry that we are familiar with. Just as a conventional spatial crystal would have a certain pattern of, e.g., density that repeats periodically in space, a time crystal would spontaneously repeat its motion periodically in time. For example, imagine a system that, somehow while in its ground state, rotates at a constant rate (as described in this viewpoint article). In quantum mechanics involving charged particles, this is actually easier to think about in some ways. [As I wrote about back in the ancient past, the Aharonov-Bohm phase implies that you can have electrons producing persistent current loops in the ground state in metals.]
The "ground state" part of this was not without controversy. There were proofs that this kind of spontaneous periodic ground-state motion is impossible in classical systems. There were proofs that this is also a challenge in quantum systems. [Regarding persistent currents, this gets into a definitional argument about what is a true time crystal.]
Now people have turned to the idea that one can have (with proper formulation of the definitions) time crystals in driven systems. Perhaps it is not surprising that driving a system periodically can result in periodic response at integer multiples of the driving period, but there is more to it than that. Achieving some kind of steady-state with spontaneous time periodicity and a lack of runaway heating due to many-body interacting physics is pretty restrictive. A good write-up of this is here. A theoretical proposal for how to do this is here, and the experiments that claim to demonstrate this successfully are here and here. This is another example of how physicists are increasingly interested in understanding and classifying the responses of quantum systems driven out of equilibrium (see here and here).
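The period-doubling at the heart of those driven experiments can be cartooned with a single spin flipped by one \(\pi\) pulse per drive period (a deliberately oversimplified sketch: the real experiments need many-body interactions to make the subharmonic response rigid against pulse imperfections, which this one-spin toy cannot show):

```python
import numpy as np

# A single spin-1/2 hit by one pi pulse (rotation about x) per drive
# period T. Its magnetization <sz> repeats only every 2T: a cartoon of
# subharmonic response. Real discrete time crystals need many-body
# interactions to make this rigid against pulse imperfections.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pulse(theta):
    """Spin rotation by angle theta about the x axis."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sx

psi = np.array([1.0, 0.0], dtype=complex)  # start spin-up
U = pulse(np.pi)                           # one ideal pi pulse per period
mags = []
for _ in range(4):
    psi = U @ psi
    mags.append(float(np.real(psi.conj() @ sz @ psi)))
print(mags)  # sign alternates each period: the response repeats every 2T
```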
Sunday, February 05, 2017
Losing a colleague and friend - updated
Blogging is taking a back seat right now. I'm only posting because I know some Rice connections and alumni read here and may not have heard about this. Here is a longer article, though I don't know how long it will be publicly accessible.
Update: This editorial was unexpected (at least by me) and much appreciated. There is also a memorial statement here.
Update 2: The Houston Chronicle editorial is now behind a pay-wall. I suspect they won't mind me reproducing it here:
"If I have seen further it is by standing on the shoulders of giants."
Isaac Newton was not the first to express this sentiment, though he was perhaps the most brilliant. But even a man of his stature knew that he only peered further into the secrets of our universe because of the historic figures who preceded him.
Those giants still walk among us today. They work at the universities, hospitals and research laboratories that dot our city. They explore the uncharted territory of human knowledge, their footsteps laying down paths that lead future generations.
Dr. Marjorie Corcoran was one of those giants. The Rice University professor had spent her career uncovering the unknown - the subatomic levels where Newton's physics fall apart. She was killed after being struck by a Metro light rail train last week.
Corcoran's job was to ask the big questions about the fundamental building blocks and forces of the universe. Why does matter have mass? Why does physics act the way it does?
She worked to understand reality and unveil eternity. To the layperson, her research was a secular contemplation of the divine.
Our city spent years of work and millions of dollars preparing for the super-human athletic feats witnessed at the Super Bowl. But advertisers didn't exactly line up to sponsor Corcoran - and for good reason. Anyone can marvel in a miraculous catch. It is harder to grasp the wonder of a subatomic world, the calculations that bring order to the universe, the research that hopes to explain reality itself.
Only looking backward can we fully grasp the incredible feats done by physicists like Corcoran.
"A lot of people don't have a very long timeline. They're thinking what's going to happen to them in the next hour or the next day, maybe the next week," Andrea Albert, one of Corcoran's former students, told the editorial board. "No, we're laying the foundation so that your grandkids are going to have an awesome, cool technology. I don't know what it is yet. But it is going to be awesome."
Houston is already home to some of the unexpected breakthroughs of particle physics. Accelerators once created to smash atoms now treat cancer patients with proton therapy.
All physics is purely academic - until it isn't. From the radio to the atom bomb, modern civilization is built on the works of giants.
But the tools that we once used to craft the future are being left to rust.
Federal research funding has fallen from its global heights. Immigrants who help power our labs face newfound barriers. Our nation shouldn't forget that Albert Einstein and Edward Teller were refugees.
"How are we going to foster the research mission of the university?" Rice University President David Leebron posed to the editorial board last year. "I think as we see that squeeze, you look at the Democratic platform or the Republican platform or the policies out of Austin, I worry about the level of commitment."
In a competitive field, Corcoran went out of her way to help new researchers. In a field dominated by men, she stood as a model for young women. And in a nation focused on quarterly earnings, her work was dedicated to the next generation.
Marjorie Corcoran was a giant. The world stands taller because of her.
Sunday, January 29, 2017
What is a crystal?
(I'm bringing this up because I want to write about "time crystals", and to do that....)
The key physics points: When placed together under the right conditions, the building blocks of a crystal spontaneously join together and assemble into the crystal structure. While empty space has the same properties in every location ("invariance under continuous translation") and in every orientation ("invariance under continuous rotation"), the crystal environment doesn't. Instead, the crystal has discrete translational symmetry (each lattice site is equivalent), and other discrete symmetries (e.g., mirror symmetry about some planes, or discrete rotational symmetries around some axes). This kind of spontaneous symmetry breaking is so general that it happens in all kinds of systems, like plastic balls floating on reservoirs. The spatial periodicity has all kinds of consequences, like band structure and phonon dispersion relations (how lattice vibration frequencies depend on vibration wavelengths and directions).
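The difference between continuous and discrete translational symmetry can be made concrete with a toy lattice-periodic density (the cosine profile here is made up purely for illustration): shifting by a full lattice constant leaves it unchanged, while a generic shift does not:

```python
import numpy as np

a = 1.0  # lattice constant (arbitrary units)

def rho(x):
    """A made-up lattice-periodic 'density' with period a."""
    return 1.0 + 0.5 * np.cos(2 * np.pi * x / a)

x = np.linspace(0.0, 10.0 * a, 1000, endpoint=False)

# Translation by a full lattice vector is a symmetry...
shift_lattice = np.max(np.abs(rho(x + a) - rho(x)))
# ...but a generic (incommensurate) shift is not:
shift_generic = np.max(np.abs(rho(x + 0.3 * a) - rho(x)))
print(shift_lattice, shift_generic)  # ~0 versus order unity
```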
Wednesday, January 25, 2017
A book recommendation
I've been very busy lately, hence a slow down in posting, but in the meantime I wanted to recommend a book. The Pope of Physics is the recent biography of Enrico Fermi from Gino Segrè and Bettina Hoerlin. The title is from Fermi's nickname as a young physicist in Italy - he and his colleagues (the "Via Panisperna boys", named for the address of the Institute of Physics in Rome) took to giving each other nicknames, and Fermi's was "the Pope" because of his apparent infallibility. The book is compelling, gives insights into Fermi and his relationships, and includes stories about that wild era of physics that I didn't recall hearing before. (For example, when trying to build the first critical nuclear pile at Stagg Field in Chicago, there was a big contract dispute with Stone and Webster, the firm hired by the National Defense Research Council to do the job. When it looked like the dispute was really going to slow things down, Fermi suggested that the physicists themselves just build the thing, and they put it together from something like 20,000 graphite blocks in about two weeks.)
While it's not necessarily as page-turning as The Making of the Atomic Bomb, it's a very interesting biography that offers insights into this brilliant yet emotionally reserved person. It's a great addition to the bookshelf. For reference, other biographies that I suggest are True Genius: The Life and Science of John Bardeen, and the more technical works No Time to be Brief: A Scientific Biography of Wolfgang Pauli and Subtle is the Lord: The Science and Life of Albert Einstein.
Monday, January 16, 2017
What is the difference between science and engineering?
In my colleague Rebecca Richards-Kortum's great talk at Rice's CUWiP meeting this past weekend, she spoke about her undergrad degree in physics at Nebraska, her doctorate in medical physics from MIT, and how she ended up doing bioengineering. As a former undergrad engineer who went the other direction, I think her story did a good job of illustrating the distinctions between science and engineering, and the common thread of problem-solving that connects them.
In brief, science is about figuring out the ground rules about how the universe works. Engineering is about taking those rules, and then figuring out how to accomplish some particular task. Both of these involve puzzle-like problem-solving. As a physics example on the experimental side, you might want to understand how electrons lose energy to vibrations in a material, but you only have a very limited set of tools at your disposal - say voltage sources, resistors, amplifiers, maybe a laser and a microscope and a spectrometer, etc. Somehow you have to formulate a strategy using just those tools. On the theory side, you might want to figure out whether some arrangement of atoms in a crystal results in a lowest-energy electronic state that is magnetic, but you only have some particular set of calculational tools - you can't actually solve the complete problem and instead have to figure out what approximations would be reasonable, keeping the essentials and neglecting the extraneous bits of physics that aren't germane to the question.
Engineering is the same sort of process, but goal-directed toward an application rather than specifically the acquisition of new knowledge. You are trying to solve a problem, like constructing a machine that functions like a CPAP, but has to be cheap and incredibly reliable, and because of the price constraint you have to use largely off-the-shelf components. (Here's how it's done.)
People act sometimes like there is a vast gulf between scientists and engineers - like the former don't have common sense or real-world perspective, or like the latter are somehow less mathematical or sophisticated. Those stereotypes even come through in pop culture, but the differences are much less stark than that. Both science and engineering involve creativity and problem-solving under constraints. Often which one is for you depends on what you find most interesting at a given time - there are plenty of scientists who go into engineering, and engineers can pursue and acquire basic knowledge along the way. Particularly in the modern, interdisciplinary world, the distinction is less important than ever before.
Friday, January 13, 2017
Brief items
What with the start of the semester and the thick of graduate admissions season, it's been a busy week, so rather than an extensive post, here are some brief items of interest:
- We are hosting one of the APS Conferences for Undergraduate Women in Physics this weekend. Welcome, attendees! It's going to be a good time.
- This week our colloquium speaker was Jim Kakalios of the University of Minnesota, who gave a very fun talk related to his book The Physics of Superheroes (an updated version of this), as well as a condensed matter seminar regarding his work on charge transport and thermoelectricity in amorphous and nanocrystalline semiconductors. His efforts at popularizing physics, including condensed matter, are great. His other books are The Amazing Story of Quantum Mechanics, and the forthcoming The Physics of Everyday Things. That last one shows how an enormous amount of interesting physics is embedded and subsumed in the routine tasks of modern life - a point I've mentioned before.
- Another seminar speaker at Rice this week was John Biggins, who explained the chain fountain (original video here, explanatory video here, relevant paper here).
- Speaking of videos, here is the talk I gave last April back at the Pittsburgh Quantum Institute's 2016 symposium, and here is the link to all the talks.
- Speaking of quantum mechanics, here is an article in the NY Review of Books by Steven Weinberg on interpretations of quantum mechanics. While I've seen it criticized online as offering nothing new, I found it to be clearly written and articulated, and that can't always be said for articles about interpretations of quantum mechanics.
- Speaking of both quantum mechanics interpretations and popular writings about physics, here is John Cramer's review of David Mermin's recent collection of essays, Why Quark Rhymes with Pork: And other Scientific Diversions (spoiler: I agree with Cramer that Mermin is wrong on the pronunciation of "quark".) The review is rather harsh regarding quantum interpretation, though perhaps that isn't surprising given that Cramer has his own view on this.
Sunday, January 08, 2017
Physics is not just high energy and astro/cosmology.
A belated happy new year to my readers. Back in 2005, nearly every popularizer of physics on the web, television, and bookshelves was either a high energy physicist (mostly theorists) or someone involved in astrophysics/cosmology. Often these people were presented, either deliberately or through brevity, as representing the whole discipline of physics. Things have improved somewhat, but the overall situation in the media today is not that different, as exemplified by the headline of this article, and noticed by others (see the fourth paragraph here, at the excellent blog by Ross McKenzie).
For example, consider Edge.org, which has an annual question that they put to "the most complex and sophisticated minds". This year the question was, what scientific term or concept should be more widely known? It's a very interesting piece, and I encourage you to read it. They got responses from 206 contributors (!). By my estimate, about 31 of those would likely say that they are active practicing physicists, though definitions get tricky for people working on "complexity" and computation. Again, by my rough count, from that list I see 12-14 high energy theorists (depending on whether you count Yuri Milner, who is really a financier, or Gino Segrè, who is an excellent author but no longer an active researcher) including Sabine Hossenfelder, one high energy experimentalist, 10 people working on astrophysics/cosmology, four working on some flavor of quantum mechanics/quantum information (including the blogging Scott Aaronson), one on biophysics/complexity, and at most two on condensed matter physics. Seems to me like representation here is a bit skewed.
Hopefully we will keep making progress on conveying that high energy/cosmology is not representative of the entire discipline of physics....
Thursday, December 29, 2016
Some optimism at the end of 2016
When the news is filled with bleak items, like:
- The deaths of prominent scientists
- The deaths of many notable figures (too many to link in entirety)
- Disturbing news about the environment, and news about news about the environment
Let me make a push for optimism, or at least try to put some things in perspective. There are some reasons to be hopeful. Specifically, look here, at a site called "Our World in Data", produced at Oxford University. These folks use actual numbers to point out that this is, in many ways, the best time in human history to be alive:
- The percentage of the world's population living in extreme poverty is at an all-time low (9.6%).
- The percentage of the population that is literate is at an all-time high (85%), as is the overall global education level.
- Child mortality is at an all-time low.
- The percentage of people enjoying at least some political freedom is at an all-time high.
Tuesday, December 20, 2016
Mapping current at the nanoscale - part 2 - magnetic fields!
A few weeks ago I posted about one approach to mapping out where current flows at the nanoscale, scanning gate microscopy. I had made an analogy between current flow in some system and traffic flow in a complicated city map. Scanning gate microscopy would be analogous to recording the flow of traffic in/out of a city as a function of where you chose to put construction barrels and lane closures. If sampled finely enough, this would give you a sense of where in the city most of the traffic tends to flow.
Of course, that's not how utilities like Google Maps figure out traffic flow maps or road closures. Instead, applications like that track the GPS signals of cell phones carried in the vehicles. Is there a current-mapping analogy here as well? Yes. There is some "signal" produced by the flow of current, if only you can have a sufficiently sensitive detector to find it. That is the magnetic field. Flowing current density \(\mathbf{J}\) produces a local magnetic field \(\mathbf{B}\), thanks to Ampere's law, \(\nabla \times \mathbf{B} = \mu_{0} \mathbf{J}\).
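For a back-of-the-envelope sense of the size of this "signal" (with illustrative numbers, not values from any particular experiment), Ampere's law for a long straight wire gives \(B = \mu_{0} I / (2 \pi r)\):

```python
import numpy as np

mu0 = 4.0e-7 * np.pi   # vacuum permeability, T·m/A
I = 1.0e-6             # 1 microamp of current (illustrative)
r = 100.0e-9           # field sensed 100 nm from the wire (illustrative)

B = mu0 * I / (2.0 * np.pi * r)  # Ampere's law for a long straight wire
print(f"B ≈ {B:.1e} T")  # 2.0e-06 T, i.e., about 2 microtesla
```

Microtesla-scale fields are small, but well within reach of the sensors described below.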
Fortunately, there now exist several different technologies for performing very local mapping of magnetic fields, and therefore the underlying pattern of flowing current in some material or device. One older, established approach is scanning Hall microscopy, where a small piece of semiconductor is placed on a scanning tip, and the Hall effect in that semiconductor is used to sense local \(B\) field.
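The sensing principle can be put in rough numbers (all of them illustrative assumptions, not the specs of an actual probe): the Hall voltage across a slab of thickness \(t\) with carrier density \(n\) is \(V_{H} = I B / (n e t)\), so a dilute semiconductor gives a usable voltage even for small fields:

```python
e = 1.602e-19   # elementary charge, C
I = 1.0e-6      # 1 uA bias current through the Hall cross (assumed)
B = 1.0e-3      # 1 mT field to be sensed (assumed)
n = 1.0e22      # carrier density in m^-3 (dilute semiconductor, assumed)
t = 1.0e-6      # 1 um slab thickness (assumed)

V_H = I * B / (n * e * t)  # Hall voltage
print(f"V_H ≈ {V_H:.1e} V")  # roughly half a microvolt
```

This is why a semiconductor (low \(n\)) rather than a metal (high \(n\)) is the material of choice for the tip.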
Considerably more sensitive is the scanning SQUID microscope, where a tiny superconducting loop is placed on the end of a scanning tip and used to detect incredibly small magnetic fields. As shown in the figure, it is possible to see when current is carried by the edges of a structure rather than by the bulk of the material, for example.
A very recently developed method is to use the exquisitely magnetic-field-sensitive optical properties of particular defects in diamond, NV centers. The second figure (from here) shows examples of the kinds of images that are possible with this approach, looking at the magnetic pattern of data on a hard drive, or magnetic flux trapped in a superconductor. While I have not seen this technique applied directly to current mapping at the nanoscale, it certainly has the needed magnetic field sensitivity. Bottom line: It is possible to "look" at the current distribution in small structures at very small scales by measuring magnetic fields.
[Figure: Scanning SQUID microscope image of the x-current density in a GaSb/InAs structure, showing that the current is carried by the edges. Scale bar is 20 microns.]
[Figure: Scanning NV center microscopy to see magnetic fields. Scale bars are 400 nm.]
Saturday, December 17, 2016
Recurring themes in (condensed matter/nano) physics: Exponential decay laws
It's been a little while (ok, 1.6 years) since I made a few posts about recurring motifs that crop up in physics, particularly in condensed matter and at the nanoscale. Often the reason certain mathematical relationships crop up repeatedly in physics is that they are, deep down, based on underlying assumptions that are very simple. One example common in all of physics is the idea of exponential decay, that some physical property or parameter often ends up having a time dependence proportional to \(\exp(-t/\tau)\), where \(\tau\) is some characteristic timescale.
Why is this time dependence so common? Let's take a particular example. Suppose we are in the remarkable cistern, shown here, that used to store water for the city of Houston. If you go on a tour there (I highly recommend it - it's very impressive.), you will observe that it has remarkable acoustic properties. If you yell or clap, the echo gradually dies out by (approximately) exponential decay, fading to undetectable levels after about 18 seconds (!). The cistern is about 100 m across, and the speed of sound is around 340 m/s, meaning that in 18 seconds the sound you made has bounced off the walls around 61 times. Each time the sound bounces off a wall, it loses some percentage of its intensity (stored acoustic energy).
That idea, that the decrease in some quantity is a fixed fraction of the current size of that quantity, is the key to the exponential decay, in the limit that you consider the change in the quantity from instant to instant (rather than taking place via discrete events). Note that this is also basically the same math that is behind compound interest, though that involves exponential growth.
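The cistern arithmetic and the fixed-fraction-per-bounce idea fit in a few lines. The 5% intensity loss per bounce below is an assumed, purely illustrative number; only the speed of sound, the cistern size, and the 18 s echo come from the discussion above.

```python
import math

# Rough numbers from the post
speed, span = 340.0, 100.0          # speed of sound (m/s), cistern size (m)
rate = speed / span                 # wall bounces per second, ~3.4
print(rate * 18)                    # ~61 bounces during the ~18 s echo

# If each bounce removes a fixed fraction f of the intensity, then after
# n bounces I_n = (1 - f)**n = exp(n * log(1 - f)): already exponential in n,
# and hence in time t = n / rate, with tau = -1 / (rate * log(1 - f)).
f = 0.05                            # assumed 5% intensity loss per bounce
tau = -1.0 / (rate * math.log(1.0 - f))
print(tau)                          # ~5.7 s for this made-up loss fraction
```

Note that the identity \((1-f)^{n} = e^{n \ln(1-f)}\) is the whole story: a fixed fractional loss per event *is* exponential decay, with the decay constant set by the event rate and the loss fraction.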
[Figure: Buffalo Bayou cistern. Photo by Katya Horner.]
Saturday, December 10, 2016
Bismuth superconducts, and that's weird
Many elemental metals become superconductors at sufficiently low temperatures, but not all. Ironically, some of the normal metal elements with the best electrical conductivity (gold, silver, copper) do not appear to do so. Conventional superconductivity was explained by Bardeen, Cooper, and Schrieffer in 1957. Oversimplifying, the idea is that electrons can interact with lattice vibrations (phonons), in such a way that there is a slight attractive interaction between the electrons. Imagine a billiard ball rolling on a foam mattress - the ball leaves trailing behind it a deformation of the mattress that takes some finite time to rebound, and another nearby ball is "attracted" to the deformation left behind. This slight attraction is enough to cause pairing between charge carriers in the metal, and those pairs can then "condense" into a macroscopic quantum state with the superconducting properties we know. The coinage metals apparently have comparatively weak electron-phonon coupling, and can't quite get enough attractive interaction to go superconducting.
Another way you could fail to get conventional BCS superconductivity would be just to have too few charge carriers! In my ball-on-mattress analogy, if the rolling balls are very dilute, then pair formation doesn't really happen, because by the time the next ball rolls by where a previous ball had passed, the deformation is long since healed. This is one reason why superconductivity usually doesn't happen in doped semiconductors.
Superconductivity with really dilute carriers is weird, and that's why the result published recently here by researchers at the Tata Institute is exciting. They were working with bismuth, which is a semimetal in its usual crystal structure, meaning that it has both electrons and holes running around (see here for technical detail), and has a very low concentration of charge carriers, something like \(10^{17}/\mathrm{cm}^{3}\), meaning that the typical distance between carriers is on the order of 30 nm. That's very far, so conventional BCS superconductivity isn't likely to work here. However, at about 500 microKelvin (!), the experimenters see (via magnetic susceptibility and the Meissner effect) that single crystals of Bi go superconducting. Very neat.
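As a sanity check on that quoted spacing, here is a two-line estimate (my own arithmetic, not from the paper): a naive cube-root-of-the-density estimate, plus the slightly more careful Wigner-Seitz-sphere version.

```python
import math

# Back-of-envelope check of the "~30 nm between carriers" claim for a
# carrier density n ~ 1e17 per cm^3, the value quoted above.
n = 1e17 * 1e6                      # convert to carriers per m^3

spacing_cube = n ** (-1.0 / 3.0)    # naive cube-root estimate
r_s = (3.0 / (4.0 * math.pi * n)) ** (1.0 / 3.0)   # Wigner-Seitz radius

print(spacing_cube * 1e9)           # ~21.5 nm
print(2.0 * r_s * 1e9)              # ~26.7 nm -- "on the order of 30 nm" indeed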
They achieve these temperatures through a combination of a dilution refrigerator (possible because of the physics discussed here) and nuclear demagnetization cooling of copper, which is attached to a silver heatlink that contains the Bi crystals. This is old-school ultralow temperature physics, where they end up with several kg of copper getting as low as 100 microKelvin. Sure, this particular result is very far from any practical application, but the point is that this work shows that there likely is some other pairing mechanism that can give superconductivity with very dilute carriers, and that could be important down the line.
Tuesday, December 06, 2016
Suggested textbooks for "Modern Physics"?
I'd be curious for opinions out there regarding available textbooks for "Modern Physics". Typically this is a sophomore-level undergraduate course at places that offer such a class. Often these courses focus on special relativity and "baby quantum", making the bulk of "modern" end in approximately 1930. Ideally it would be great to have a book that also includes topics from the latter half of the 20th century, without having them be too simplistic. Looking around on Amazon, there are a number of choices, but I wonder if I'm missing some diamond in the rough by not using the right search terms, or perhaps there is a new book in development of which I am unaware. The book by Rohlf looks interesting, but the price tag is shocking - a trait shared by many similarly titled works on Amazon. Any suggestions?
Saturday, November 26, 2016
Quantum computing - lay of the land, + corporate sponsorship
Much has been written about quantum computers and their prospects for doing remarkable things (see here for one example of a great primer), and Scott Aaronson's blog is an incredible resource if you want more technical discussions. Recent high profile news this week about Microsoft investing heavily in one particular approach to quantum computation has been a good prompt to revisit parts of this subject, both to summarize the science and to think a bit about corporate funding of research. It's good to see how far things have come since I wrote this almost ten years ago (!!).
Remember, to realize the benefits of general quantum computation, you need (without quibbling over the details) some good-sized set (say 1000-10000) of quantum degrees of freedom, qubits, that you can initialize, entangle to create superpositions, and manipulate in deliberate ways to perform computational operations. On the one hand, you need to be able to couple the qubits to the outside world, both to do operations and to read out their state. On the other hand, you need the qubits to be isolated from the outside world, because when a quantum system becomes entangled with (many) environmental degrees of freedom whose quantum states you aren't tracking, you generally get decoherence - what is known colloquially as the collapse of the wavefunction.
The rival candidates for general purpose quantum computing platforms make different tradeoffs in terms of robustness of qubit coherence and scalability. There are error correction schemes, and implementations that combine several "physical" qubits into a single "logical" qubit that is supposed to be harder to screw up. Trapped ions can have very long coherence times and be manipulated with great precision via optics, but scaling up to hundreds of qubits is very difficult (though see here for a claim of a breakthrough). Photons can be used for quantum computing, but since they fundamentally don't interact with each other under ordinary conditions, some operations are difficult, and scaling is really hard - to quote from that link, "About 100 billion optical components would be needed to create a practical quantum computer that uses light to process information." Electrons in semiconductor quantum dots might be more readily scaled, but coherence is fleeting. Superconducting approaches are the choices of the Yale and UC Santa Barbara groups.
The Microsoft approach, since they started funding quantum computing research, has always been rooted in ideas about topology, perhaps unsurprising since their effort has been led by Michael Freedman. If you can encode quantum information in something to do with topology, perhaps the qubits can be more robust to decoherence. One way to get topology in the mix is to work with particular exotic quantum excitations in 2d that are non-Abelian. That is, if you take two such excitations and move them around each other in real space, the quantum state somehow transforms itself to remember that braiding, including whether you moved particle 2 around particle 1, or vice versa. Originally Microsoft was very interested in the \(\nu = 5/2\) fractional quantum Hall state as an example of a system supporting this kind of topological braiding. Now, they've decided to bankroll the groups of Leo Kouwenhoven and Charlie Marcus, who are trying to implement topological quantum computing ideas using superconductor/semiconductor hybrid structures thought to exhibit Majorana fermions.
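A cartoon of why braiding order can store information: represent the two elementary exchanges of four Ising-type anyons (the kind of non-Abelian excitation conjectured for the \(\nu = 5/2\) state) by their action on the two-dimensional fusion space. The matrices below follow one common textbook convention, overall phases aside; treat the specific entries as illustration, not as the hardware-level operations of any real platform.

```python
import numpy as np

# Braid generators for four Ising anyons acting on their two-dimensional
# fusion space (one standard convention, up to overall phases).
s1 = np.exp(-1j * np.pi / 8) * np.diag([1, 1j])
s2 = np.exp(-1j * np.pi / 8) / np.sqrt(2) * np.array([[1, -1j], [-1j, 1]])

# Both exchanges are unitary operations on the stored quantum state...
print(np.allclose(s1 @ s1.conj().T, np.eye(2)))   # True
print(np.allclose(s2 @ s2.conj().T, np.eye(2)))   # True

# ...but they do not commute: braiding particle 2 around particle 1 is not
# the same as the reverse. That order-dependence is what "non-Abelian"
# means, and it is how the braiding history gets recorded in the state.
print(np.allclose(s1 @ s2, s2 @ s1))              # False
```

The topological protection comes from the fact that only the braid topology, not the detailed paths, determines which unitaries get applied.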
It's worth noting that Microsoft is not the only company investing serious money in quantum computing. Google invested enormously in John Martinis' effort. Intel has put a decent amount of money into a silicon quantum dot effort practically down the hall from Kouwenhoven. This kind of industrial investment does raise some eyebrows, but as long as it doesn't kill publication or hamstring students and postdocs with weird constraints, it's hard to see big downsides. (Of course, Uber and Carnegie Mellon are a cautionary example of how this sort of relationship may not work out well for the relevant universities.)
Monday, November 21, 2016
More short items, incl. postdoc opportunities
Some additional brief items:
- Rice's Smalley-Curl Institute has two competitive, endowed postdoctoral opportunities coming up: the J. Evans Attwell Welch Postdoctoral Fellowship, and the Peter M. and Ruth L. Nicholas Postdoctoral Fellowship in Nanotechnology. The competition is fierce, but they're great awards and come with separate funds for travel and research supplies. Applying requires working with a Rice faculty sponsor; the deadline for applications is June 30, 2017, with an anticipated start date around the beginning of September, 2017.
- This may be completely academic, but my colleagues at Rice's Baker Institute, including former NSF director and Presidential science adviser Neal Lane, have prepared a report with recommendations to the next science adviser regarding the Office of Science and Technology Policy and how to integrate science into policy making. Yeah. Sigh.
- Check out Funsize Physics! It's a repository of education and broader outreach products from NSF investigators, started by Shireen Adenwalla and Jocelyn Bosely, related to their NSF MRSEC.
- Anyone have strong opinions about Academic Analytics? The main questions are whether the quality control on the information is good, and whether the information can actually be useful.
Wednesday, November 16, 2016
short items
A handful of brief items:
- A former colleague, a biologist, has some good advice on writing successful NSF proposals that translates well to other disciplines and agencies.
- An astronomy colleague has a nice page on the actual science behind the much-hyped supermoon business.
- Lately I've found myself recalling a book that I read as part of an undergraduate philosophy of science course twenty-five years ago, The Dilemmas of an Upright Man. It's the story of Max Planck and the compromises and choices he made while trying to preserve German science through two world wars. As the Nazis rose to power and began pressuring government scientific institutions such as the Berlin Academy and the Kaiser Wilhelm Institutes, Planck decided to remain in leadership roles and generally not speak out publicly, in part because he felt that if he gave up his position, only awful people would be left behind, like the ardent Nazi Johannes Stark. These decisions may have preserved German science, but they broke his relationship with Einstein, who never spoke to Planck again from 1937 until Planck's death in 1947. It's a good book and very much worth reading.
Wednesday, November 09, 2016
Lenses from metamaterials
As alluded to in my previous posts on metamaterials and metasurfaces, there have been some recently published papers that take these ideas and do impressive things.
- Khorasaninejad et al. have made a metasurface out of a 2d array of very particularly designed TiO\(_{2}\) posts on a glass substrate. The posts vary in size and shape, and are carefully positioned and oriented on the substrate so that, for light incident from behind the glass, normal to the glass surface, and centered on the middle of the array, the light is focused to a spot 200 microns above the array surface. Each little TiO\(_{2}\) post acts like a sub-wavelength scatterer and imparts a phase on the passing light, so that the whole array together acts like a converging lens. This is very reminiscent of the phased array I'd mentioned previously. For a given array, different colors focus to different depths (chromatic aberration). Impressively, the arrays are designed so that there is no polarization dependence of the focusing properties for a given color.
- Hu et al. have made a different kind of metasurface, using plasmonically active gold nanoparticles on a glass surface. The remarkable achievement here is that the authors have used a genetic algorithm to find a pattern of nanoparticle shapes and sizes that somehow, through phased array magic, produces a metasurface that functions as an achromatic lens - different visible colors (red, green, blue) normally incident on the array focus to the same spot, albeit with a short focal length of a few microns.
- Finally, in more of a 3d metamaterial approach, Krueger et al. have leveraged their ability to create 3d designer structures of porous silicon. The porous silicon frameworks have an effective index of refraction at the desired wavelength. By controllably varying the porosity as a function of distance from the optical axis of the structure, these things can act as lenses. Moreover, because of designed anisotropy in the framework, they can make different polarizations of incident light experience different effective refractive indices and therefore have different focal lengths. Fabrication here is supposed to be considerably simpler than the complicated e-beam lithography needed to accomplish the same goal with 2d metasurfaces.
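To make the flat-lens idea concrete, here is a sketch of the phase profile such a metalens must imprint so that every point on the surface contributes in phase at the focus. Only the 200 micron focal distance comes from the discussion above; the green design wavelength and the radial positions are my own illustrative assumptions.

```python
import numpy as np

lam = 532e-9   # assumed (green) design wavelength; the papers' values may differ
f = 200e-6     # focal length, using the 200 micron figure quoted above

def lens_phase(r):
    """Phase (mod 2*pi) a scatterer at radius r must impart so that light
    from every point of the flat surface arrives at the focus in phase,
    cancelling the extra path length sqrt(r^2 + f^2) - f."""
    return ((2 * np.pi / lam) * (f - np.sqrt(r**2 + f**2))) % (2 * np.pi)

r = np.linspace(0, 100e-6, 5)   # radial positions out to 100 microns
profile = lens_phase(r)

# Check the design: imparted phase plus propagation phase is the same
# (mod 2*pi) at every radius, so all contributions add constructively.
total = (profile + (2 * np.pi / lam) * (np.sqrt(r**2 + f**2) - f)) % (2 * np.pi)
print(total)   # every entry ~0 (or ~2*pi), i.e. equal mod 2*pi
```

The sizes, shapes, and orientations of the posts (or nanoparticles, or porosity profile) are just different physical knobs for realizing a target \(\phi(r)\) like this one.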
Friday, November 04, 2016
What is a metasurface?
As I alluded in my previous post, metamaterials are made out of building blocks, and thanks to the properties of those building blocks and their spatial arrangement, the aggregate system has, on longer distance scales, emergent properties (e.g., optical, thermal, acoustic, elastic) that can be very different from the traits of the individual building blocks. Classic examples are opal and butterfly wing, both of which are examples of structural coloration. The building blocks (silica spheres in opal; chitin structures in butterfly wing) have certain optical properties, but by properly shaping and arranging them, the metamaterial comprising them has brilliant iridescent color very different from that of bulk slabs of the underlying material.
This works because of wave interference of light. Light propagates more slowly in a dielectric, at speed \(c/n(\omega)\), where \(n(\omega)\) is the frequency-dependent index of refraction. Light propagating through some thickness of material will pick up a phase shift relative to light that propagates through empty space. Moreover, additional phase shifts are picked up at interfaces between dielectrics. If you can control the relative phases of light rays that arrive at a particular location, then you can set up constructive interference or destructive interference.
This is precisely the same math that gives you diffraction patterns. You can also do this actively with radio transmitter antennas. If you set up an antenna array and drive each antenna at the same frequency but with a controlled phase relative to its neighbors, you can tune where the waves constructively or destructively interfere. This is the principle behind phased arrays.
An optical metasurface is an interface that has structures on it that impose particular phase shifts on light that either is transmitted through or reflected off the interface. Like a metamaterial and for the same wave interference reasons, the optical properties of the interface on distance scales larger than those structures can be very different than those of the materials that constitute the structures. Bear in mind, the individual structures don't have to be boring - each by itself could have complicated frequency response, like acting as a dielectric or plasmonic resonator. We now have techniques that allow rich fabrication on surfaces with a variety of materials down to scales much smaller than the wavelength of visible light, and we have tremendous computational techniques that allow us to calculate the expected optical response from such structures. Put these together, and those capabilities enable some pretty amazing optical tricks. See here (pdf!) for a good slideshow covering this topic.
[Figure: Controlling the relative phases between antennas in an array lets you steer radiation. By Davidjessop - own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=48304978]
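The steering trick is easy to reproduce numerically. This sketch (my own toy numbers: eight elements at half-wavelength spacing) computes the far-field intensity of a linear phased array and shows that a linear phase ramp across the elements moves the main lobe, with no moving parts.

```python
import numpy as np

def array_intensity(theta, n_elem=8, d_over_lam=0.5, delta=0.0):
    """Normalized far-field intensity of n_elem emitters spaced d apart,
    each driven with an extra phase `delta` relative to its neighbor."""
    kd = 2 * np.pi * d_over_lam
    n = np.arange(n_elem)[:, None]
    field = np.exp(1j * n * (kd * np.sin(theta)[None, :] + delta)).sum(axis=0)
    return np.abs(field) ** 2 / n_elem ** 2

theta = np.radians(np.linspace(-90, 90, 1801))

# Uniform drive: the main lobe points broadside (0 degrees)...
broadside = np.degrees(theta[np.argmax(array_intensity(theta))])

# ...while a phase step delta = -k d sin(30 deg) steers it to 30 degrees.
delta = -2 * np.pi * 0.5 * np.sin(np.radians(30))
steered = np.degrees(theta[np.argmax(array_intensity(theta, delta=delta))])
print(broadside, steered)   # ~0.0 and ~30.0
```

A metasurface does the same bookkeeping passively: the fixed phase each structure imparts plays the role of the drive phase in the array.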