Thursday, December 20, 2012

There is a new result in this week's issue of Nature that is very neat (and my college classmate Young Lee is the PI - small world!). The experiment is an inelastic neutron scattering measurement that looks at a material with the unusual name herbertsmithite, and reports evidence that this material is a "quantum spin liquid". I'll try to break down the physics here into reasonably accessible bite-sized chunks.
First, what is a spin liquid? Imagine having a bunch of localized spins on a lattice. You can picture these like little bar magnets. In this case, the spins are the unpaired d electrons of the copper atoms in the herbertsmithite structure. In general, the spins in a solid (this particular one is an insulator) "talk" to each other via the exchange interaction. What this really means is that there are interactions between the spins so that the spins prefer a particular relative orientation to each other. In this case, the interaction between the electron spins is antiferromagnetic, meaning that for spins on two neighboring Cu atoms, having the spins be oppositely directed saves some energy (17 meV) compared to having the spins aligned. As the temperature is lowered, an ensemble of spins will tend to find whatever configuration minimizes the total energy (the ground state). In a ferromagnet, that will be a state with the spins all aligned with their neighbors. In a perfect antiferromagnet, that would be a state where the spins are all antialigned with their neighbors. Both of these are ordered ground states, in that there is some global arrangement of the spins (with a particular symmetry) that wins at T = 0. The problem in herbertsmithite is, because of the spatial arrangement of the Cu atoms (in a Kagome lattice), it's impossible to have every spin antialigned with all of its neighbors. This is an example of geometric frustration. As a result, even as T gets very low, it would appear that the spins in herbertsmithite never order, even though they interact with their neighbors very strongly. This is an analog to the liquid state, where the molecules of a liquid clearly interact very strongly with their neighbors (they bump right into each other!), but they do not form a spatially ordered arrangement (that would be a solid).
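To see the frustration concretely, here's a minimal Python sketch (my own toy illustration, not anything from the paper) that enumerates classical Ising-like configurations on a single triangle, the building block of the kagome lattice:

```python
from itertools import product

# Three Ising spins on a triangle with antiferromagnetic coupling J > 0;
# E = J * (sum over bonds of s_i * s_j), so aligned neighbors cost energy.
J = 1.0
bonds = [(0, 1), (1, 2), (2, 0)]

def energy(s):
    return sum(J * s[i] * s[j] for i, j in bonds)

configs = list(product([+1, -1], repeat=3))
e_min = min(energy(s) for s in configs)
ground_states = [s for s in configs if energy(s) == e_min]

# Best possible is -J, not -3J: one bond is always frustrated, and six
# distinct configurations tie for the minimum (a big classical degeneracy).
print(e_min, ground_states)
```

The same counting on the full kagome lattice gives a macroscopic number of tied configurations, which is part of why the system has such a hard time picking an ordered state.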
Why a quantum spin liquid? Two reasons. First, I cheated in my description above. While we can talk classically about antialigned spins, we really should say that pairs of spins want to form singlets, meaning quantum mechanically entangled antialigned states with net spin zero. So, you can think of this spin liquid state as involving a big entangled mess of spins, where each spin is somehow trying to be entangled in a singlet state with each of its nearest neighbors. This is very complicated to treat theoretically. Second, the fluctuations that dominate in this situation are quantum fluctuations, rather than thermally driven fluctuations. Quantum fluctuations will persist all the way down to T = 0.
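For the two-spin version of "wanting to form singlets," here is a short exact-diagonalization sketch (assuming J = 1 in arbitrary units) showing that the antiferromagnetic Heisenberg Hamiltonian for two spin-1/2s has the entangled singlet as its ground state:

```python
import numpy as np

# H = J S1 . S2 for two spin-1/2 moments, built from Pauli matrices / 2.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

J = 1.0
H = J * sum(np.kron(s, s) for s in (sx, sy, sz))

vals, vecs = np.linalg.eigh(H)
print(np.round(vals.real, 3))        # [-0.75  0.25  0.25  0.25]
print(np.round(vecs[:, 0].real, 3))  # ~ (0, 0.707, -0.707, 0), up to sign
# Ground state is (|up,down> - |down,up>)/sqrt(2): the entangled singlet,
# an energy J below the three spin-1 triplet states.
```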
What's special about a quantum spin liquid? Well, the low energy excitations of a quantum spin liquid can be very weird. If you imagine reaching into the material and flipping one spin so that it's now energetically "unhappy" in terms of its neighbors, what you find is that you can start flipping spins and end up with "spinon" excitations that travel through the material, having spin-1/2 but no charge, and other exotic properties. This is described reasonably well here. Importantly, these excitations have effects that are seen in measurable properties, like heat capacity and how the system can take up and lose energy.
So what did the experimenters do? They grew large, very pure single crystals of herbertsmithite, and fired neutrons at them. Knowing the energies and momenta of the incident neutrons, and measuring the energies and momenta of the scattered neutrons, they were able to map out the properties of the excitations, showing that they really do look like what one expects for a quantum spin liquid.
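For a sense of what "knowing the energies and momenta" buys you, here's a small kinematics sketch; the wavevectors are illustrative numbers I made up, not values from the experiment:

```python
import numpy as np

# Kinematics of inelastic neutron scattering: the sample's excitation
# absorbs energy hbar*omega = E_i - E_f and momentum hbar*Q = hbar*(k_i - k_f).
hbar = 1.054571817e-34   # J*s
m_n = 1.67492749804e-27  # neutron mass, kg
meV = 1.602176634e-22    # J per meV

def neutron_energy_meV(k_inv_angstrom):
    """Kinetic energy E = (hbar*k)^2 / (2*m_n) for wavevector k in 1/A."""
    k = k_inv_angstrom * 1e10  # convert to 1/m
    return (hbar * k) ** 2 / (2 * m_n) / meV

k_i = np.array([2.662, 0.0, 0.0])   # incident wavevector, 1/A (illustrative)
k_f = np.array([2.50, 0.40, 0.0])   # scattered wavevector, 1/A (illustrative)

E_transfer = neutron_energy_meV(np.linalg.norm(k_i)) - \
             neutron_energy_meV(np.linalg.norm(k_f))
Q = k_i - k_f                        # momentum transfer, 1/A

print(f"energy transfer ~ {E_transfer:.2f} meV, Q = {Q} 1/A")
```

Mapping out how much scattering occurs as a function of that energy and momentum transfer is what gives you the spectrum of the material's excitations.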
Why should you care? This is a great example of seeing exotic properties (like these weird spin excitations) that emerge because of the collective response of a large number of particles. A single Cu ion or unit cell of the crystal doesn't do this stuff - you need lots of spins. Moreover, this is now a system where we can study what this weird, highly quantum-entangled state does - I think it's very, very far from practical applications, but you never know. Looks like a very nice piece of work.
Tuesday, December 18, 2012
Just how self-correcting is science?
In an extremely timely article in the new issue of American Scientist, Joseph Grcar looks at what fraction of publications in various disciplines are basically corrective (that is, comments, corrigenda, corrections, retractions, or refutations). He finds that in the sciences in general the correction rate is about 1-1.5% of publications. This is probably a bit of an underestimate, in my view, since there are new works published that are essentially soft refutations and may not be detected by the methods used here. Likewise, some fraction of the body of published work (constituting the denominator of that fraction) has no impact (in the sense of never being cited). Still, that's an interesting number to see.
Friday, December 14, 2012
A nano controversy
A couple of my colleagues pointed me to this blog, that of Raphaël Lévy at Liverpool. Lately it has become a clearing house for information about a particular controversy in nanoscale science, the question of "stripy nanoparticles". The ultrashort version of the story: Back in 2004, Francesco Stellacci (then at MIT, now in Switzerland) published a paper in Nature Materials arguing that his group had demonstrated something quite interesting and potentially useful. Very often when solution-based materials chemistry is used to synthesize nanoparticles, the nanoparticles end up coated in a monolayer of some molecule (a ligand). These ligands can act as surfactants to alter the kinetics of growth, but their most important function is to help the nanoparticles remain in suspension by preventing their aggregation (or one of my favorite science terms, flocculation). Anyway, Stellacci and company used two different kinds of ligand molecules, and claimed that they had evidence that the ligands spontaneously phase-segregated on the nanoparticle surface into parallel stripes. His group has gone on to publish many papers in high impact journals on these "stripy" particles.
However, it is clear from the many posts on Lévy's blog, to say nothing of the paper published in Small, that this claim is controversial. Basically those who disagree with Stellacci's interpretation argue that the scanned probe images that apparently show stripes are in fact displaying a particular kind of imaging artifact. As an AFM or STM tip is scanned over a surface, feedback control is used to maintain some constant conditions (e.g., constant AFM oscillation frequency or amplitude; constant STM tunneling current). If the feedback isn't tuned properly, there can be "ringing" so that the image shows oscillatory features as a function of time (and therefore tip position).
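Here's a toy Python simulation of the kind of feedback "ringing" described above: a discrete-time proportional-integral loop with deliberately aggressive gains (all numbers arbitrary; a cartoon, not a model of any real AFM controller):

```python
import numpy as np

# A PI feedback loop tracking a step in surface height. With overly
# aggressive gains the error "rings" after the step instead of settling.
def scan_line(kp, ki, n=200):
    surface = np.where(np.arange(n) > 50, 1.0, 0.0)  # a step feature
    z, integ, trace = 0.0, 0.0, []
    for h in surface:
        err = h - z                   # setpoint error
        integ += err
        z += kp * err + ki * integ    # feedback moves the tip height
        trace.append(z)
    return np.array(trace)

well_tuned = scan_line(kp=0.2, ki=0.02)
ringing    = scan_line(kp=1.6, ki=0.50)   # gains too hot: oscillatory

# Count zero crossings of the error after the step as a crude measure.
for name, tr in [("well tuned", well_tuned), ("ringing", ringing)]:
    err = 1.0 - tr[60:]
    crossings = np.sum(np.diff(np.sign(err)) != 0)
    print(name, "zero crossings after step:", crossings)
```

Because the tip rasters at a constant speed, an oscillation in time is rendered as a periodic pattern in position, which is exactly why ringing can masquerade as stripes.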
I have no stake in this, though I have to say that the arguments and images shown by the skeptics are pretty persuasive. I'd have to dig through Stellacci's counterarguments and ancillary experiments, but this doesn't look great.
This whole situation does raise some interesting questions, though. Lévy points out that many articles seem to be published that take the assertion of stripiness practically on faith or on very scant evidence. Certainly once there is a critical mass of published literature in big journals claiming some effect, it can be hard as a reviewer to argue that that body of work is all wrong. Still, if you see (a series of) results in the literature that you really think are incorrectly interpreted, what is the appropriate way to handle something like this? Write a "comment" in each of these journals? How should journals respond to concerns like this? I do know that editors at high profile journals really don't like even reviewing "rebuttal" papers - they'd much rather have a "comment" or just let sleeping dogs lie. Interesting stuff, nonetheless.
Update: To clarify, I am not taking a side here scientifically - in the long run, the community will settle these questions, particularly those of reproducibility. Further, one other question raised here is the appropriate role of blogs. They are an alternative way of airing scientific concerns (compared to the comment/rebuttal format), and that's probably a net good, but I don't think a culture of internet campaigns against research with which we disagree is a healthy limiting case.
Monday, December 03, 2012
The future of Si: Into the fog
Today we had a visit at Rice from Mike Mayberry, the VP for the Technology and Manufacturing group at Intel. He gave a very good talk about where semiconductor electronics is going (naturally with the appropriate disclaimers that we shouldn't buy or sell stocks based on anything he said). The general rule is that there is a metaphorical fog out there about ten years off, beyond which it's not clear what the industry will be doing or how it will be doing it. However, for the comparatively near term, the path is fairly well known. In short: complementary metal-oxide-semiconductor (CMOS) based on Si is going to continue for quite a ways longer. Device design will continue toward improved electrostatic control of the channel - we will likely see evolution from tri-gate finFET structures toward full wrap-around gates. We are firmly in the materials-limited regime of device performance, and Intel has been playing games with strain to improve charge mobility. They have even experimented with integration of III-V materials via epitaxy onto Si platforms. It's pretty clear that there is a healthy skepticism about post-CMOS alternative technologies, particularly given the absurdly low cost and high volumes of Si. Intel ships something like 4 trillion transistors per minute (!). Other remarkable facts/figures: Network traffic right now is around 7 exabytes (\(10^{18}\) bytes) per day, or the equivalent of 17000 HD movies every second, including around 204 million emails per minute on Gmail, and these numbers are increasing all the time. Amazing.
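As a quick sanity check, those traffic figures do hang together; the ~4.7 GB per HD movie below is my own assumption (a DVD-scale number), since no file size was specified in the talk:

```python
# Sanity-check: 7 exabytes/day vs "17000 HD movies every second".
bytes_per_day = 7e18
seconds_per_day = 24 * 3600
movie_bytes = 4.7e9            # assumed size of one HD movie, bytes

rate = bytes_per_day / seconds_per_day   # bytes per second
movies_per_second = rate / movie_bytes

print(f"{rate/1e12:.0f} TB/s ~ {movies_per_second:,.0f} HD movies/s")
# -> roughly 81 TB/s, i.e. ~17,000 movies/s: the quoted numbers agree.
```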
Thursday, November 29, 2012
Ionic liquid gating - amazing results
I've mentioned ionic liquid gating a couple of times (here, here) before. Ionic liquids are basically organic salts (molecular anion; molecular cation, though there are some that use, e.g., alkali metal ions instead) that are liquid at room temperature. The concept is simple enough: use the ionic liquid as an electrolyte in a capacitor comprising a wire employed as a gate electrode and the surface of a sample of interest as the counterelectrode (helpfully contacted by source and drain electrodes). Setting the bias of the gate relative to the surface drives the ions to move, building up (ionic) surface charge densities at the sample surface. The material responds to screen what would otherwise be an enormous electric field penetrating the surface by accumulating mobile charge carriers of the appropriate species in a layer at the interface. This approach, adopted by multiple groups, has been pushed hardest by the group of Iwasa. The real advantage of this approach is that it allows access to electrostatically gated charge densities comparable to what can be achieved in chemical doping - on the order of \(10^{15}\) per cm\(^2\) in that top layer of material. (This is the regime that He Who Shall Not Be Named fraudulently claimed to access, but this time it's real!)
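For scale, here's a back-of-envelope estimate of the induced carrier density; the double-layer capacitance is an assumed, typical order of magnitude rather than a number from any particular experiment:

```python
# Electric double layer gating: induced sheet carrier density n = C*V/e.
e = 1.602e-19        # elementary charge, C
C_area = 10e-6       # F/cm^2, assumed double-layer capacitance (~10 uF/cm^2)
V_gate = 3.0         # V, within typical electrochemical windows

n_2d = C_area * V_gate / e
print(f"n ~ {n_2d:.1e} carriers/cm^2")
# -> ~2e14 /cm^2 at 3 V; effective capacitances of tens of uF/cm^2 push
#    this toward the 1e15 /cm^2 regime quoted above.
```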
Particular highlights have included:
- Gating SrTiO\(_3\) into metallicity and then superconductivity.
- Gating ZnO into metallicity.
- Gating ferromagnetic response in GaMnAs and Co-doped TiO\(_2\).
- Gate-induced superconductivity in KTaO\(_3\).
The results just keep coming. There are some real subtleties to the technique, though. It's extremely important to make sure that the ionic liquid is really acting as a chemically inert electrolyte, and not inducing electrochemistry at the material surface or any other kind of chemical doping.
Anyway, this whole area is clearly one to watch in condensed matter. Anytime you can push into a previously inaccessible regime, Nature tends to offer up surprises.
Wednesday, November 21, 2012
Tidbits
Now that a couple of papers are in and the never-ending list of tasks is getting shorter, hopefully blogging will pick back up. Here are a few interesting links for the (American) Thanksgiving holiday.
This is a paper about the subtleties and challenges of computational physics, meant to be very conversational and pedagogical for students. It was a fun read, and we should have more resources like this.
The Bad Astronomer has relocated to Slate.com. He did this just in time for the rampant speculation about NASA's pending chemistry results from the Curiosity rover. I don't think Curiosity actually killed a cat, but you never know.
The University of Houston just had a one-day symposium in honor of the twenty-fifth anniversary of the discovery of YBCO, the first superconductor with a transition temperature greater than the boiling point of liquid nitrogen. I wish I could've been there, but I had other commitments. My colleague tells me it was a very nicely done affair with many interesting talks and panel discussions. Now if only we could figure this out definitively....
It's always interesting to see a very thought-provoking paper from someone you take seriously. Miles Blencowe, a condensed matter/quantum measurement theorist at Dartmouth, argues that perturbative quantum gravity (a general approach to extending general relativity a bit into the quantum regime) can be a major source of decoherence of quantum superpositions of massive objects. This implies that the very quantum structure of spacetime acts dynamically to "collapse wavefunctions" in the language of the Copenhagen take on quantum mechanics. I need to read this more carefully.
Monday, November 12, 2012
Physics education, again.
This video is great at pointing out much of what is wrong with high school physics education in the US. However, I find the implication that this is not that hard to fix to be a bit tough to swallow. More later....
Wednesday, November 07, 2012
Things no one teaches you as part of your training.
Over the last couple of months I've been reflecting about some aspects of being an academic physicist, particularly what skills are important and what aspects of the job are never explicitly taught. The training system that has evolved in the post-WWII US research universities is one that, when it works well, instills critical thinking, experimental or calculational design, and a large amount of (often rather narrow) scientific expertise. Ancillary to this, doctoral students often (but not always) get indirect training in written and oral communications through the various papers and theses they write and the presentations that are made at conferences and dissertation defenses. Often students gain some teaching experience, though many times this is in the form of the kind of TA work (running a lab section, grading problem sets) that is a far cry from actually planning out and lecturing a course. Sometimes in the course of graduate or postdoctoral work, scholars are taught a bit about mentoring of younger students, but this is almost entirely informal.
However, there are many critical skills (relevant to eventual careers in both academia and industry) that get bypassed. I'm not sure how you would actually teach these things in practice, and setting up courses to do so would widely be viewed as a waste of student time. Still, it's interesting how much of being a good faculty member (or valued employee with managerial responsibilities) is never taught; it's just assumed that you pick this stuff up along the way somehow, or that you are innately skilled. Examples:
- Managing students. This includes: motivating students; determining what level of guidance is best suited to a particular student to instill independence and creativity yet avoid either aimless floundering or complete micromanagement; how to deal with personal, physical health or mental health problems; how to assess whether a student really has strong potential or an affinity for a particular project or set of skills.
- Managing money. No one ever tells you how to run a research group's finances. No one ever explicitly sits you down and explains how the university's finances really work, what indirect costs really mean, how to deal with unanticipated financial issues, how to write a budget justification, how to stretch money as far as possible, how to negotiate with vendors, how the university accounting system works, how much responsibility to delegate to students/postdocs, how you may interact with the office of research accounting, how to do effort reporting.
- Working with colleagues within the department and the university. (Actually, my department does a decent job at this through faculty mentoring, but most of that was put in place after I had already been promoted.) How does university decision-making work? What can the chair do? What do the deans do? What does the provost do? Why are there so many university committees? Do any of them do anything useful? Are they just a refuge for people with too much free time, who like to argue for hours about whether "could" or "should" is the appropriate language for a policy?
- Writing. The only way people learn to write all of the really critical documents (papers, grant proposals, white papers, little blurbs for the department web page, group websites, etc.) is by doing.
- Teaching. At the modern research university, there is an assumption that you can pick up teaching. This is widely considered insulting by serious education professionals, though there is truth to it - most people who are highly successful, communicative, organized scientists tend to be pretty good in the classroom, since good teaching requires good communications and organization capabilities (though also considerably more).
- Time management. No one teaches you how to budget your time, but if you can't do it reasonably well, you're really in trouble. (For example, I should be writing three other things right now....)
Monday, October 29, 2012
back to blogging soon.
Just a quick note that I'm finally done w/ my NSF proposal, and I will be back to blogging about science soon. One quick link for you all: My faculty colleague (and general wise person) Neal Lane has a very nice editorial in today's NY Times about the importance of federal support for basic research.
A second fun link: if you haven't seen this or this or this, you have been missing out on a great series of math videos by Vi Hart. I want to grow up to be clever enough to make videos like this about condensed matter physics.
Thursday, October 18, 2012
"Everything you wanted to know about Data Analysis and Fitting but were afraid to ask"
This paper posted to the arxiv the other day provides a very readable, practical discussion of error analysis and curve fitting. While I have not had the time to go through this in detail, at a quick read it looks like it is the sort of thing every physics student should know and use as a refresher.
The point is, far too many people who really should know better never learn the right way to think about uncertainties or how to properly fit data. For example, in the world of economics, some people actually think that the curve on this graph has some statistical significance. (That's not a political statement - I'm laughing at their innumeracy.)
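In the spirit of that arxiv paper, here's a minimal example of doing it right with scipy: a weighted least-squares fit that actually reports parameter uncertainties and a goodness-of-fit check (the "data" here are synthetic, generated inside the script):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def model(x, a, b):
    return a * x + b

x = np.linspace(0, 10, 25)
sigma = 0.5 * np.ones_like(x)                  # known measurement errors
y = model(x, 2.0, 1.0) + rng.normal(0, sigma)  # truth: a = 2, b = 1

popt, pcov = curve_fit(model, x, y, sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))                  # 1-sigma parameter errors

for name, val, err in zip("ab", popt, perr):
    print(f"{name} = {val:.3f} +/- {err:.3f}")

# Reduced chi^2 near 1 is a quick check that the errors and model are sane.
chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2) / (len(x) - 2)
print(f"reduced chi^2 = {chi2:.2f}")
```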
Thursday, October 11, 2012
Condensed matter experimental position at Rice
Pardon the use of the blog for advertising, but it can only help broaden the reach of the ad:
On a tangentially related note, I encourage people who want to understand more about how the faculty job search process works to read my previous posts on the topic. This is a good place to start, followed by this and this. As commenter Charles Day pointed out, here is a Physics Today article freely available on this topic.
The Department of Physics and Astronomy at Rice University invites applications for a tenure-track faculty position in experimental condensed matter physics. The department expects to make an appointment at the assistant professor level. This search primarily seeks an outstanding individual whose emphasis is on neutron or x-ray spectroscopy of hard condensed matter systems, who will complement and extend existing experimental and theoretical activities in condensed matter physics (see http://physics.rice.edu/). A PhD in physics or related field is required.

Applicants should send a curriculum vitae, statements of research and teaching interests, a list of publications, and two or three selected reprints, in a single PDF file, to vcall@rice.edu with subject line “CME Search” or to Prof. Douglas Natelson, Chair, Condensed Matter Search Committee, Dept. of Physics and Astronomy – MS 61, Rice University, 6100 Main Street, Houston, TX 77005. Applicants should also arrange for at least three letters of recommendation to be sent by email or post. Applications will be accepted until the position is filled, but only those received by December 1, 2012 will be assured full consideration. The appointment is expected to start in July 2013.

Rice University is an affirmative action/equal opportunity employer; women and underrepresented minorities are strongly encouraged to apply.
Wednesday, October 10, 2012
Bits and pieces
Several links to bide time while I try to write too many things.
Congratulations, of course, to this year's Nobel winners in physics! Great stuff, and it was lots of fun trying to give a simple explanation of the work to the freshmen in my class.
Grad student Barbie is a bit too on the nose.
It's become abundantly clear that the US House science committee is populated in part by people who are not just ignorant (like those who don't understand how biology works); some simply think that science itself is evil and literally a trick by the devil (!) to mislead them. Other people have pointed this out. This is just unacceptable. We deserve better. It's a damned disgrace that Congress actually rewards these people by placing them on a committee where their ignorance can formulate policy. I don't know how to fix it except by shaming them, and we all know that shaming the current House leadership is impossible. I'm close to having a Howard Beale moment here.
This was a cool nano physics story to hear on the way to work this morning.
Thursday, October 04, 2012
Science and its self-correcting nature.
Eight years ago, Moses Chan of Penn State made big news by publishing experimental evidence that appeared to be consistent with supersolidity - a hypothesized state in which atomic vacancies in a solid (in this case, pressurized crystals of \(^4\)He at very low temperatures) could move without dissipation, analogous to the quantum coherent, viscosity-free flow of atoms in a superfluid. I've mentioned this before (1) (2). Now, as written up in the latest issue of Science, it seems like supersolidity (at least in the system that had been studied) is dead, and a major killer was a paper by the original authors of the first claim.
This happens sometimes. Observations and their interpretation can seem very very compelling, and yet later someone will think of some subtle issue that had not been considered previously. That's the nature of science. Unfortunately, sometimes the popular impression that gets conveyed is that because of these rare situations, science is no more trustworthy than random guesses or opinions. My own thesis advisor told me more than once that it's ok to be wrong in science occasionally, and the best outcome is to be the one who discovers your own mistake! (He and coauthors had published a PRL claiming that an effect they saw was taking place in solid \(^3\)He, when it turned out that it really was happening in the liquid, which they then also published, correcting their own mistaken interpretation. It worked out well for them.)
That reminds me: time for the annual Nobel speculation, since the physics prize comes next Tuesday. Place your bets below.... (blogging will continue to be slow due to multiple other writing constraints right now)
Thursday, September 27, 2012
More recent hot topics
Still working on proposals, so this will be brief. However, as mentioned in my previous post, I did want to add in some topics/open questions that are fairly hot right now:
- Topological insulators - These materials are nominally band insulators, in that they have a filled band of electronic states, an energy gap, and an empty conduction band. However, unlike ordinary band insulators, they host an odd number of surface states that live within the band gap. Because of strong spin-orbit coupling, the spin of a carrier in one of these surface states is locked in a particular orientation relative to the carrier's momentum. That tends to suppress large-angle scattering by ordinary disorder, since ordinary disorder scattering doesn't flip spin. One big question is, can these materials be grown in such a way that the bulk really is insulating? During crystal growth, it is energetically cheap to form point defects in many of these materials that act as dopants, leading to serious bulk conduction. Recent work has found materials (e.g., Bi\(_2\)Te\(_2\)Se) that are less problematic, but no one has (to my knowledge) figured out a way to grow really insulating thin films that retain the cool surface state properties. A second question is, can one actually employ these surface states for anything useful?
- Gating in strongly correlated materials - pioneered by Iwasa's group in Japan, there has been a boom in using ionic liquids (essentially molten organic salts) as electrolytes in gating experiments. These liquids allow one to obtain surface charge densities comparable to those possible in chemical doping, on the order of one charge per unit cell on the surface. That's enough to do interesting physics in strongly correlated materials (e.g., gating an insulating copper oxide layer into superconductivity). How far can one push this? Can this technique be used to develop a transistor based on the Mott metal-insulator transition?
- Quantum computing - this isn't new, of course, but there seems to be more and more work going on toward making some form of solid state, scalable quantum computer. Which of the competing approaches will win out? Spins, with their amazingly long coherence times in isotopically pure Si and diamond? Superconducting flux or charge qubits? It does not look like there is any fundamental reason why you couldn't have quantum computers, but it's an enormous technical challenge.
- Optomechanics - there are a number of groups out there having lots of fun looking at micro- or nanoelectromechanical systems and coupling them to optics. This lets you use optical cooling methods to put the mechanical systems into low-occupation quantum states; this lets you entangle the light with the mechanical system; etc. What are the ultimate limits here? Could this usher in a new style of precision measurement, or lead to new quantum information manipulations?
- Plasmonics - we're firmly out of the stage now where every weird metal nanostructure with a plasmon resonance was netting a high profile paper. Instead, people are looking at using plasmons to confine light to deep subwavelength scales, for super-tiny optical emitters, detectors, etc. How small a laser can one make using plasmonics? What other quantum optics tricks can one play with these tools? Can plasmonic effects be engineered to improve photovoltaics significantly, or photocatalysis?
Wednesday, September 19, 2012
Controversies/hot topics in condensed matter, revisited
A little over six years ago (!), I wrote this post, where I listed a whole series of topics that were hot/exciting/controversial at the time. While I'm traveling for a NSF site visit and working on a couple of proposals, I thought that now might be a good time to bring up this list again. Here is a too-brief update/scorecard, with current statements in blue. In a followup post I will try to add some new topics, and I invite suggestions/submissions in the comments. The previous list was:
- 2d metal-insulator transition - What is the mechanism for the apparent metal-insulator transition in 2d electron and hole systems at low densities? Is it profound or not? I admit, I haven't followed this as closely as I should have. The old argument had been that the scaling theory of localization (a theory that essentially neglects electron-electron interactions; the scaling language is sketched just after this list) says that any 2d electronic system becomes an insulator at zero temperature in the presence of arbitrarily weak (but nonzero) disorder.
- High-Tc - what is the mechanism of high temperature superconductivity? What is the ultimate limit of Tc? What is the "bad metal", and what is the pseudogap, really? How important are stripes and checkerboards? Is the phrase "doped Mott insulator" really a generic description of these systems? We still don't have a definitive, broadly agreed answer to these questions, though progress has been made, in large part due to continually improving sample material quality. It sure looks like there can be other phases (including ones involving spatial patterns of charge like stripes and checkerboards) that compete with superconductivity. It also looks like high temperature superconductivity often "dies" as T is increased due to loss of global phase coherence. That is, there are still paired electrons above the (bulk) critical temperature, but the pairs lack the coordinated quantum mechanical evolution that gives what we think of as the hallmarks of the superconducting state (the perfect expulsion of magnetic flux at low magnetic fields; zero electrical resistance). Since I wrote the above, a whole new class of superconducting materials, the iron pnictides, has been discovered. While in their normal state they are generally not Mott insulators, electronic correlations and the competition between different correlated ground states do seem to be important, something broadly similar to what is seen in the cuprates.
- Quantum criticality and heavy fermions - Do we really understand these systems? What are the excitations in the "local moment" phase? What is the connection to high-Tc, if any? My faculty colleague who is an expert on quantum criticality would give a definite, though qualified "yes" to that last question. I think (and please correct me in the comments if you disagree) that how to properly describe low energy electronic excitations of systems when the quasiparticle picture breaks down (that is, when the carriers don't act roughly like ordinary electrons, but instead are "incoherent") is still up in the air.
- Manganites - What sets the length scale for inhomogeneities in these materials? I believe this is still up for discussion, though it's known that the effects of disorder and strain can make it very challenging to pull out truly intrinsic physics.
- Quantum coherence and mesoscopics - Do we really have a complete understanding of mesoscopic physics and decoherence at this point? What about in correlated materials? In normal ("boring") metals at low temperatures and in ordinary semiconductors, it looks like we do have a pretty good handle on what's going on, though there are still some systems where the details can get very complicated. As for strongly correlated materials (when electron-electron interactions are very important), I still have not seen a lot of work directly looking at the issue. This is related to the point above about quantum criticality - if you can't readily describe the low energy excitations of the system as particle-like, then it can be tricky to think about their quantum coherent properties.
- Quantum Hall systems - Are there really non-Abelian states at certain filling factors? In bilayers, is there excitonic condensation? Cautious answers on both these counts appear to be "yes". There has been quite a bit of lovely work looking at the 5/2 fractional quantum Hall state (including very cute stuff by my postdoctoral mentor) that seems entirely consistent with non-Abelian physics. Likewise, the recent work making the case for Majorana fermions in semiconductor/superconductor hybrid systems shows that there is hope of really studying systems with non-Abelian excitations. In the case of the quantum Hall bilayers, work by Jim Eisenstein's group at Caltech looks very exciting (if I'm leaving out someone, my apologies - I haven't followed the area that closely).
- 1d systems - Is there conclusive evidence of spin-charge separation and Luttinger liquid behavior in semiconductor nanowires? Nanotubes? I think the case for spin-charge separation is better now than it was six years ago, due to very nice work by multiple groups (Yacoby now at Harvard; the gang at Cavendish in Cambridge, for example).
- Mixed valence compounds - Is there or is there not charge ordering at low temperatures in Fe\(_3\)O\(_4\), something that's been argued about for literally 60 years now? This seems to have been settled in the Fe\(_3\)O\(_4\) case: the system has some amount of charge disproportionation, orbital ordering, and the electron-phonon coupling is not negligible in looking at the physics here.
- Two-channel Kondo physics - Is there firm evidence for the two-channel Kondo effect and non-Fermi liquid behavior in some physical system? A qualified "yes", in that the Goldhaber-Gordon group at Stanford made a tunable quantum dot system that can sit at a point that looks like 2-channel Kondo physics is relevant. However, I haven't seen anyone else following up on this, probably in part b/c it's very hard.
- Molecular electronics - Is there really improving agreement between experiment and theory? Can novel correlation physics be studied in molecular systems? Can molecules exhibit intrinsic (to the molecule) electronic functionality? In order, "yes", "yes" (interesting underscreened Kondo physics and other quantum impurity problems, for example), and "yes" (e.g., optically driven switching between isomers with different conductances), though as I've said for years, we're not going to be building computers out of these things - they're tools for looking at physics and chemistry at the nanometer scale.
- Organic semiconductors - What is the ultimate limit of charge mobility in these materials? Are there novel electronic correlation effects to be seen? Can one see a metal-insulator transition in these systems? In order, "it still remains to be seen" (though mobilities on the order of 10-100 cm^2/Vs have been shown); "maybe" (if one counts experiments looking at charge transfer salts, Mott transitions, etc.), and "yes" (if one uses, e.g., ionic liquids to obtain exceedingly high carrier densities).
- Nanomechanical systems - Can we demonstrate true "quantum mechanics", in the sense of a mechanical system that acts quantum mechanically? Yes - see Science's breakthrough of the year in 2010, for example.
- Micro/nano systems to address "fundamental physics" - Can we measure gravity on the 100 nm length scale? Are there experiments with Josephson junctions that can probe "dark energy"? "Not yet" and "Not yet", though like atomic physics I think it would not be surprising if condensed matter produced some systems that could be used in precision measurement tests looking at these kinds of issues.
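To unpack the scaling language mentioned in the first item of this list: the "gang of four" argument tracks the dimensionless conductance \(g\) of a sample as a function of its linear size \(L\) through the logarithmic derivative

\[
\beta(g) \equiv \frac{d \ln g}{d \ln L} \approx (d-2) - \frac{a}{g} \qquad (g \gg 1,\ a > 0),
\]

where the right-hand side is the schematic large-\(g\) form for noninteracting electrons with weak disorder. In \(d = 2\), \(\beta(g) < 0\) for any finite \(g\), so the conductance always flows downward (toward an insulator) as \(L \to \infty\); the experimental surprise was apparent metallic behavior anyway, presumably because interactions matter.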
Thursday, September 13, 2012
Room temperature superconductivity? Probably not yet.
A paper appeared on the arxiv recently (published in Advanced Materials) from a group in Leipzig reporting magnetic measurements that the authors argue are suggestive of some kind of room temperature superconductivity in highly oriented pyrolytic graphite (that's enhanced after a particular treatment involving water). Unsurprisingly, this got a bit of attention. So, what is the deal?
Well, it's been known for a long time that clean graphite has a very large diamagnetic response. That means that when a magnetic field is applied to the material from the outside, the field inside the material is less than the external field. This can happen in a couple of ways. In a type-I superconductor, at low applied magnetic fields, persistent "screening" currents are set up that completely cancel the externally applied field. This perfect diamagnetism is called the Meissner effect, and is a signature of superconductivity. (Why this happens is actually a pretty deep question - just accept for now that it does.)
The investigators have spent a long time staring at the magnetization of their graphite as a function of applied magnetic field and temperature, and they argue that what they see could be a signature of some kind of "granular" superconductivity. This means that the bulk of the material is not superconducting. In fact, you can compare the measured diamagnetism with what one would expect for a perfect diamagnet, and if this is superconductivity, only about 0.01% of the sample is superconducting. Still, the systematics that they find are interesting and definitely worth further investigation. It's important to know, though, that there have been similar discussions for over a decade. We're not there yet.
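The comparison in that last paragraph is simple to make quantitative. In CGS units a perfect diamagnet has volume susceptibility \(\chi = -1/4\pi\); the measured value below is an illustrative number chosen to reproduce the ~0.01% figure, not one taken from the paper:

```python
import math

# Superconducting volume fraction from the ratio of the measured
# susceptibility to that of a perfect diamagnet (full Meissner effect).
chi_perfect = -1.0 / (4.0 * math.pi)      # CGS, full flux expulsion
chi_measured = 1e-4 * chi_perfect         # illustrative measured value

fraction = chi_measured / chi_perfect     # apparent SC volume fraction
print(f"apparent superconducting fraction ~ {fraction:.2%}")
# -> 0.01%: even if the signal is superconductivity, it's a tiny
#    minority phase, consistent with the "granular" interpretation.
```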
Friday, September 07, 2012
TED talks
My local public radio station has been repeatedly promoting the TED Radio Hour, which involves (to paraphrase the promo) people having 18 minutes to give the talk of their lives. The TED folks have certainly gone very far in promotion - they do a great job of making all of their talks look like things worth listening to. Looking on the TED site, it's interesting to see what there is that may be relevant to readers of this blog. Searching on "condensed matter" (without the quotes) returns only a single talk, by the extraordinarily creative George Whitesides. Searching on "nanoscale" returns eight talks, including one by Paul Rothemund on DNA origami and one by Angela Belcher on her work on nano-enabled batteries. A search on "solid state" returns nothing relevant at all. This has made me think about what I'd say if I had the chance to give a talk like this - one where it's supposed to be accessible to a really general audience. Two topics come to mind.
First, someone at some point should give a TED talk that really spells out how enormous the impact of solid state physics really is on our daily lives. This would require a couple of minutes talking about what we mean by "solid state physics", and what it tells us. This would also require some discussion about the divide between science and engineering, the nature of basic science, and the eventual usefulness of abstract knowledge. In the end, you can tie together the ideal gas law (the need to use statistics to understand large numbers of particles), the Pauli principle (which explains the periodic table and how electrons arrange themselves), the need for better telephone amplifiers (Bell Labs and the transistor), all eventually resulting in the cell phone in your pocket, computers, the internet, etc.
Second, I'd love to jump into some of our work that looks at how heating and dissipation happen at the molecular scale. When you push current through a wire, the wire gets hot. How does that happen? What does "hot" mean? How does energy get from the battery into the microscopic degrees of freedom in the wire? What happens if the wire is really small, like atomic-scale? What does it mean for something to be "irreversible"? This could be a lot of fun. Of course, the total number of scientists that give these talks is tiny and they are august (e.g., Rothemund and Belcher are both MacArthur Fellows; Whitesides has won just about everything except the Nobel, and that wouldn't be a surprise). Still, it never hurts to fantasize a bit.
Sunday, September 02, 2012
Cheating, plagiarism, and honor codes
The internet has been buzzing about the rumored cheating scandal at Harvard, where more than half of the students in an introductory political science class are implicated in plagiarism and/or improper collusion on a take-home exam. It sounds like there were several coincident issues here. First, the take-home final was "open book, open notes, open internet". That might be fine under some circumstances and certainly corresponds to real-life conditions - I've always hated the contrived nature of closed-book, timed exams. However, if students are not properly trained in how to cite material (a big "if" at Harvard), this approach could easily lead to short-answer responses that sound very similar to each other, a case of textual "convergent evolution" rather than actual collusion. However, an article at Salon makes this whole mess sound even worse than that. It sounds like real collusion and cheating were common and had been for years, and the professor basically implied that the course was an easy A.
I was surprised to learn that Harvard had no honor code system. I'd been an undergrad at Princeton, which has a very seriously run honor system; I'd been a grad student at Stanford, where the situation was similar (though they did ask faculty not to put students in a position where they'd be "tempted", like a take-home closed book exam - I always thought this to be hypocritical. Either you trust students or you don't.), and I'm a faculty member at Rice, where there is a very seriously run honor system. While no system is perfect, and those truly determined to cheat will still try to cheat, based on my experience I think that having a broadly known, student-run (or at least run with heavy student participation) academic justice system is better than the alternative. It does, however, rely critically on faculty buy-in. If the faculty think that the honor system is broken (either too lenient or too cumbersome), or the faculty are so detached from teaching that they don't care or realize the impact that teaching has on other students, then an honor system won't work.
On the one hand, cheating is in many ways easier than ever before, because of the free flow of information enabled by the internet. Students can download solution manuals for nearly every science textbook, for example. However, technology also makes it possible to compare assignments and spot plagiarism more readily than ever before. What I find very distressing is the continual erosion of the meaning of academic and intellectual honesty. This ranges from the death-of-a-thousand-cuts little stuff like grabbing images off the internet without attribution, all the way to the vilifying of science as just as subjective as opinion. There is something very insidious about deciding to marginalize fact-checking. If places like Harvard don't take the truth and intellectual honesty seriously, how can we be surprised when the average person is deeply cynical about everything that is claimed to be true?
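(As an aside on that technology point: the crude idea behind automated text comparison is easy to sketch. Real plagiarism detectors are far more sophisticated; this toy just shingles answers into word n-grams and computes their Jaccard overlap. The sample "answers" are invented.)

```python
# Toy text-similarity check: word n-gram shingles + Jaccard overlap.
def shingles(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

ans1 = "the court expanded federal power through the commerce clause"
ans2 = "the court expanded federal power by invoking the commerce clause"
ans3 = "states retained authority over most police powers after ratification"

print(f"ans1 vs ans2: {jaccard(ans1, ans2):.2f}")  # suspiciously similar
print(f"ans1 vs ans3: {jaccard(ans1, ans3):.2f}")  # unrelated, ~0
```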
Thursday, August 30, 2012
Safety training
A brief post for a busy time: For various reasons, I'm interested in lab safety training of grad students (and postdocs and undergrads) at universities. In particular, I am curious about best practices - approaches that are not invasive, burdensome, or one-size-fits-all, but nevertheless actually improve the safety of researchers. If you think your institution does this particularly well, please post in the comments about what they do and why, or drop me an email. Thanks.
Sunday, August 26, 2012
"Impact factors" and the damage they do.
The Wall Street Journal ran this article yesterday, which basically talks about how some journals try to manipulate their impact factors by, for example, insisting that authors of submitted manuscripts add references that point to that specific journal. Impact factor is (according to that article's version of the Thomson/ISI definition) the number of citations in a given year to papers the journal published over the previous two years, divided by the number of papers published in those two years. I've written before about why I think impact factors are misleading: they're a metric deeply based on outliers rather than typical performance. Nature and Science (for example) have high impact factors not really because their typical paper is much more highly cited than, e.g., Phys Rev B's. Rather, they have higher impact factors because the probability that a paper in Nature or Science is one of those that gets thousands of citations is higher than in Phys Rev B. The kind of behavior reported in the article is the analog of an author self-citing like crazy in an effort to boost citation count or h-index. (Ignoring how easy it is to detect and correct for,) self-citation can't make a lousy, unproductive researcher look like a powerhouse, but it can make a marginal researcher look less marginal. Likewise, spiking citations for journals can't make a lousy journal suddenly have an IF of ten, but it can make a journal's IF look like 1 instead of 0.3, and some people actually care about this.
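A toy numerical illustration of the outlier point, with completely fabricated citation counts: two "journals" with identical typical papers, one of which has a small fat tail of blockbusters.

```python
import numpy as np

rng = np.random.default_rng(1)

typical = rng.poisson(3, size=1000)        # most papers: a few citations
tail = rng.pareto(1.2, size=20) * 200      # a handful of blockbusters

journal_a = typical
journal_b = np.concatenate([typical, tail.astype(int)])

# The impact-factor-style mean is dominated by the tail; the median,
# i.e. the typical paper, barely notices it.
for name, cites in [("A (no tail)", journal_a), ("B (fat tail)", journal_b)]:
    print(f"{name}: mean {cites.mean():5.1f}, median {np.median(cites):.1f}")
```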
Why should any publishing scientific researcher spend any time thinking about this? All you have to do is look at the comments on that article, and you'll see. Behavior and practices that damage the integrity of the scientific publishing enterprise have the potential to do grave harm to the (already precarious) standing of science. If the average person thinks that scientific research is a rigged game, full of corrupt scientists looking only to further their own image and careers, and that scientific research is no more objective than people arguing about untestable opinions, that's tragic.
Friday, August 24, 2012
Just enough to be dangerous....
A colleague and I were talking this morning about how much depth of knowledge is desirable in a referee. A reviewer who does not know an area in detail will sometimes give a bold or unconventional idea a fair, unbiased hearing. In the other limit, a reviewer who is a hardcore expert and has really pondered the Big Picture questions in an area can sometimes (not always) appreciate a new or creative approach that is likely to have big impact even if that approach is unorthodox. Where you can really run into trouble is with reviewers who know just enough to be dangerous: they can identify critical issues of concern and know the orthodoxy of a field, but don't necessarily appreciate the Big Picture or potential impact. This may be the root of a claim I've heard from journal editors, that the "recommended referees" people suggest when submitting articles often end up being the harshest critics. Just a thought. In general, we are all well served by getting the most knowledgeable referees possible.
Sunday, August 19, 2012
And this guy sits on the House Science Committee.
Today Congressman Todd Akin from Missouri, also the Republican nominee for the US Senate seat currently held by Sen. Claire McCaskill, said that women have a biological mechanism that makes it very difficult for them to get pregnant in the case of "legitimate rape" (whatever that is). Specifically, he said "If it’s a legitimate rape, the female body has ways to try to shut that whole thing down." Yes, he said that, and there's video. Regardless of your politics or your views on abortion, isn't it incredibly embarrassing that a member of the House Science Committee would say something so staggeringly ignorant?
Update: Once again, The Onion gets it right.
Wednesday, August 15, 2012
Intro physics - soliciting opinions
For the third year in a row, I'm going to be teaching Rice's honors intro mechanics course (PHYS 111). I use the outstanding but mathematically challenging (for most first-year undergrads) book by Kleppner and Kolenkow. It seems pretty clear (though I have done no rigorous study of this) that the students who perform best in the course are those who are most comfortable with real calculus (both differential and integral), not necessarily those with the best high school physics background. Teaching first-year undergrads in this class is generally great fun, though quite a bit of work. Since these are a self-selected bunch who really want to be there, and since Rice undergrads are generally very bright, they are a good audience.
I do confess, though, that (like all professors who really care about educating students) I go back and forth about whether I've structured the class properly. It's definitely set up as a traditional lecture course, and while I try to be interactive with the students, it is a far cry from some of the modern education-research approaches. I don't use clickers (though I've thought seriously about it), and I don't use much peer instruction or discovery-based interaction. The inherent tradeoffs are tricky: we don't really have the properly configured space or the personnel to run some of the very time-intensive discussion/discovery-based approaches. And while those approaches undoubtedly teach some of the audience better than traditional methods, perhaps with greater retention, it's not clear whether the gains outweigh the fact that nearly all of those methods trade content for time. That is, in order to teach, e.g., angular momentum really well, they dispense with other topics. It's also not clear to me that these methods are well suited to the Kleppner-Kolenkow level of material.
As unscientific as a blog posting is, I'd like to solicit input from readers. Does anyone out there have a favorite approach to teaching intro physics at this level? Evidence, anecdotal or otherwise, that particular teaching methods really lead to improved instruction in an advanced intro class (as opposed to general calc-based physics)?
Wednesday, August 08, 2012
Another sad loss
It was disheartening to hear of another sad loss in the community of condensed matter physics, with the passing of Zlatko Tesanovic. I had met Zlatko when I visited Johns Hopkins way back when I was a postdoc, and he was a very fun person. My condolences to his family, friends, and colleagues.
Saturday, August 04, 2012
Confirmation bias - Matt Ridley may have some.
Matt Ridley is a columnist who writes generally insightful material for the Wall Street Journal about science and the culture of scientists. Over the last three weeks, he has published a three-part series about confirmation bias: the tendency to weight evidence that agrees with one's preconceived notions too heavily, and to discount evidence that disagrees with them. Confirmation bias is absolutely real and part of the human condition. Climate change skeptics have loudly accused climate scientists of confirmation bias in their interpretation of both data and modeling results; the skeptics claim that people like James Hansen will twist facts unrelentingly to support their emotion-based conclusion that climate change is real and caused by humans.
Generally Mr. Ridley writes well. However, in his concluding column today, Ridley says something that makes it hard to take him seriously as an unbiased observer in these matters. He says: "[A] team led by physicist Richard Muller of the University of California, Berkeley, concluded 'the carbon dioxide curve gives a better match than anything else we've tried' for the (modest) 0.8 Celsius-degree rise.... He may be right, but such curve-fitting reasoning is an example of confirmation bias."
Climate science debate aside, that last statement is just flat-out wrong. First, Muller was a skeptic - if anything, Muller's alarm at the result of his study shows that the conclusion goes directly against his bias. Second, and more importantly, "curve-fitting reasoning" in the sense of "best fit" is at the very heart of physical modeling. To put things in Bayesian language, a scientist wants to test the consistency of observed data with several candidate models or quantitative hypotheses. The scientist assigns some prior probability to each model - the degree of belief, going in, that the model is correct. An often-used approach is "flat priors", where the initial assumption is that each of the models is equally likely to be correct. Then the scientist does a quantitative comparison of the data with the models, essentially asking the statistical question, "Given model A, how likely is it that we would see this data set?" Doing this right is tricky. Whether a fit is "good" depends on how many "knobs" or adjustable parameters there are in the model and on the size of the data set - if you have 20 free parameters and 15 data points, a good curve fit essentially tells you nothing. After doing this analysis correctly for the different models, the scientist combines each model's prior with its likelihood (that's Bayes' rule) and comes up with posterior probabilities that the models are correct. (In this case, Muller may well have assigned the "anthropogenic contributions to global warming are significant" hypothesis a low prior probability, since he was a skeptic.)
The bottom line: when done correctly, "curve-fitting reasoning" is exactly the way that scientists assess the relative likelihoods that competing models are "correct". Saying that "best fit among alternative models" is confirmation bias is just false, provided the selection of models considered is fair and the analysis is quantitatively correct.
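For the quantitatively inclined, here is a minimal sketch of that procedure in Python, with entirely synthetic data and my own simplifications (this is emphatically not the Berkeley team's actual analysis):

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": a slow linear trend plus Gaussian noise.
# All numbers here are made up for illustration.
t = np.linspace(0.0, 50.0, 60)                   # time, arbitrary units
y = 0.016 * t + rng.normal(0.0, 0.1, t.size)     # "temperature anomaly"
sigma = 0.1                                      # assumed noise level

def log_likelihood(residuals, sigma):
    # ln P(data | model), assuming independent Gaussian noise.
    n = residuals.size
    return (-0.5 * np.sum((residuals / sigma) ** 2)
            - n * np.log(sigma * np.sqrt(2.0 * np.pi)))

# Model A: no trend (best-fit constant).
resid_a = y - np.mean(y)
# Model B: linear trend (best-fit line).
resid_b = y - np.polyval(np.polyfit(t, y, 1), t)

log_l = np.array([log_likelihood(resid_a, sigma),
                  log_likelihood(resid_b, sigma)])
prior = np.array([0.5, 0.5])                     # flat priors

# Bayes' rule, normalized over the two candidate models.
# NB: a careful treatment would marginalize over each model's parameters,
# which is what penalizes models with too many "knobs".
posterior = prior * np.exp(log_l - log_l.max())
posterior /= posterior.sum()
print("P(no trend | data)     =", posterior[0])
print("P(linear trend | data) =", posterior[1])

Run on data like this, the no-trend model's posterior probability collapses toward zero - not because of anyone's bias, but because the data are far more probable under the trend model.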
Tuesday, July 31, 2012
A new big prize
So there's another big scientific prize around now, the Milner Prize for Fundamental Physics. Interesting. Clearly wealthy Russian multimillionaires can do what they like with their money, whether that means physics prizes or miniature giraffes. However, I am not terribly thrilled with the idea that "fundamental physics" includes (to a significant degree so far) theoretical ideas that simply have not been tested experimentally. It would be very unfortunate if this ends up as a high profile media splash that boosts the erroneous public perceptions of fundamental physics as (1) virtually entirely high energy theory; and (2) exotic ideas unconnected to experiment (e.g., the string multiverse).
Monday, July 30, 2012
A few items
This editorial from yesterday's NY Times is remarkable for just how off-base it is. The author, a retired social science professor, argues that we should stop teaching algebra to so many people. He actually thinks that teaching algebra to everyone is bad for society: "Making mathematics mandatory prevents us from discovering and developing young talent." Basically, his reasoning is that (1) it's hard for many non-math-inclined people, so it takes up a lot of time that could be spent on other things; and (2) it's really not useful for most people, so it's doubly a waste of time. How anyone can argue publicly that what society really needs is less math literacy is completely beyond me. Like all of these sorts of things, there is a grain of reason in his argument: most people do not need to prove solutions to cubic equations with professional-grade mathematical rigor. However, algebra and algebraic ideas are absolutely essential to understanding a great many things, from financial literacy to probability and statistics. Moreover, algebra teaches real quantitative reasoning, rather than just arithmetic. That this even got printed in the Times is another example of the anti-science/math/engineering bias in our society. If I tried to get an editorial into the Washington Post advocating that we stop teaching history and literature to everyone because they take away time from other things and writing is hard for many people, I would rightly be decried as an idiot.
This editorial from today's NY Times is also remarkable. The author had historically been a huge skeptic of the case for anthropogenic global warming. Funded in part by the oil magnates (and totally unbiased about this issue, I'm sure) the Koch brothers, he did a study based much more on statistics and data gathering than on particular models of climate forecasting. Bottom line: he's now convinced that global warming is real, and that human CO2 emissions are very significant drivers. Funding to be cut off and his name to be publicly excoriated on Fox News in 5...4...3.... See? Quantitative reasoning is important.
Rumor has it that Bill Nye the Science Guy is considering making new episodes of his show. Bill, I know you won't read this, but seven years ago you visited Rice and posed for a picture with my research group. Please make this happen! If there is anything I can do to increase the likelihood that this takes place, let me know.
Finally, from the arxiv tonight, this paper is very interesting. These folks grew a single layer of FeSe on a strontium titanate substrate, and by annealing it under different conditions they affect its structure (as studied by angle-resolved photoemission). The important point here is that they find conditions where this layer superconducts with a transition temperature of 65 K. That may not sound so impressive, but if it holds up, it beats the nearest Fe-based material by a good 10 K, and beats bulk FeSe by more like a factor of 1.5 in transition temperature. Stay tuned. Any upward trend in Tc is worth watching.
Thursday, July 26, 2012
Memristor or not - discussion in Wired.
Earlier in the month, Wired reported that HP is planning to bring their TiO2-based resistive memory to market in 2014. Resistive memory is composed of a bunch of two-terminal devices that function as bits. Each device has a resistance that is determined by the past history of the voltage (equivalently current) applied to the device, and can be toggled between a high resistance state and a low resistance state. In HP's case, their devices are based on the oxidation and reduction of TiO2 and the diffusion of oxygen vacancies.
This announcement and reporting apparently raised some hackles. Wired has finally picked up on the fact that HP's use of the term "memristor" to describe their devices is more of a marketing move than a rigorous scientific claim. As I pointed out almost two years ago, memristors are (in my view) not really fundamental circuit elements in the same way as resistors, capacitors, and inductors; and just because some widget has a history-dependent resistance, that does not make it a memristor in the sense of the original definition.
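For readers who want to play with the idea of a history-dependent resistance, here is a minimal sketch of the linear ion-drift model from HP's original memristor paper (Strukov et al., Nature 453, 80 (2008)); the parameter values are illustrative guesses on my part, not HP's actual device numbers:

import numpy as np

R_ON, R_OFF = 100.0, 16e3    # resistance limits, ohms (illustrative)
D = 10e-9                    # film thickness, m (illustrative)
MU_V = 1e-14                 # vacancy mobility, m^2 s^-1 V^-1 (illustrative)
w = 0.1 * D                  # initial width of the conductive (doped) region

dt = 1e-5
t = np.arange(0.0, 0.2, dt)
i = 1e-3 * np.sin(2.0 * np.pi * 10.0 * t)   # sinusoidal current drive, amps

v = np.zeros_like(t)
for k, ik in enumerate(i):
    x = w / D
    resistance = R_ON * x + R_OFF * (1.0 - x)   # state-dependent resistance
    v[k] = resistance * ik
    w += MU_V * (R_ON / D) * ik * dt            # oxygen-vacancy drift
    w = min(max(w, 0.0), D)                     # clamp to physical bounds

# Plotting v against i would show the pinched hysteresis loop that is the
# signature of memristive behavior: the loop passes through the origin, and
# the instantaneous resistance depends on the drive history.

Note that in this model the resistance depends only on the integrated charge that has flowed through the device, which is what made the original "memristor" identification plausible; real TiO2 devices have messier, less ideal dynamics.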
Tuesday, July 17, 2012
Replacement for Virtual Journals?
I notice that the APS Virtual Journals are going away. I was a big fan of the nano virtual journal, since the people who ran it generally did a really nice job of aggregating articles from a large number of journals (APS, AIP, IOP, Nature, Science, PNAS, etc.) on a weekly basis. True, they never did manage to work out a deal to coordinate with the ACS, and they'd made a conscious decision to avoid journals dedicated to nano (e.g., Nature Nano). Still, it will be missed.
Now I will show my age. Is there a nice web 2.0 way to replace the virtual journal? In the announcement linked above, the APS justifies ending the virtual journals by saying that there are new and better tools available for gathering this sort of information. What I'd like to do, I think, is take the RSS feeds of the tables of contents of a bunch of journals, filter them on certain keywords, and aggregate links to all the matching articles. Is there a nice way to do this without muss and fuss? Suggestions would be appreciated.
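In case it helps clarify what I'm after, here is a minimal sketch of the idea using the feedparser Python library; the feed URLs and keywords below are placeholders I made up, not working endpoints:

import feedparser

FEEDS = [
    "https://example.org/prl/toc.rss",         # placeholder URL
    "https://example.org/naturephys/toc.rss",  # placeholder URL
]
KEYWORDS = ("nanoscale", "single-molecule", "plasmon")   # filter terms

for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(kw in text for kw in KEYWORDS):
            print(entry.get("title", "(untitled)"), "-", entry.get("link", ""))

The hard part isn't the code; it's the curation - deciding which feeds and keywords give you the virtual journal's signal without drowning you in noise.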
Thursday, July 12, 2012
Exotic (quasi)particles, and why experimental physics is challenging
There was a very large amount of press, some of it rather breathless, earlier in the year about the reported observation of (effective) Majorana fermions in condensed matter systems. Originally hypothesized in the context of particle physics, Majorana fermions are particles with rather weird properties. Majorana looked hard at the Dirac equation (which is complex), and considered particles "built out of" linear combinations of components of solutions to the Dirac equation. These hypothesized particles would obey a real (not complex) wave equation, and would have the rather odd property that they are their own antiparticles(!). In the language of quantum field theory, if the operator \( \gamma^{\dagger} \) creates a Majorana particle, then \( \gamma^{\dagger} = \gamma \), so \( \gamma^{\dagger} \gamma^{\dagger} \) creates a particle and then destroys it, leaving behind nothing. In the context of condensed matter, it has been theorized (here and here, for example) that it's possible to take a superconductor and a semiconductor wire with strong spin-orbit coupling, and end up with a composite system whose low energy excitations (quasiparticles) have properties like those of Majorana fermions.
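For reference, the defining algebra of Majorana operators (standard notation, not tied to any particular material realization) is

\[ \gamma = \gamma^{\dagger}, \qquad \{\gamma_i, \gamma_j\} = 2\,\delta_{ij} \quad \Longrightarrow \quad \gamma^{2} = 1, \]

in sharp contrast to an ordinary fermion operator, for which \( (c^{\dagger})^{2} = 0 \): two ordinary fermions can never occupy the same state, whereas acting twice with the same Majorana operator simply returns the system to where it started.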
So, if you had these funky quasiparticles in your system, how could you tell? What experiment could you do that would give you the relevant information? What knob could you turn and what could you measure that would confirm or deny their presence? That's the challenge (and the fun and the frustration) of experimental physics. There are only so many properties that can be measured in the lab, and only so many control parameters that can be tuned. Is it possible to be clever and find an experimental configuration and a measurement that give an unambiguous result, one that can only be explained in this case by Majorana modes?
In the particular experiment that received the lion's share of attention, the experimental signature was a "zero-bias peak" in the electrical conductance of these structures. The (differential) conductance is the slope of the \(I-V\) curve of an electrical device - at any given voltage (colloquially called "bias"), the (differential) conductance tells you how much more current you would get if you increased the voltage by a tiny amount. In this case, the experimentalists found a peak in the conductance near \( V = 0 \), and that peak stayed put at \(V = 0\) even when a magnetic field was varied quite a bit, and a gate voltage was used to tune the amount of charge in the semiconductor. This agreed well with predictions for the situation when there is a Majorana-like quasiparticle bound to the semiconductor/superconductor interface.
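To make that concrete, here is a minimal sketch (with synthetic, purely illustrative numbers) of extracting the differential conductance from an \(I-V\) sweep and locating a zero-bias peak:

import numpy as np

V = np.linspace(-2e-3, 2e-3, 401)   # bias voltage, volts (synthetic sweep)
G0 = 1e-5                           # background conductance, siemens (made up)

# Synthetic current: ohmic background plus a narrow zero-bias anomaly.
I = G0 * V + 2e-9 * np.tanh(V / 5e-5)

dIdV = np.gradient(I, V)            # numerical differential conductance
k = np.argmax(dIdV)
print(f"dI/dV peaks at V = {V[k] * 1e3:.3f} mV")   # at/near 0 mV

In the real experiment, of course, the interesting question is how such a peak behaves as magnetic field and gate voltage are varied, not merely whether it exists at one setting.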
The question is, though: is that, by itself, sufficient to prove the existence of Majorana-like quasiparticles experimentally? According to this new paper, perhaps not. It looks like it's theoretically possible for other (boring, conventional) quasiparticles to form bound states at that interface that also give a zero-bias peak in the conductance. Hmm. It may well be necessary to look at other measurable quantities besides the conductance to settle this once and for all. This is an important point that gets too little appreciation in popular treatments of physics. It's rare that you can measure the really interesting property directly. Instead, you have to use the tools at your disposal to test the implications of the various possibilities.