Monday, January 22, 2018

In condensed matter, what is a "valley", and why should you care?

One big challenge of talking about condensed matter physics to a general audience is that there are a lot of important physical concepts that don't have easy-to-point-to, visible consequences.  One example of this is the idea of "valleys" in the electronic structure of materials. 

To explain the basic concept, you first have to get across several ideas:

You've heard about wave-particle duality.  A free particle in quantum mechanics can be described by a wavefunction that really looks like a wave, oscillating in space with some spatial frequency \(k = 2\pi/\lambda\), where \(\lambda\) is the wavelength.  Momentum is proportional to that spatial frequency (\(p = \hbar k\)), and there is a relationship between kinetic energy and momentum (a "dispersion relation") that looks simple.  In the low-speed limit, K.E. \(= p^2/2m\), and in the ultra-relativistic limit, K.E. \( = pc \).
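
As a minimal numerical sketch of those two limits (the particular \(k\) range and the use of the electron mass are just for illustration, not tied to any specific material):

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8         # speed of light, m/s

k = np.linspace(0, 1e10, 5)   # spatial frequencies, 1/m (illustrative range)
p = hbar * k                  # momentum from the spatial frequency
E_slow = p**2 / (2 * m)       # low-speed limit: quadratic in k
E_fast = p * c                # ultra-relativistic limit: linear in k

# Doubling k quadruples the low-speed energy but only doubles the relativistic one:
print(E_slow[4] / E_slow[2], E_fast[4] / E_fast[2])   # -> 4.0 2.0
```

The quadratic-versus-linear contrast is the whole point of a dispersion relation: the same \(k\) means different kinetic energy depending on the physics of the particle.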

In a large crystal (let's ignore surfaces for the moment), atoms are arranged periodically in space.  This arrangement has lower symmetry than totally empty space, but can still have a lot of symmetries in there.  Depending on the direction one considers, the electron density can have all kinds of interesting spatial periodicities.  Because of the interactions between the electrons and that crystal lattice, the dispersion relation \(E(\mathbf{k})\) becomes direction-dependent (leading to spaghetti diagrams).  Some kinetic energies don't correspond to any allowed electronic states, meaning that there are "bands" in energy of allowed states, separated by gaps.  In a semiconductor, the highest filled (in the limit of zero temperature) band is called the valence band, and the lowest unoccupied band is called the conduction band.

Depending on the symmetry of the material, the lowest energy states in the conduction band might not be near where \(|\mathbf{k}| = 0\).  Instead, the lowest energy electronic states in the conduction band can be at nonzero \(\mathbf{k}\).  These are the conduction band valleys.  In the case of bulk silicon, for example, there are 6 valleys (!), as in the figure.
The six valleys in the Si conduction band, where the axes here show the different components of \(\mathbf{k}\), and the blue dot is at \(\mathbf{k}=0\).

One way to think about the states at the bottom of these valleys is that there are different wavefunctions that all have the same kinetic energy, the lowest they can and still be in the conduction band, but their actual spatial arrangements (how the electron probability density is arranged in the lattice) differ subtly. 

I'd written before about this in the case of graphene.  There are two valleys in graphene, and the states at the bottom of those valleys differ subtly in how charge is arranged between the two "sublattices" of carbon atoms that make up the graphene sheet.  What is special about graphene, and why some other materials are getting a lot of attention, is that you can do calculations about the valleys using the same math that gets used when talking about spin, the internal angular momentum of particles.  Instead of saying an electron is in one graphene valley or the other, you can talk about it having "pseudospin" up or down. 
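
A quick way to see the two graphene valleys numerically is the textbook nearest-neighbor tight-binding band, \(E_{\pm}(\mathbf{k}) = \pm t\,|1 + e^{i\mathbf{k}\cdot\mathbf{a}_1} + e^{i\mathbf{k}\cdot\mathbf{a}_2}|\).  A minimal sketch (carbon-carbon distance set to 1, and \(t \approx 2.7\) eV, a commonly quoted hopping value), checking that the band energy vanishes at the two inequivalent Brillouin zone corners \(K\) and \(K'\):

```python
import numpy as np

a = 1.0   # carbon-carbon distance (set to 1; units are arbitrary here)
t = 2.7   # nearest-neighbor hopping in eV (a commonly quoted value)

# Lattice vectors of the underlying triangular Bravais lattice
a1 = np.array([1.5 * a, np.sqrt(3) * a / 2])
a2 = np.array([1.5 * a, -np.sqrt(3) * a / 2])

def band_energy(k):
    """Magnitude of the tight-binding band energy, E(k) = t |f(k)|."""
    f = 1 + np.exp(1j * np.dot(k, a1)) + np.exp(1j * np.dot(k, a2))
    return t * np.abs(f)

# The two inequivalent Brillouin zone corners -- the two "valleys"
K = np.array([2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)])
Kp = np.array([2 * np.pi / (3 * a), -2 * np.pi / (3 * np.sqrt(3) * a)])

print(band_energy(K), band_energy(Kp))  # both vanish: the bands touch at K and K'
```

At the zone center \(\Gamma\), by contrast, the same expression gives \(3t\), the full bandwidth scale; the low-energy states really do live out at nonzero \(\mathbf{k}\).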

Once you start thinking of valley-ness as a kind of internal degree of freedom of the electrons, one that, like spin, is conserved in many processes, then you can consider all sorts of interesting ideas.  You can talk about "valley ferromagnetism", where the available electrons all hang out in one valley.  You can talk about the "valley Hall effect", where carriers in different valleys tend toward opposite transverse edges of the material.   Because of spin-orbit coupling, these valley effects can link to actual spin physics, and therefore are of interest for possible information processing and optoelectronic ideas.
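
The pseudospin language can be made concrete with the standard low-energy effective Hamiltonian near the two graphene valleys, \(H_\tau = \hbar v_F(\tau k_x \sigma_x + k_y \sigma_y)\), where \(\tau = \pm 1\) labels the valley and the \(\sigma\)'s are Pauli matrices acting on the sublattice (pseudospin) degree of freedom.  A minimal sketch, setting \(\hbar v_F = 1\) for simplicity:

```python
import numpy as np

hbar_vF = 1.0  # set hbar * v_F = 1 for simplicity

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def dirac_h(kx, ky, tau):
    """Effective 2x2 Hamiltonian near valley tau = +1 (K) or tau = -1 (K')."""
    return hbar_vF * (tau * kx * sigma_x + ky * sigma_y)

# Both valleys give the same conical spectrum, E = +/- |k|, even though
# the wavefunctions (the pseudospin textures) differ between them.
for tau in (+1, -1):
    E = np.linalg.eigvalsh(dirac_h(0.3, 0.4, tau))
    print(tau, E)  # eigenvalues come out as -0.5 and +0.5 in each valley
```

The valley index \(\tau\) enters the Hamiltonian exactly the way a two-valued internal quantum number should, which is why the spin formalism transfers over so cleanly.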

Saturday, January 13, 2018

About grants: What is cost sharing?

In addition to science, I occasionally use this forum as a way to try to explain to students and the public how sponsored research works in academia.  Previously I wrote about the somewhat mysterious indirect costs.  This time I'd like to discuss cost sharing.

Cost sharing is what it sounds like - when researchers at a university propose a research project, the funding agency or foundation wants to see the university kick in funding as well (beyond obvious things like the lab space where the investigators work).  Many grants, such as NSF single-investigator awards, expressly forbid explicit cost sharing.  That has certain virtues: to some extent, it levels the playing field, so that particularly wealthy universities don't have an even larger advantage.  Agencies would all like to see their money leveraged as far as possible, and if cost sharing were unrestricted, you could imagine a situation where wealthy institutions would effectively have an incentive to try to buy their way to grant success by offering big matching funds.   

In other programs, such as the NSF's major research instrumentation program, cost sharing is mandated, but the level is set at a fixed percentage of the total budget.  Similarly, some foundations make it known that they expect university matching at a certain percentage level.  While that might be a reach for some smaller, less-well-off universities when the budget is large, at least it's well-defined.    

Sometimes agencies try to finesse things, forbidding explicit cost sharing but still trying to get universities to invest "skin in the game".  For the NSF materials research science and engineering center program, for example, cost sharing is forbidden (in the sense that explicit promises of $N matching or institutional funding are not allowed), but proposals are required to include a discussion of "organizational commitment":  "Provide a description of the resources that the organization will provide to the project, should it be funded. Resources such as space, faculty release time, faculty and staff positions, capital equipment, access to existing facilities, collaborations, and support of outreach programs should be discussed, but not given as dollar equivalents."

First and foremost, the science and broader impacts drive the merit review, but there's no question that an institution that happens to be investing synergistically with the topic of such a proposal would look good.

The big challenge for universities is grants where cost sharing is not forbidden and no guidance is given about expectations.  There is a game-theory dilemma at work, where institutions try to guess what level of cost sharing is really needed to be competitive.   

So where does the money for cost sharing come from on the university side?  Good question.  The details depend on the university.  Departments, deans, and the central administration typically have some financial resources that they can use to support cost sharing, but how these responsibilities get arranged and distributed varies.  

For the open-ended cost sharing situations, one question that comes up is, how much is too much?  As I'd discussed before, university administrations often argue that research is already a money-losing proposition, in the sense that the amount of indirect costs that they bring in does not actually come close to covering the true expenses of supporting the research enterprise.  That would argue in favor of minimizing cost sharing offers, except that schools really do want to land some of these awards.  (Clearly there are non-financial or indirect benefits to doing research, such as scholarly reputation, or universities would stop supporting that kind of work.)  It would be very interesting if someone would set up a rumor-mill-style site, so that institutions could share with peers roughly what they are offering up for certain programs - it would be revealing to see what it takes to be competitive.  

Sunday, January 07, 2018

Selected items

A few recent items that caught my eye:

  • The ever-creative McEuen and Cohen groups at Cornell worked together to make graphene-based origami widgets.   Access to the paper seems limited right now, but here is a link that has some of the figures.
  • Something else that the Cohen group has worked on in the past is complex fluids, such as colloidal suspensions.  The general statistical physics problem of large ensembles of interacting classical objects (e.g., maybe short-range rigid interactions, as in grains of sand, or perhaps M&Ms) is incredibly rich.  Sure, there are no quantum effects, but often you have to throw out the key simplifying assumption of statistical physics (that your system can readily explore all microscopic states compatible with overall constraints).  This can lead to some really weird effects, like dice packing themselves into an ordered array when stirred properly.  
  • When an ensemble of (relatively) hard classical objects really locks up collectively and starts acting like a solid, that's called jamming.  It's still a very active subject of study, and is of huge industrial importance.  It also explains why mayonnaise gets much more viscous all of a sudden as egg yolk is added.
  • I'd be remiss if I didn't highlight a really nice article in Quanta about one of the grand challenges of (condensed matter) physics:  classifying all possible thermodynamic phases of matter.   While the popular audience thinks of a handful of phases (solid, liquid, gas, maybe plasma), the physics perspective is broader, because of ideas about order and symmetries.  Now we understand more than ever before that we need to consider phases with different topological properties as well.  Classification is not just "stamp collecting".

Monday, January 01, 2018

The new year and another arbitrary milestone

Happy new year to all!  I'm sure 2018 will bring some exciting developments in the discipline - at minimum, there will surely be a lot of talk about quantum computing.  I will attempt to post more often, and to work further on ways to bring condensed matter and nanoscale physics to a broader audience, though other responsibilities continue to make that a challenge.  Still, to modify a quote from Winston Churchill, "Writing a [blog] is like having a friend and companion at your side, to whom you can always turn for comfort and amusement, and whose society becomes more attractive as a new and widening field of interest is lighted in the mind."

By the way, this is the 1000th post on Nanoscale Views.  As we all know, this has special significance because 1000 is a big, round number.

Wednesday, December 27, 2017

The Quantum Labyrinth - a review

Because of real life constraints I'm a bit slow off the mark compared to others, but I've just finished reading The Quantum Labyrinth by Paul Halpern, and wanted to get some thoughts down about it.  The book is a bit of a superposition between a dual biography of Feynman and Wheeler, and a general history of the long-term impact of what started out as their absorber theory.  

The biographical aspects of Feynman have been well trod before by many, including Feynman himself and rather more objectively by James Gleick.   Feynman helped create his own legend (safecracking, being a mathematically prodigious, bongo-playing smart-ass).  The bits called back in the present work that resonate with me now (perhaps because of my age) are how lost he was after his first wife's death, his insecurity about whether he was really getting anything done after QED, his embracing of family life with his third wife, and his love of teaching - both as theater and as a way to feel accomplishment when research may be slow going.  

From other books I'd known a bit about Wheeler, who was still occasionally supervising physics senior theses at Princeton when I was an undergrad.  The backstory about his brother's death in WWII as motivation for Wheeler's continued defense work after the war was new to me.   Halpern does a very good job conveying Wheeler's style - coining pithy epigrams ("Spacetime tells matter how to move; matter tells spacetime how to curve.", "The boundary of a boundary is zero.") and jumping from topic to topic with way outside the box thinking.  We also see him editing his students' theses and papers to avoid antagonizing people.  Interesting.

From the Feynman side, the absorber theory morphed into path integrals, his eponymous diagrams, and his treatment of quantum electrodynamics.   The book does a good job discussing this, though like nearly every popularization, occasionally the analogies, similes, and metaphors end up sacrificing accuracy for the sake of trying to convey physical intuition.    From the Wheeler angle, we get to learn about attempts at quantizing gravity, geons, wormholes, and the many worlds interpretation of quantum mechanics.

It's a fun read that gives you a sense of the personalities and the times for a big chunk of twentieth century theoretical physics, and I'm impressed with Halpern's ability to convey these things without being a professional historian.  

Tuesday, December 19, 2017

The state of science - hyperbole doesn't help.

It seems like every few weeks these days there is a breathless essay or editorial saying science is broken, or that science as a whole is in the midst of a terrible crisis, or that science is both broken and in the midst of a terrible crisis.  These articles do have a point, and I'm not trying to trivialize anything they say, but come on - get a grip.  Science, and its cousin engineering, have literally reshaped society in the last couple of hundred years.  We live in an age of miracles so ubiquitous we don't notice how miraculous they are.  More people (in absolute numbers and as a percentage of the population) are involved in some flavor of science or engineering than ever before.

That does mean that yes, there will be more problems in absolute numbers than before, too, because the practice of science and engineering is a human endeavor.  Like anything else done by humans, that means there will be a broad spectrum of personalities involved, that not everyone will agree with interpretations or ideas, that some people will make mistakes, and that occasionally some objectionable people will behave unethically.   Decisions will be made and incentives set up that may have unintended consequences (e.g., trying to boost Chinese science by rewarding high-impact papers creates a perverse incentive to cheat).   This does not imply that the entire practice of science is hopelessly flawed and riddled with rot, any more than a nonzero malpractice rate implies that all of medicine is a disaster.

Why is there such a sense of unease right now about the state of science and the research enterprise?  I'm not a sociologist, but here's my take.

Spreading information, good and bad, can happen more readily than ever before.  People look at sites like pubpeer and come away with the impression that the sky is falling, when in fact we should be happy that there now, for the first time ever, exists a venue for pointing out potential problems.  We are now able to learn about flawed studies and misconduct far more effectively than even twenty years ago, and that changes perceptions.  This seems to be similar to the disconnect between perception of crime rates and actual crime rates.

Science is, in fact, often difficult.  People can be working with complex systems, perhaps more complicated than their models assume.   This means that sometimes there can be good (that is, legitimate) reasons why reproducing someone's results can be difficult.  Correlation doesn't equal causation; biological and social phenomena can be incredibly complex, with many underlying degrees of freedom and often only a few quantifiable parameters.  In the physical sciences we often look askance at those fields and think that we are much better, but laboratory science in physics and chemistry can be genuinely challenging.  (An example from my own career:  We were working with a collaborator whose postdoc was making some very interesting nanoparticles, and we saw exciting results with them, including features that coincided with a known property of the target material.  The postdoc went on to a faculty position and the synthesis got taken over by a senior grad student.  Even following very clear directions, it took over 6 months before the grad student's particles had the target composition and we reproduced the original results, because of some incredibly subtle issue with the synthesis procedure that had changed unintentionally and "shouldn't" have mattered.)

Hyperbolic self-promotion and reporting are bad.   Not everything is a breakthrough of cosmic significance, not every advance is transformative, and that's ok.  Acting otherwise sets scientists and engineers up for a public backlash from years of overpromising and underdelivering.   The public ends up with the perception that scientists and engineers are hucksters.  Just as bad, the public ends up with the idea that "science" is just as valid a way of looking at the world as astrology, despite the fact that science and engineering have actually resulted in technological society.  Even worse, in the US it is becoming very difficult to disentangle science from politics, again despite the fact that one is (at least in principle) a way of looking at the world and trying to determine what the rules are, while the other can be driven entirely by ideology.  This discussion of permissible vocabulary is indicative of a far graver threat to science as a means of learning about the universe than actual structural problems with science itself.  Philosophical definitions aside and practical ones to the fore, facts are real, and have meaning, and science is a way of constraining what those facts are.

We can and should do better.  Better at being rigorous, better at making sure our conclusions are justified and knowing their limits of validity, better at explaining ourselves to each other and the public, better at policing ourselves when people transgress in their scientific ethics or code of conduct.

None of these issues, however, imply that science itself as a whole is hopelessly flawed or broken, and I am concerned that by repeatedly stating that science is broken, we are giving aid and comfort to those who don't understand it and feel threatened by it.

Saturday, December 16, 2017

Finding a quantum phase transition, part 2

See here for part 1.   Recall, we had been studying electrical conduction in V\(_5\)S\(_8\), a funky material that is metallic, but on one type of vanadium site has local magnetic moments that order in a form of antiferromagnetism (AFM) below around 32 K.  We had found a surprising hysteresis in the electrical resistance as a function of applied magnetic field.  That is, at a given temperature, over some magnetic field range, the resistance takes different values depending on whether the magnitude of \(H\) is being swept up or back down. 

One possibility that springs to mind when seeing hysteresis in a magnetic material is domains - the idea that the magnetic order in the material has broken up into regions, and that the hysteresis is due to the domains rearranging themselves.  What speaks against that in this case is the fact that the hysteresis happens over the same field range when the field is in the plane of the layered material as when the field is perpendicular to the layers.   That'd be very weird for domain motion, but makes much more sense if the hysteresis is actually a signature of a first-order metamagnetic transition, a field-driven change from one kind of magnetic order to another.   First order phase transitions are the ones that have hysteresis, like when water can be supercooled below zero Celsius.

That's also consistent with the fact that the field scale for the hysteresis starts at low fields just below the onset of antiferromagnetism, and very rapidly goes to higher fields as the temperature falls and the antiferromagnetic state is increasingly stable.   Just at the ordering transition, when the AFM state is just barely favored over the paramagnetic state, it doesn't necessarily take much of a push to destabilize AFM order.... 

There was one more clue lingering in the literature.  In 2000, a paper reported a mysterious hysteresis in the magnetization as a function of H down at 4.2 K and way out near 17-18 T.  Could this be connected to our hysteresis?  Well, in the figure here at each temperature we plot a dot for the field that is at the middle of our hysteresis, and a horizontal bar to show the width of the hysteresis, including data for multiple samples.  The red data point is from the magnetization data of that 2000 paper.  

A couple of things are interesting here.   Notice that the magnetic field apparently required to kill the AFM state extrapolates to a finite value, around 18 T, as T goes to zero.  That means that this system has a quantum phase transition (as promised in the post title).  Moreover, in our experiments we found that the hysteresis seemed to get suppressed as the crystal thickness was reduced toward the few-layer limit.  That may suggest that the transition trends toward second order in thin crystals, though that would require further study.  That would be interesting, if true, since second order quantum phase transitions are the ones that can show quantum criticality.  It would be fun to do more work on this system, looking out there at high fields and thin samples for signatures of quantum fluctuations....
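
To illustrate the kind of extrapolation described above: the numbers below are made up purely for illustration (they are NOT the measured data), but they show how one would fit the hysteresis-midpoint field versus temperature and read off the \(T \rightarrow 0\) intercept:

```python
import numpy as np

# Hypothetical (T, H_mid) pairs, loosely mimicking a transition field that
# rises as T falls; these are invented for illustration, not the real data.
T = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])     # temperature, K
H_mid = np.array([17.0, 15.5, 12.5, 9.0, 5.0, 1.0])   # midpoint field, tesla

# Fit the lowest-temperature points with a line and extrapolate to T = 0.
slope, intercept = np.polyfit(T[:3], H_mid[:3], 1)
print(round(intercept, 1))   # -> 19.5 (a finite T = 0 field, for these fake points)
```

A finite intercept as \(T \rightarrow 0\) is the signature being claimed: the magnetic order can be destroyed at zero temperature by field alone, i.e., a quantum phase transition.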

The bottom line:  There is almost certainly a lot of interesting physics to be done with magnetic materials approaching the 2d limit, and there are likely other phases and transitions lurking out there waiting to be found.