
Tuesday, February 02, 2016

What is density functional theory? part 1.

In previous posts, I've tried to introduce the idea that there can be "holistic" approaches to solving physics problems, and I've attempted to give a lay explanation of what a functional is (short version: a functional is a function of a function - it chews on a whole function and spits out a number).  Now I want to talk about density functional theory, an incredibly valuable and useful scientific advance ("easily the most heavily cited concept in the physical sciences"), yet one that is basically invisible to the general public.

Let me try an analogy.  You're trying to arrange the seating for a big banquet, and there are a bunch of constraints:  Alice wants very much to be close to the kitchen.  Bob also wants to be close to the kitchen.  However, Alice and Bob both want to be as far from all other people as possible.  Etc. Chairs can't be on top of each other, but you still need to accommodate the full guest list.  In the end you are going to care about the answers to certain questions:  How hard would it be to push two chairs closer to each other? If one person left, how much would all the chairs need to be rearranged to keep everyone maximally comfortable?    You could imagine solving this problem by brute force - write down all the constraints and try satisfying them one person at a time, though every person you add might mean rearranging all the previously seated people.  You could also imagine solving this by some trial-and-error method, where you guess an initial arrangement, and make adjustments to check and see if you've improved how well you satisfy everyone.  However, it doesn't look like there's any clear, immediate strategy for figuring this out and answering the relevant questions.
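(For the concreteness-minded, the trial-and-error strategy above is easy to sketch in code.  Everything here - the "discomfort" cost, the numbers, the function names - is invented purely for illustration; it's a toy, not an algorithm anyone actually uses for banquets or electrons.)

```python
import random

def discomfort(seats):
    """Toy 'unhappiness' score for seats along a 0-to-10 hallway, kitchen
    at position 0.  Alice (index 0) and Bob (index 1) want to be near the
    kitchen; every pair of guests dislikes crowding."""
    cost = seats[0] + seats[1]                 # Alice + Bob's kitchen distance
    for i in range(len(seats)):
        for j in range(i + 1, len(seats)):
            cost += 1.0 / (abs(seats[i] - seats[j]) + 0.1)   # mutual repulsion
    return cost

def arrange(n_guests=6, steps=20000, seed=0):
    """Trial and error: nudge one random guest, keep the move only if it
    lowers the total discomfort, otherwise undo it."""
    rng = random.Random(seed)
    seats = [rng.uniform(0, 10) for _ in range(n_guests)]
    best = discomfort(seats)
    for _ in range(steps):
        i = rng.randrange(n_guests)
        old = seats[i]
        seats[i] = min(10.0, max(0.0, old + rng.uniform(-0.5, 0.5)))
        trial = discomfort(seats)
        if trial < best:
            best = trial
        else:
            seats[i] = old                     # undo a bad move
    return seats, best
```

Greedy random moves like this can get stuck in mediocre arrangements - which is exactly why, in the real many-electron problem, cleverer minimization schemes earn their keep.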

The DFT analogy here consists of three statements.  First, you'd probably be pretty surprised if I told you that the final seating positions of the people in the room would completely specify and nail down the answer to any of the questions above that you could ask about the room.[1]  Second, there is a mathematical procedure (a functional of the positions of all of the people in the room that can be minimized) to find that unique seating chart.[2]  Third, even more amazingly, there is some mock-up of the situation where we don't have to worry about the people-people interactions directly, yet minimizing a functional of the positions of the non-interacting people would still give us the full seating chart, and therefore let us answer all the questions.[3]

For a more physicsy example:  Suppose you want to figure out the electronic properties of some system.  In something like hydrogen gas, H2, maybe we want to know where the electrons are, how far apart the atoms like to sit, and how much energy it takes to kick out an electron - these are important things to know if you are a chemist and want to understand chemical reactions, for example.  Conceptually, this is easy:  In principle we know the mathematical rules that describe electrons, so we should be able to write down the relevant equations, solve them (perhaps with a computer if we can't find nice analytical solutions), and we're done.  In this case, the equation of interest is the time-independent form of the Schroedinger equation.  There are two electrons in there, one coming from each hydrogen atom.  One tricky wrinkle is that the two electrons don't just feel an attraction to the protons, but they also repel each other - that makes this an "interacting electron" problem.  A second tricky wrinkle is that the electrons are fermions.  If we imagine swapping (the quantum numbers associated with) two electrons, we have to pick up a minus sign in the math representation of their quantum state.  We do know how to solve this problem (two interacting electrons plus two much heavier protons) numerically to a high degree of accuracy.  Doing this kind of direct solution gets prohibitively difficult, however, as the number of electrons increases.
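To give a flavor of what "solve the relevant equations numerically" means, here is about the smallest possible version of such a calculation - not two interacting electrons in H2, but a single particle in a 1d box of unit width (with \(\hbar = m = 1\)), where the exact ground state energy is \(\pi^{2}/2 \approx 4.935\).  This is my own toy sketch (a "shooting method" plus bisection), not how production codes work:

```python
def shoot(E, n=1000):
    """Integrate psi'' = -2*E*psi across the box [0, 1] with psi(0) = 0,
    psi'(0) = 1, using 4th-order Runge-Kutta; return psi at x = 1.
    (hbar = m = 1; the hard walls force psi to vanish at both ends.)"""
    h = 1.0 / n
    psi, dpsi = 0.0, 1.0
    f = lambda p, dp: (dp, -2.0 * E * p)   # derivatives (psi', psi'')
    for _ in range(n):
        k1 = f(psi, dpsi)
        k2 = f(psi + 0.5 * h * k1[0], dpsi + 0.5 * h * k1[1])
        k3 = f(psi + 0.5 * h * k2[0], dpsi + 0.5 * h * k2[1])
        k4 = f(psi + h * k3[0], dpsi + h * k3[1])
        psi += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        dpsi += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return psi

def ground_state_energy(lo=1.0, hi=8.0, tol=1e-10):
    """Bisect on E until psi(1) = 0; the lowest such E in the bracket is
    the ground state (exact answer: pi**2 / 2 = 4.9348...)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(lo) * shoot(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The point: even this baby one-particle problem takes real numerical machinery, and the cost of the honest direct approach explodes as you add interacting electrons.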

So what do we do?  DFT tells us:
[1] If you actually knew the total electron density as a function of position, \(n(\mathbf{r})\), that would completely determine the properties of the electronic ground state.  This is the first Hohenberg-Kohn theorem.

[2] There is a unique functional \(E[n(\mathbf{r})]\) for a given system that, when minimized, will give you the correct density \(n(\mathbf{r})\).  This is the second Hohenberg-Kohn theorem.

[3] You can set up a system where, with the right functional, you can solve a problem involving noninteracting electrons that will give you the true density \(n(\mathbf{r})\).  That's the Kohn-Sham approach, which has actually made this kind of problem solving practical.
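In practice, statement 3 becomes a self-consistency loop: guess a density, build an effective potential from it, solve the resulting non-interacting problem, extract a new density, mix, and repeat until nothing changes.  Here's a deliberately tiny cartoon of that loop that I made up for illustration - two sites, two electrons, and a Hartree-like term \(U n\) standing in for the real exchange-correlation machinery (the numbers, the mixing scheme, and the function names are all mine, not anyone's production DFT code):

```python
import math

def lowest_orbital(v0, v1, t=1.0):
    """Ground-state orbital of the 2x2 single-particle Hamiltonian
    [[v0, -t], [-t, v1]]: a two-site toy 'molecule' with hopping t."""
    e = 0.5 * (v0 + v1) - math.sqrt((0.5 * (v0 - v1))**2 + t * t)
    c0, c1 = t, v0 - e                 # unnormalized eigenvector components
    norm = math.hypot(c0, c1)
    return e, (c0 / norm, c1 / norm)

def scf_density(v_ext=(0.0, 0.5), U=2.0, mix=0.3, tol=1e-10, max_iter=500):
    """Kohn-Sham-flavored self-consistency: guess a density n, build an
    effective potential v_ext + U*n, solve the *non-interacting* problem,
    extract a new density, mix, and repeat until self-consistent."""
    n = [1.0, 1.0]                     # initial guess: uniform density
    for _ in range(max_iter):
        v = [v_ext[0] + U * n[0], v_ext[1] + U * n[1]]
        _, c = lowest_orbital(v[0], v[1])
        n_new = [2 * c[0]**2, 2 * c[1]**2]   # two electrons fill the orbital
        if abs(n_new[0] - n[0]) + abs(n_new[1] - n[1]) < tol:
            return n_new
        n = [(1 - mix) * old + mix * new for old, new in zip(n, n_new)]
    return n
```

Real Kohn-Sham codes do exactly this dance, just with far bigger basis sets and much more sophisticated functionals and mixing schemes.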

The observations by Kohn and Hohenberg are very deep.  Somehow just the electronic density encodes a whole lot more information than you might think, especially if you've had homework experience trying to solve many-body quantum mechanics problems.  The electronic density somehow contains complete information about all the properties of the lowest energy many-electron state.  (In quantum language, knowing the density everywhere in principle specifies the expectation value of any operator you could apply to the ground state.)

The advance by Kohn and Sham is truly great - it describes an actual procedure that you can carry out to really calculate those ground state properties.  The Kohn-Sham approach and its refinements underpin much of the modern field of "quantum chemistry".

More soon....


Wednesday, January 27, 2016

Caltech wins the whole internet - public outreach for quantum.

This makes my public outreach efforts look lame by comparison.  Well done!

Friday, January 15, 2016

What is a functional? Ex: the Action Principle

Working our way toward the biggest theory most people have never heard of, let's talk about functionals, using the non-rigorous language that physicists like and which annoys mathematicians.

Here's an analogy.   You want to drive from your house to the store.  There are many possible routes, and for each route we could come up with a single number that depends on the route - it could be the total distance traveled, or the total time it took to get from the house to the store, or it could be the total fuel consumed, or it could be the number of times you turned left minus the number of times you turned right.  We could take all your possible routes, and we could somehow process each possible route into a number.  The operation that chews on your route information and converts it to a number is a functional of your path from the house to the store.  (Why would you want to do this?  Well, perhaps you value your time, and you want to pick the route that has the least accumulated time.  Perhaps you value fuel costs, and you want to pick the route that has the least fuel consumption.  The point is, depending on what you care about, a functional can let you pick between alternatives, here the routes, that are described by a huge, effectively infinite number of variables.)

In the spirit of MTW (Misner, Thorne, and Wheeler's Gravitation), a function of a single variable is a machine that takes a number, chews on it, and spits out a number.   This could be \(y(x) = x^{2}\), for example.  A function of multiple variables is a machine that takes more than one number, chews on them, and spits out a number - like \(y(x_{1}, x_{2}, x_{3}) = x_{1}^{2} + 3x_{2} - x_{3}\).  For this example, for any set of three numbers \( \{x_{1}, x_{2}, x_{3}\} \), you can compute a value of \(y\).

A functional is the "continuum limit" of a function of multiple variables - it's a machine that takes an infinite number of numbers (!), chews on them all, and spits out a single number.  We can cast our example of Fermat's principle of least time this way.  Suppose light starts out at point P, and we let it take some wild path like the one shown in the figure, eventually winding up at point Q.  How long does it take the light to get from P to Q?  Well, that depends on how you think it goes.  If you knew all the intervening points \((x_{i},y_{i})\), you could compute the travel time for each little segment from the distance between successive points, and add up all those times.  The transit time \(t_{\mathrm{tot}}\) depends on the whole trajectory that the light takes from P to Q.  Instead of writing \(t_{\mathrm{tot}}(x_{1}, y_{1}, x_{2}, y_{2}, .....)\), we write \(t_{\mathrm{tot}}[x,y]\), where the square brackets indicate that this is a functional.  For any goofy trajectory we could draw from P to Q, we could compute \(t_{\mathrm{tot}}\).  Fermat's principle of least time says that the path actually taken by light is the one that gives the smallest value of \(t_{\mathrm{tot}}\).  Why does this work?  That's actually a very deep question, and I won't try to answer it now.
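If you like seeing this spelled out, here's the discretized version of \(t_{\mathrm{tot}}[x,y]\) in code - a machine that eats a whole list of points and spits out one number.  The setup (the function names, a uniform index of refraction by default) is invented for illustration:

```python
import math

def transit_time(points, n_of=lambda x, y: 1.0, c=1.0):
    """The discretized functional t_tot[x, y]: chew on a whole trajectory
    (a list of (x, y) points from P to Q) and spit out one number, the
    total travel time.  Each straight segment is traversed at speed
    c / n, with the index of refraction n evaluated mid-segment."""
    t = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        t += math.hypot(x1 - x0, y1 - y0) * n_of(xm, ym) / c
    return t
```

Feed it any goofy trajectory - straight, wiggly, whatever - and it returns a single number, which is exactly what makes it a functional.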

The Action Principle is the most famous example of showing that functionals can be incredibly useful in physics.  I'm going to do a simple 1d example involving mechanical motion of a particle, but everything I will say generalizes to much more complicated cases.  Suppose we have a particle that starts at some initial position \(x_{\mathrm{i}}\) at some initial time \(t_{\mathrm{i}}\), and ends up at some final position \(x_{\mathrm{f}}\) at some final time \(t_{\mathrm{f}}\).  We want to know, how does the particle get there?  Which of the essentially infinite number of possible trajectories \(x(t)\) did the particle take?  (Note that by allowing any arbitrary path \(x(t)\), we're also basically permitting any arbitrary velocity as a function of time in there.)

The local way to answer this problem is to start with the particle at the initial location and time, and apply Newton's laws.  From its position find the force acting on the particle, use that force to find the acceleration, and take a little timestep forward, updating the particle's position and velocity.  Now repeat this.

The Action Principle is a global approach.  It says that there is some functional called the action, \(S[x(t)]\).  For any trajectory \(x(t)\), you can compute a number \(S\).  The trajectory that a classical particle takes is the one that starts and ends in the right places and times, and produces the minimum* value of \(S\).  The form of \(S\) contains all the physics.  (For a 1d particle obeying Newton's laws, the correct form for \(S\) is the integral over time, along the whole trajectory, of (the kinetic energy minus the potential energy).)  This is one of the stranger things to learn when studying physics - with the right procedure for writing down an expression for \(S\), and the right procedure for minimizing it (techniques from the calculus of variations), the (global) Action Principle seems nearly magical, giving you ways to solve problems that would seem hopelessly complex in traditional (local) approaches.   Why does this actually work?   Again, this is a deep question, and I'll revisit it some other time.  The fact that you can come up with a functional-based formalism at all does indicate that there is "hidden" structure to nature beyond what you might guess just from, e.g., Newton's laws.
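To see the Action Principle actually pick out a trajectory, here's a toy I cooked up: a particle in uniform gravity, with the action discretized over timesteps, and a crude gradient descent over the interior points of the path (endpoints held fixed).  All the numbers and names here are mine; the calculus of variations does this minimization analytically, but brute force works fine for a sketch:

```python
G = 9.8  # uniform gravitational acceleration; mass m = 1 throughout

def action(x, dt):
    """Discretized action S[x(t)]: sum over little timesteps of
    (kinetic energy - potential energy) * dt, with V(x) = G * x."""
    S = 0.0
    for a, b in zip(x, x[1:]):
        v = (b - a) / dt                      # velocity on this segment
        S += (0.5 * v * v - G * 0.5 * (a + b)) * dt
    return S

def minimize_action(x_i=0.0, x_f=0.0, T=1.0, n=40, iters=5000, lr=0.01):
    """Crude gradient descent on the interior points of the path, with the
    endpoints pinned at (t_i, x_i) and (t_f, x_f)."""
    dt = T / n
    x = [x_i + (x_f - x_i) * k / n for k in range(n + 1)]  # guess: a line
    for _ in range(iters):
        for k in range(1, n):
            # dS/dx_k: a discrete -x'' term from the kinetic energy,
            # plus -G*dt from the potential energy
            grad_k = -(x[k - 1] - 2 * x[k] + x[k + 1]) / dt - G * dt
            x[k] -= lr * grad_k
    return x
```

The minimizer that comes out is (to numerical accuracy) the free-fall parabola: Newton's \(\ddot{x} = -g\) emerges from minimizing a single number over whole paths.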

To revisit the analogy:  If I told you that there was a way to predict how you would drive from home to the store based on a single number related to each possible route, you would realize:  (1) you don't necessarily have to know all the detailed rules of driving to find the preferred route, just how to calculate that number; and (2) there clearly is some deeper principle at work than just the rules of driving that picks out the route you take.

Next time, I'll finally get to the point about density functional theory.

*Technically, the path only needs to make \(S\) stationary - a maximum could also work here - but for many cases, there is no maximum possible value of \(S\).

Sunday, January 10, 2016

"Local" vs "global" ways to solve physics problems

Inspired by a recent post of Ross McKenzie, I thought it would be fun to try to write a popularly accessible piece about the enormously successful, wholly remarkable  theory that most people have never heard of, density functional theory.

To get there will require a couple of steps.  First, it's important to appreciate that sometimes, thanks to the mathematical structure of the universe, it is possible to think about and solve physics problems with two seemingly very different approaches - call them "local" and "global".  In the local approach, we write down equations that describe the underlying problem in great detail, and by carefully working out their solution, we arrive at an answer.  In the global approach, we come at the problem from an overview perspective of considering possible solutions and figuring out which one is correct.

For example, let's think about a light ray propagating from point P (in air) to point Q (in water), as shown in the figure (courtesy wikipedia).  It turns out that light travels at a speed \(c/n\) in a medium, where \(c\) is the speed of light in vacuum, and \(n\) is the "index of refraction" that depends on the material and the frequency of the light.  (This is already short-hand for solving the complicated problem of electromagnetic radiation and its interactions with a material containing charges, something that Feynman wrote about elegantly in this book, based on these lectures.)  The "local" approach would be to write down the equations describing the electromagnetic light waves, and solve these, including the description of the air, the water, and their interface.  The result we would find is so simple and compact that we teach it to freshmen, Snell's Law:  \(n_{1}\sin(\theta_{1}) = n_{2}\sin(\theta_{2})\), where the angles are defined in the figure.

The "global" way to solve this problem (and again arrive at Snell's Law) was found by Fermat (yes, the one with the "last" theorem).  He didn't have the option of solving the microscopic equations governing the radiation, since he died two hundred years before Maxwell published them.  Instead, Fermat knew that light seems to travel in straight lines within a given medium.  Therefore, he considered all the possible paths that a light ray could take from P to Q (such as the blue and green alternatives shown in the modified figure), trying to figure out which combination of straight segments (and hence which angles) were picked out by nature.  The answer he posited was that the correct path for the light is the one that minimizes the overall time taken by the light in going from P to Q.   This does give Snell's Law as a consequence, and seems to hint at a deeper organizing principle or structure at work than just "we solved complex equations with tricky boundary conditions, and Snell's Law fell out".  (These days, if a student is asked to derive Snell's Law from Fermat's Principle of Least Time, they would use calculus to do so, since that plus coordinate geometry provides a clear way to write down an expression for the transit time and a way to minimize that function.  Fermat couldn't do that, as modern calculus didn't exist at the time, though he was among the people thinking along those lines.  He was pretty sharp.)
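Fermat's construction is also easy to redo numerically.  Here's a sketch with made-up geometry and numbers (P = (0, 1) in air with \(n_{1} = 1\), Q = (1, -1) in water with \(n_{2} = 1.33\), interface along \(y = 0\)): minimize the transit time over the crossing point, and Snell's Law falls out.

```python
import math

def crossing_time(x, n1=1.0, n2=1.33, c=1.0):
    """Transit time from P = (0, 1) in air to Q = (1, -1) in water,
    assuming a straight segment in each medium (speed c/n), crossing
    the interface (the line y = 0) at the point (x, 0)."""
    return (n1 * math.hypot(x, 1.0) + n2 * math.hypot(1.0 - x, 1.0)) / c

def least_time_crossing(lo=0.0, hi=1.0, tol=1e-12):
    """Golden-section search for the crossing point minimizing the time."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        x1 = b - phi * (b - a)
        x2 = a + phi * (b - a)
        if crossing_time(x1) < crossing_time(x2):
            b = x2
        else:
            a = x1
    return 0.5 * (a + b)
```

At the minimum, \(n_{1}\sin(\theta_{1}) = n_{2}\sin(\theta_{2})\) holds to numerical precision, without ever touching Maxwell's equations.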

Next up:  another example of a "global" approach, the Action Principle.

Tuesday, December 29, 2015

APS elections - reminder

Sorry for the year-end lull in posting.  Work-related writing is taking a lot of my time right now, though I will be posting a few things soon.

In the meantime, a reminder to my APS colleagues:  The APS divisional elections are going on right now, ending on January 4.  The Division of Condensed Matter Physics and the Division of Materials Physics are both holding elections, and unfortunately there were some problems with the distributions of the electronic ballots, particularly to people with ".edu" email addresses.  These issues have been resolved and reminders sent, but if you are a member of DCMP or DMP and have not received your ballots electronically, I urge you to contact the respective secretary/treasurers (linked from the governance sections of the division webpages).   (Full disclosure:  I'm a candidate for a DCMP "member-at-large" position.)

Friday, December 18, 2015

Big Ideas in Quantum Materials - guest post, anyone?

Earlier this week UCSD played host to what looks like a great conference/workshop, Big Ideas in Quantum Materials.  Unfortunately, due to multiple commitments here I was unable to attend.  Would anyone who did go like to write a guest post hitting some of the highlights or summarizing the major insights from the meeting?  If so, please respond in the comments or contact me via email and we can do this.  (I'd rather do this as a post than have someone try to squeeze it into the comments since the format is more flexible, incl links, etc.)

Wednesday, December 16, 2015

Rice Academy of Fellows

This is a non-physics post, as I struggle to get done many tasks before the break.

Rice is jump-starting a new endowed postdoctoral fellow program (think Harvard Society of Fellows/Berkeley Miller Institute).  The first set of fellows is going to be "health-related research" themed, with subsequent cohorts of fellows having different themes.  Here is an announcement with additional information, if you or someone you know is interested:

The Rice University Academy of Fellows is accepting applications for its first cohort of scholars through January 11, 2016.  Scholars who want to pursue health-related research can find details and apply at http://www.riceacademy.rice.edu.    Applicants must have earned their doctoral degree between September 1, 2012 and August 31, 2016, and postdoctoral fellows are expected to begin September 1, 2016. All Rice professors are eligible to host Rice Academy Postdoctoral Fellows.  

Joining the Rice University Academy of Fellows is a fantastic opportunity for young scholars.  The postdoctoral fellows will join a dynamic intellectual community led by the Rice Academy Faculty Fellows.   The standard stipend is $60,000 (the advisor, host department, or some other entity must contribute $20,000 of the stipend plus the corresponding fringe).  Rice Academy Postdoctoral Fellows take a concurrent adjunct non-tenure track faculty appointment.


Wednesday, December 02, 2015

Advanced undergrad labs - survey

To my readers at universities:  I am interested in learning more about how other institutions do junior/senior level physics undergrad lab courses.  My impression is that there are roughly three approaches:

  • Self-guided to varying degrees, students pick from some set of predefined experiments that are presumably meant to teach pieces of physics while exposing the students to key components of modern research (more serious data acquisition; statistics + error analysis; sophisticated research instrumentation beyond what they would see in a first-year undergrad lab, such as lock-in amplifiers, high speed counters and vetoing, lasers, vacuum systems).  Sometimes students would work with an instructor to commission a new experiment rather than do one of the existing set.  This approach is what I saw as an undergrad - I remember running into a classmate late at night who had been doing some classic experiment confirming the \(1/r^{2}\) form of the Coulomb force law, and I remember three friends working as a team to commission a dye laser as part of such a project.
  • More topically narrow but intense/sophisticated labs.  For example, when I was a grad student I was a TA for a dedicated low temperature physics lab, where students chose from a list of experiments, designed some apparatus (!), had the parts machined by the shop (!!), and then actually assembled and ran their experiments over the course of a quarter.  It gave students a real sense of serious experimental research in its various phases, but only aimed to expose them to a comparatively narrow slice of modern physics.  I've heard of similar lab courses based on optics or atomic physics projects, and entire courses about electronics.
  • Some hybrid, where students do a combination of pre-fab experiments and then do a one-semester experimental project actually in an active research group, as part of their lab training and credit.
Questions for readers:  Am I leaving out some approach that you've experienced or run across?  If you're a faculty member in a university physics department, what does your department want/hope the undergraduates get out of these lab experiences?  Are there approaches to more advanced formal lab training that you particularly like and find successful (not counting having individual undergrads work in research groups, which we always encourage anyway)?  Students, were there particular labs or approaches that you really found valuable?

Tuesday, December 01, 2015

Various items - solids, explanations, education reform, and hiring for impact

I'm behind in a lot of writing of various flavors right now, but here are some items of interest:

  • Vassily Lubchenko across town at the University of Houston has written a mammoth review article for Adv. Phys. about how to think about glasses, the glass transition, and crystals.  It includes a real discussion of mechanical rigidity as a comparatively universal property - basically "why are solids solid?".  
  • Randall Munroe of xkcd fame has come out with another book, Thing Explainer, in which he tackles a huge array of difficult science and technology ideas and concepts using only the 1000 most common English words.  For a sample, he has an article in this style in The New Yorker in honor of the 100th anniversary of general relativity.
  • There was an editorial in the Washington Post on Sunday talking about how to stem the ever-rising costs of US university education.  This is a real problem, though I'm concerned that some of the authors' suggestions don't connect to the real world (e.g., if you want online courses to function in a serious, high quality way, that still requires skilled labor, and that labor isn't free).
  • Much university hiring is incremental, and therefore doesn't "move the needle" much in terms of departmental rankings, reputation, or resources.  There are rare exceptions.  Four years ago the University of Chicago launched their Institute for Molecular Engineering, with the intent of creating something like 35 new faculty lines over 15 years.  Now Princeton has announced that they are going to hire 10 new faculty lines in computer science.  That will increase the size of that one department from 32 to 42 tenure/tenure-track faculty.   Wow.

Thursday, November 26, 2015

Anecdote 7: Time travel and the most creative lecture I ever saw

My senior undergrad year, Princeton offered their every-three-years-or-so undergrad general relativity course (AST 301), taught at the time by J. R. Gott III.   Prof. Gott ran a pretty fun class, and he was a droll lecturer with a trace Southern accent and a dry sense of humor.  He was most well known at the time for solving the equations of general relativity for the case of cosmic strings, sort of 1d analogs of black holes.  He'd shown that if you have one cosmic string move past another at speeds approaching the speed of light, you could in principle go back in time.

The lectures were in a small tiered auditorium with the main door in the front, and a back entrance behind the last row.  On one Thursday in the middle of the semester, we were sitting there waiting for class to start, when the front door of the auditorium flies open, and in bursts Gott, with (uncharacteristically) messy hair and dressed (unusually) in some kind of t-shirt.  He dashed in, ran over to the utility closet in the front of the room, tore it open, and threw in a satchel of some kind before slamming the door.  He turned, wild-eyed, to the class, and proclaimed, "Don't be alarmed by anything you may see here today!" before running out the front door.

This was odd.  We looked around at each other, rather mystified.

Two minutes later, right at the official start time for class, the back door of the classroom opened, and in stepped a calm, combed Prof. Gott, wearing a dress shirt, tie, and jacket.  He announced, "I'm really sorry about this, but my NASA program officer is here today on short notice, and I have to meet with him about my grant.  Don't worry, though.  I've arranged a substitute lecturer for today, who should be here any minute.  I'll see you next Tuesday."  He then left by the back door.

Another minute or two goes by.  The front door opens again, and in steps a reasonably composed Prof. Gott, again wearing the t-shirt.  "Good morning everyone.  I'm your substitute lecturer today.  I've come back in time from after next Tuesday's lecture to give this class."

This was met with good-natured laughter by the students.

"Unfortunately," he continued, "I didn't have time to prepare any transparencies for today.  That's fine, though, because I'll just make them after the lecture, and then go back in time to before the lecture, and leave them somewhere for myself.  Ahh - I know!  The closet!"  He walked over to the closet, opened the door, and retrieved the bag with the slides.  There was more laughter and scattered clapping.  "Of course," said Prof. Gott, "now that I have these, I don't have to make them, do I.  I can just take these slides back to before the start of class."  Pause for effect.  "So, if I do that, then where did these come from?" More laughter.

Prof. Gott went on to deliver a class about time travel within general relativity (note to self:  I need to read this book!).

Postscript:  The following Tuesday, Prof. Gott arrived to teach class wearing the t-shirt outfit from the previous Thursday.   We were suitably impressed by this attention to detail.  As he walked by handing back our homework sets, I noticed that his wristwatch had a calendar on it, and I said that we should've checked that last time.  He hesitated, smiled a little grin, and then went on.




Thursday, November 19, 2015

Entanglement + spacetime - can someone clue me in here?

There is a feature article in the current issue of Nature titled "The quantum source of space-time", and I can't decide if I'm just not smart enough to appreciate some brilliant work, or if this is some really bad combination of hype and under-explanation.   

The article talks about the AdS-CFT correspondence - the very pretty mathematical insight that sometimes you can take certain complicated "strong coupling" problems (say gravitational problems) in 3d and map them mathematically to simpler (weakly coupled) problems about variables that live on the 2d boundary of that 3d volume.  I've mentioned this before as a trendy idea that's being applied to some condensed matter problems, though this is not without criticism.

Anyway, the article then says that there is deep high energy theory work going on looking at what happens if you mess with quantum entanglement of the degrees of freedom on that boundary.  The claim appears to be that, in some abstract limit that I confess I don't understand, if you kill entanglement on the boundary, then spacetime itself "falls apart" in the 3d bulk.  First question for my readers:  Can anyone point to a genuinely readable discussion of this stuff (tensor networks, etc.) for the educated non-expert?

Then things really go off the deep end, with claims that entanglement between particles is equivalent to an Einstein-Rosen wormhole connecting the particles.  Now, I'm prepared to believe that maybe there is some wild many-coordinate-transformations way of making the math describing entanglement look like the math describing some wormhole.  However, the theorists quoted here say things that sound stronger than that, and that's completely crazy.  I can create entangled photons in a lab with a low-power laser and a nonlinear crystal, and there is no way that this is physically equivalent to creating highly curved regions of spacetime and nontrivially altering the topology of spacetime.    Can someone explain to me whether the theoretical claims are like the former (there is some formal mathematical similarity between entangled particles and wormholes) or the much more extreme statement?

Tuesday, November 17, 2015

Guide to faculty searches, 2015 edition

As I did four years ago and four years before that, I wanted to re-post my primer on how faculty searches work in physics.  I know the old posts are out there and available via google, but I feel like it never hurts to revisit career-related topics at some rate.  For added complementary info, here is a link to a Physics Today article from 2001 about this topic.

Here are the steps in the typical faculty search process:
  • The search gets authorized. This is a big step - it determines what the position is, exactly: junior vs. senior; a new faculty line vs. a replacement vs. a bridging position (i.e., we'll hire now, and when X retires in three years, we won't look for a replacement then). The main challenges are two-fold: (1) Ideally the department has some strategic plan in place to determine the area that they'd like to fill. Note that not all departments do this - occasionally you'll see a very general ad out there that basically says, "ABC University Dept. of Physics is authorized to search for a tenure-track position in, umm, physics. We want to hire the smartest person that we can, regardless of subject area." The challenge with this is that there may actually be divisions within the department about where the position should go, and these divisions can play out in a process where different factions within the department veto each other. This is pretty rare, but not unheard of. (2) The university needs to have the resources in place to make a hire.  In tight financial times, this can become more challenging. I know of public universities that had to cancel searches in 2008/2009, even after authorization, when the budget cuts got too severe. A well-run university will be able to make these judgments with some leadtime and not have to back-track.
  • The search committee gets put together. In my dept., the chair asks people to serve. If the search is in condensed matter, for example, there will be several condensed matter people on the committee, as well as representation from the other major groups in the department, and one knowledgeable person from outside the department (in chemistry or ECE, for example). The chairperson or chairpeople of the committee meet with the committee or at least those in the focus area, and come up with draft text for the ad.  In cross-departmental searches (sometimes there will be a search in an interdisciplinary area like "energy"), a dean would likely put together the committee.
  • The ad gets placed, and canvassing begins of lots of people who might know promising candidates. A special effort is made to make sure that all qualified women and underrepresented minority candidates know about the position and are asked to apply (the APS has mailing lists to help with this, and direct recommendations are always appreciated - this is in the search plan). Generally, the ad really does list what the department is interested in. It's a huge waste of everyone's time to have an ad that draws a large number of inappropriate (i.e. don't fit the dept.'s needs) applicants. The exception to this is the generic ad like the type I mentioned above. Back when I was applying for jobs, MIT and Berkeley had run the same ad every year, grazing for talent. They seem to do just fine. The other exception is when a university already knows who they want to get for a senior position, and writes an ad so narrow that only one person is really qualified. I've never seen this personally, but I've heard anecdotes.
  • In the meantime, a search plan is formulated and approved by the dean. The plan details how the search will work, what the timeline is, etc. This plan is largely a checklist to make sure that we follow all the right procedures and don't screw anything up. It also brings to the fore the importance of "beating the bushes" - see above. A couple of people on the search committee will be particularly in charge of oversight on affirmative action/equal opportunity issues.
  • The dean usually meets with the committee and we go over the plan, including a refresher for everyone on what is or is not appropriate for discussion in an interview (for an obvious example, you can't ask about someone's religion, or their marital status).
  • Applications come in.  Everyone does this electronically now, which is generally a big time-saver.  (Some online systems can be clunky, since occasionally universities try to use the same system to hire faculty as they do to hire groundskeepers, but generally things go smoothly.)  Every year when I post this, someone argues that it's ridiculous to make references write letters, and that the committee should do a sort first and ask for letters later.  I understand this perspective, but I largely disagree. Letters can contain an enormous amount of information, and sometimes it is possible to identify outstanding candidates due to input from the letters that might otherwise be missed. (For example, suppose someone's got an incredible piece of postdoctoral work about to come out that hasn't been published yet. It carries more weight for letters to highlight this, since the candidate isn't exactly unbiased about their own forthcoming publications.)  
  • The committee begins to review the applications. Generally the members of the committee who are from the target discipline do a first pass, to at least weed out the inevitable applications from people who are not qualified according to the ad (i.e. no PhD; senior people wanting a senior position even though the ad is explicitly for a junior slot; people with research interests or expertise in the wrong area). Applications are roughly rated by everyone into a top, middle, and bottom category. Each committee member comes up with their own ratings, so there is naturally some variability from person to person. Some people are "harsh graders". Some value high impact publications more than numbers of papers. Others place more of an emphasis on the research plan, the teaching statement, or the rec letters. Yes, people do value the teaching statement - we wouldn't waste everyone's time with it if we didn't care. Interestingly, often (not always) the people who are the strongest researchers also have very good ideas and actually care about teaching. This shouldn't be that surprising. Creative people can want to express their creativity in the classroom as well as the lab.  "Type A" organized people often bring that intensity to teaching as well.
  • Once all the folders have been reviewed and rated, a relatively short list (say 20-25 or so out of 120 applications) is formed, and the committee meets to hash that down to, in the end, four or five to invite for interviews. In my experience, this happens by consensus, with the target discipline members having a bit more sway in practice since they know the area and can appreciate subtleties - the feasibility and originality of the proposed research, the calibration of the letter writers (are they first-rate folks? Do they always claim every candidate is the best postdoc they've ever seen?). I'm not kidding about consensus; I can't recall a case where there really was a big, hard argument within a committee on which I've served. I know I've been lucky in this respect, and that other institutions can be much more feisty. The best, meaning most useful, letters, by the way, are the ones that say things like "This candidate is very much like CCC and DDD were at this stage in their careers." Real comparisons like that are much more helpful than "The candidate is bright, creative, and a good communicator." Regarding research plans, the best ones (for me, anyway) give a good sense of near-term plans, medium-term ideas, and the long-term big picture, all while being relatively brief and written so that a general committee member can understand much of it (why the work is important, what is new) without being an expert in the target field. It's also good to know that, at least at my university, if we come across an applicant who doesn't really fit our needs, but meshes well with an open search in another department, we send over the file. This, like the consensus stuff above, is a benefit of good, nonpathological communication within the department and between departments.
That's pretty much it up to the interview stage. No big secrets. No automated ranking schemes based exclusively on h numbers or citation counts.

Tips for candidates:

  • Don't wrap your self-worth up in this any more than is unavoidable. It's a game of small numbers, and who gets interviewed where can easily be dominated by factors extrinsic to the candidates - what a department's pressing needs are, what the demographics of a subdiscipline are like, etc. Every candidate takes job searches personally to some degree because of our culture and human nature, but don't feel like this is some evaluation of you as a human being.
  • Don't automatically limit your job search because of geography unless you have some overwhelming personal reasons.  I almost didn't apply to Rice because neither my wife nor I were particularly thrilled about Texas, despite the fact that neither of us had ever actually visited the place. Limiting my search that way would've been a really poor decision - I've now been here 15+ years, and we've enjoyed ourselves (my occasional Texas-centric blog posts aside).
  • Really read the ads carefully and make sure that you don't leave anything out. If a place asks for a teaching statement, put some real thought into what you say - they want to see that you have actually given this some thought, or they wouldn't have asked for it.
  • Research statements are challenging because you need to appeal to both the specialists on the committee and the people who are way outside your area. My own research statement back in the day was around three pages. If you want to write a lot more, I recommend having a brief (2-3 page) summary at the beginning followed by more details for the specialists. It's good to identify near-term, mid-range, and long-term goals - you need to think about those timescales anyway. Don't get bogged down in specific technique details unless they're essential. You need committee members to come away from the proposal knowing "These are the Scientific Questions I'm trying to answer", not just "These are the kinds of techniques I know". I know that some people may think that research statements are more of an issue for experimentalists, since the statements indicate a lot about lab and equipment needs. Believe me - research statements are important for all candidates. Committee members need to know where you're coming from and what you want to do - what kinds of problems interest you and why. The committee also wants to see that you actually plan ahead. These days it's extremely hard to be successful in academia by "winging it" in terms of your research program.
  • Be realistic about what undergrads, grad students, and postdocs are each capable of doing. If you're applying for a job at a four-year college, don't propose to do work that would require $1M in startup and an experienced grad student putting in 60 hours a week.
  • Even if they don't ask for it, you need to think about what resources you'll need to accomplish your research goals. This includes equipment for your lab as well as space and shared facilities. Talk to colleagues and get a sense of what the going rate is for start-up in your area. Remember that four-year colleges do not have the resources of major research universities. Start-up packages at a four-year college are likely to be 1/4 of what they would be at a big research school (though there are occasional exceptions). Don't shave pennies - this is the one prime chance you get to ask for stuff! On the other hand, don't make unreasonable requests. No one is going to give a junior person a start-up package comparable to that of a mid-career scientist.
  • Pick letter-writers intelligently. Actually check with them that they're willing to write you a nice letter - it's polite and it's common sense. (I should point out that truly negative letters are very rare.) Beyond the obvious two (thesis advisor, postdoctoral mentor), it can sometimes be tough finding an additional person who can really say something about your research or teaching abilities. Sometimes you can ask those two for advice about this. Make sure your letter-writers know the deadlines and the addresses. The more you can do to make life easier for your letter writers, the better.
As always, more feedback in the comments is appreciated.

Monday, November 09, 2015

Aliens, quantum consciousness, and the like

I've seen a number of interesting news items lately.  Here are some that you may have missed.
  • You've probably heard about the recent observations of the star KIC 8462852.  This star, over 1000 ly away, was observed by the Kepler planet-finding mission looking for transit signatures of extrasolar planets.  Short version:  It seems likely that some very weird objects are in orbit around the star, occasionally occulting large percentages (like more than 20%) of the star's light, far more light blocking and much less periodicity than is typically seen in the other 150,000 stars observed by Kepler.   On the one hand, it was suggested that one exotic (though of course unlikely) explanation for this unique phenomenon is megastructures built by an alien civilization.  This was freely admitted to be a long-shot, but generated a lot of attention and excitement.  Now there has been a followup, where observers have pointed the Allen array at the star, and they have looked from 1-3 GHz for unusual radio emissions, finding nothing.   This has generally been handled in the press like it kills the aliens explanation.   Actually, if you read the paper, we'd only be able to detect such emissions (assuming they aren't beamed right at us) if the system was putting out something like a petawatt (\(10^{15}\) watts) in that frequency range.   The most likely explanation of the Kepler observations is still some natural phenomenon, but the lack of detectable radio signals is hardly conclusive evidence for that.
  • I was previously unaware that the Institute for Quantum Information and Matter at Caltech had a really nice blog, called Quantum Frontiers.  There, I learned from John Preskill that Matthew Fisher has been making serious suggestions that quantum information physics may be relevant to biological systems and neuroscience.   It's important to look hard at these ideas, but I have to admit my own deep-rooted skepticism that either (1) entangled or superposition states survive in biological environments long enough to play a central role in neurological effects; or (2) that there are biological mechanisms for performing quantum information operations on such states.  While nuclear spins are a comparatively isolated quantum system, it's very hard for me to see how they could be entangled and manipulated in some serious way biologically.
  • You may need to sit down for this.  We probably have not found evidence of parallel universes.
  • Nature had a nice Halloween article about six "zombie" ideas in physics that are so-named because they either refuse to die or are "undead".  I've talked about the Big G problem before.
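To put the radio-detection threshold from the first item above in perspective, here's my own back-of-the-envelope estimate (the ~1500 light-year distance and the assumption of isotropic emission are mine, not numbers taken from the paper):

```python
import math

# Rough flux at Earth from an isotropic petawatt radio source at the
# approximate distance of KIC 8462852.  Both numbers are assumptions
# for illustration, not values from the SETI followup paper.

P = 1.0e15                    # W, assumed isotropic radiated power
d = 1500 * 9.461e15           # m, ~1500 light-years (assumed distance)
flux = P / (4 * math.pi * d**2)

print(flux)   # a few times 1e-25 W/m^2 - a very faint signal indeed
```

The point is just that even a petawatt transmitter is diluted to an astonishingly tiny flux by the time it reaches us, which is why the non-detection rules out so little.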

Tuesday, November 03, 2015

Anecdote 6: The Monkey Problem

I should've written about this as a Halloween physics scary story, but better late than never....  I'll try to tell this so that both physicists and non-technical readers can get some appreciation for the horror that was The Monkey Problem [dramatic chord!].

Back in my first year of grad school, along with about 20 of my cohort, I was taking the graduate-level course on classical mechanics at Stanford.  How hard can mechanics really get, assuming you don't try to do complicated chaos and nonlinear dynamics stuff?  I mean, it's just basic stuff  - blocks sliding down inclined planes, spinning tops, etc., right?  Right?

The course was taught by Michael Peskin, a brilliant and friendly person who was unfailingly polite ("Please excuse me!") while demonstrating how much more physics he knew than us.   Prof. Peskin clearly enjoys teaching and is very good at it, though he does have a tendency toward rapid-fire, repeated changes of variables ("Instead of working in terms of \(x\) and \(y\), let's do a transformation and work in terms of \(\xi(x,y)\) and \(\zeta(x,y)\), and then do a transformation and work in terms of their conjugate momenta, \(p_{\xi}\) and \(p_{\zeta}\).") and some unfortunate choices of notation ("Let the initial and final momenta of the particles be \(p\), \(p'\), \(\bar{p}\), and \(\bar{p}'\), respectively.").  For the final exam in the class, a no-time-limit (except for the end of exam period) take-home, Prof. Peskin assigned what has since become known among its victims as The Monkey Problem.

For non-specialists, let me explain a bit about rotational motion.   You can skip this paragraph if you want, but if you're not a physicist, the horror of what is to come might not come across as well.  There are a number of situations in mechanics where we care about extended objects that are rotating.  For example, you may want to be able to describe and understand a ball rolling down a ramp, or a flywheel in some machine.   The standard situation that crops up in high school and college physics courses is the "rigid body", where you know the axis of rotation, and you know how mass is distributed around that axis.   The "rigid" part means that the way the mass is distributed is not changing with time.  If no forces are acting to "spin up" or "spin down" the body (no torques), then we say its "angular momentum" \(\mathbf{L}\) is constant.  In this simple case, \(\mathbf{L}\) is proportional to the rate at which the object is spinning, \(\mathbf{\omega}\), through the relationship \(\mathbf{L} = \tilde{\mathbf{I}}\cdot \mathbf{\omega}\).    Here \(\tilde{\mathbf{I}}\) is called the "inertia tensor" or for simple situations the "moment of inertia", and it is determined by how the mass of the body is arranged around the axis of rotation.  If the mass is far from the axis, \(I\) is large; if the mass is close to the axis, \(I\) is small.  Sometimes even if we relax the "rigid" constraint things can be simple.  For example, when a figure skater pulls in his/her arms (see figure, from here), this reduces \(I\) about the rotational axis, meaning that \(\omega\) must increase to preserve \(L\).
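The skater example above boils down to one line of algebra, which can be sketched numerically. The moments of inertia below are illustrative guesses of mine, not measured values:

```python
# Minimal sketch of angular momentum conservation for a non-rigid body
# (the figure skater): with no torques, L = I * omega is constant, so
# reducing I forces omega to increase.  Numbers are assumed, for illustration.

I_arms_out = 4.0    # kg m^2, arms extended (assumed)
I_arms_in  = 1.2    # kg m^2, arms pulled in (assumed)
omega_out  = 2.0    # rad/s, initial spin rate

L = I_arms_out * omega_out     # angular momentum, conserved
omega_in = L / I_arms_in       # spin rate after pulling arms in

print(omega_in)   # the skater spins up by the factor I_out/I_in
```

This simple scalar version only works because the rotation axis stays fixed; the horror of the problem below comes from losing exactly that simplification.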

Prof. Peskin posed the following problem:
When you look at the problem as a student, you realize a couple of things.  First, you break out in a cold sweat, because this is a non-rigid body problem.  That is, the inertia tensor of the (cage+monkey) varies with time, \(\tilde{\mathbf{I}}= \tilde{\mathbf{I}}(t)\), and you are expected to come up with \(\mathbf{\omega}(t)\).    However, you realize that there are signs of hope:

1) Thank goodness there is no gravity or other force in the problem, so \(\mathbf{L}\) is constant.  That means all you have to do for part (a) is solve \(\mathbf{L} = \tilde{\mathbf{I}}(t)\cdot \mathbf{\omega}(t)\) for \(\mathbf{\omega}(t)\).  

2) The monkey at least moves "slowly", so you don't have to worry about the possibility that the monkey moves very far during one spin of the cage.  Physicists like this kind of simplification - basically making this a quasi-static problem.

3) Initially the system is rotationally symmetric about \(z\), so \(\mathbf{L}\) and \(\mathbf{\omega}\) both point along \(z\).  That's at least simple.  

4) Moreover, at the end of the problem, the system is again rotationally symmetric about \(z\), meaning that \(\mathbf{\omega}\) at the conclusion of the problem just has to be the same as \(\mathbf{\omega}\) at the beginning.

Unfortunately, that's about it.  The situation isn't so bad while the monkey is crawling along the disk.  However, once the monkey reaches the edge of the disk and starts climbing toward the north pole of the cage, the problem becomes very very messy.  The plane of the disk starts to tilt as the monkey climbs.  The whole system looks a lot like it's tumbling.  While \(\mathbf{L}\) stays pointed along \(z\), the angular velocity \(\mathbf{\omega}\) moves all over the place.  You end up with differential equations describing everything that can only be solved numerically on a computer.
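The quasi-static bookkeeping itself is easy to sketch: at each instant you solve the 3x3 linear system \(\tilde{\mathbf{I}}(t)\cdot \mathbf{\omega}(t) = \mathbf{L}\). This is my own toy illustration - the inertia tensor below is invented, not the actual cage+monkey one - but it shows how an off-diagonal term (the monkey climbing off the symmetry axis) makes \(\mathbf{\omega}\) tilt away from \(z\) even though \(\mathbf{L}\) never moves:

```python
import numpy as np

# Toy quasi-static non-rigid-body problem (my own construction, not the
# actual exam problem): L is fixed along z, and the inertia tensor I(t)
# changes with time.  At each instant, omega(t) solves I(t) . omega = L.

L = np.array([0.0, 0.0, 1.0])   # angular momentum, conserved (no torques)

def inertia(t):
    """Invented symmetric inertia tensor, t in [0, 1]; the off-diagonal
    Ixz term grows as the 'monkey' climbs off the symmetry axis."""
    Izz = 1.0 + 0.5 * t
    Ixx = Iyy = 0.8
    Ixz = 0.3 * t               # breaks the rotational symmetry about z
    return np.array([[Ixx, 0.0, Ixz],
                     [0.0, Iyy, 0.0],
                     [Ixz, 0.0, Izz]])

def omega(t):
    # solve the instantaneous linear system I(t) . omega = L
    return np.linalg.solve(inertia(t), L)

print(omega(0.0))  # purely along z: symmetric starting configuration
print(omega(1.0))  # omega has tilted away from z, even though L has not
```

In the real problem the ugly part is that \(\tilde{\mathbf{I}}(t)\) itself depends on the (unknown) orientation history of the cage, which is what forces the full numerical treatment.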

This is a good example of a problem that turned out a wee bit harder than the professor was intending.  Frustration among the students was high.  At the time, I had two apartment-mates who were also in the class.  We couldn't discuss it during the exam week because of the honor code.  We'd see each other on the way to brush teeth or eat dinner, and an unspoken "how's it going?" would be met with an unspoken "ughh."  Prof. Peskin polled the SLAC theory group for their answers, supposedly.  (My answer was apparently at least the same as Prof. Peskin's, so that's something.)  Partial credit was generous.  

There was also a nice sarcastic response on the part of the students, in the form of "alternate solutions".  Examples included "Heisenberg's Monkey" (the uncertainty principle makes it impossible to know \(\mathbf{I}\)....); "Feynman's Monkey" (the monkey picks the lock on the cage and escapes); and the "Gravity Probe B Monkey" (I need 30 years and $143M to come to an approximate solution).  

Now I feel like I've done my cultural duty, spreading the tale of the mechanics problem so hard that we all remember it 22+ years later.  Professors, take this as a cautionary tale of how easy it can be to write really difficult problems.  To my students, at least I've never done anything like this to you on an exam....

Thursday, October 29, 2015

Phase transitions: Hiding in plain sight

Phase transitions are amazing things.  As some knob is turned (temperature, pressure, magnetic field), a physical system undergoes a sudden, seemingly discontinuous change in material properties.  This doesn't happen when looking only at one or two of the constituents of the system - only when we have a big ensemble.  In our daily experience, we are used to the control parameter being temperature, and we take particular notice of phases that have dramatically different, obvious-to-the-naked-eye properties.  Solids and liquids have completely different responses to shear (or rather, liquids lack rigidity).  Liquids and gases have vastly different densities.

It turns out that there are many more phases out there, distinguished in ways that are more subtle and harder to see.  Gadolinium is a magnet below room temperature, and a nonmagnetic metal above room temperature, but to the naked eye looks the same in both phases.  We only know that the transition is there because it has measurable consequences (e.g., you could actually see magnetic forces from a hunk of cold gadolinium).

This week, there was some media attention paid to work from David Hsieh's group at Caltech, where they discovered an example of a particularly subtle transition in strontium iridate (Sr\(_2\)IrO\(_4\)).  In that stuff, similar in some ways to the copper oxide superconductors (based on CuO\(_4\) motifs), there are unpaired electrons (and therefore unpaired spins) on the iridium atoms.  Below a critical temperature (near 200 K, or about -70 Celsius), these spins somehow spontaneously arrange themselves in a subtle way that picks out special directions in the crystal lattice and breaks mirror symmetry, but is not some comparatively well-known kind of magnetic ordering.  They are only able to identify this weird "hidden" ordered phase via a particular optical measurement, since the broken symmetry of the ordered state "turns on" some optical processes that would otherwise be forbidden.  The interest in this system stems from whether similar things may be happening in the cuprate superconductors, and whether the iridates could be tweaked and manipulated like the cuprates.  Neat stuff, and a reminder that sometimes nature can be subtle:  There can be a major change in symmetry properties (i.e., above the transition temperature, the x and y directions in the crystal are equivalent, but below the transition they aren't anymore) that shows up spontaneously, but is still hard to detect.

Tuesday, October 27, 2015

No, dark matter probably did not do in the dinosaurs.

Resurfacing briefly in the midst of proposal writing, I feel compelled to comment on recent media coverage of Lisa Randall's new book.  The science story is an interesting one, as recounted here.   In brief:  There is some evidence of periodicity in mass extinctions due to impacts, though there are at least three hidden assumptions even in that statement.  One possible source of periodicity could be associated with the motion of the solar system (and by extension the earth) in and out of the galactic plane as the sun orbits the center of mass of the Milky Way.  Lisa Randall and Matthew Reece argued that passage through a comparatively thin disk of dark matter in the galactic plane could produce gravitational perturbations that dink Oort Cloud comets toward the inner solar system.  A neat idea, though pretty hard to test except through indirect means.

So, could this have happened?  Sure.  Is it likely?  Well, that's much trickier to evaluate.  There are plenty of other sources of gravitational perturbations (e.g., the passage of nearby stars) that we know for sure take place.  There is no strong observational evidence right now that the would-be disk of dark matter exists, let alone whether it has the properties needed to provide a significant uptick in cometary impacts.  Lisa Randall is undoubtedly a gifted writer, and there is real science here (witness the published paper that discusses ways to test the idea), but the breathless media reaction is somewhat disappointing.

Sunday, October 18, 2015

STEM education and equality of opportunity

Friday evening I went to the Katy Independent School District's Robert R. Shaw Center for STEAM, where they were having a Science Movie Night (pdf).  The science and technology policy part of Rice's Baker Institute for Public Policy had put the organizers in touch with me.  It was a very fun time.   On a night when there were two (!) homecoming high school football games next door, the movie night drew about 80 highly engaged students.  After the film, they stayed and we talked about the science of the film (what it got right and what they fudged) for another half an hour.

The Shaw Center is amazing - it's a fantastic space, something like 10,000 sq ft of reconfigurable maker-space with a shop and immersive labs, and it provides a home to more than one of Katy ISD's robotics teams.  Frankly, this place rivals or exceeds the available undergrad lab facilities at many universities.  Katy is a reasonably affluent suburb of Houston, and I was floored to learn that this great science/engineering facility was built with district money, not donations or corporate sponsorship/underwriting.  This is a case where public school funding has been deliberately and consciously dedicated to providing a district-wide resource for hands-on science and engineering learning.

In a study in contrasts, my sons and I then volunteered Saturday morning at the Teachers Aid facility run by the Houston Food Bank.  At the Teachers Aid facility, teachers from qualifying schools (where 70+% of the enrollment is sufficiently low-income that they qualify for free lunches) can arrive, by appointment, and pick up basic school supplies (pencils, pens, notebooks) for their students.  In three hours we helped about 70 teachers who serve more than 3000 students.  These are teachers who chose to come in on their own time, to get basic supplies that neither their schools nor the students themselves can afford.  

It's appalling to see the divergence in basic educational opportunities between the more affluent school districts and the economically disadvantaged.  We have to do better.  Making sure that children, regardless of their background, have access to a good education should be a guiding principle of our society, not something viewed as pie-in-the-sky or politically tainted.  It amazes me that some people seem to disagree with this.

Tuesday, October 13, 2015

Several topics: The Nobel, self-organization, and Marcy

I'm heavily into some writing obligations right now, but I wanted to point out a few things:

  • Obviously this year's Nobel in physics was for neutrino oscillations.  ZZ has thoughtfully provided two nice postings (here and here) with relevant links for further reading.  I remember the solar neutrino problem vividly from undergrad days, including one professor semi-jokingly speculating that perhaps the lack of expected electron neutrinos was because the sun had gone out.  The whole particles-oscillating-into-new-types business, famous in physics circles from the K mesons, is tough to explain to a general audience.  You could probably come up with some analog description involving polarized light....
  • Here is a neat article about a record Chinese traffic jam, and it includes links to papers about self-organization.  I'm happy to see this kind of article - I think there is real value in pointing out to people that there can be emergent, organized states that result from the aggregate of simple rules.  That's the heart of condensed matter and statistical physics.
  • This week Buzzfeed helped break the story that Geoff Marcy, an astronomer famed for his role in the discovery of extrasolar planets and mentioned as a likely Nobel candidate, was found guilty of multiple violations of Berkeley's sexual misconduct policy.  Note to those who haven't followed this:  This is the result of a long investigation - it's not hearsay, it's essentially a conviction via a long university investigative process.  Marcy's apology letter is here.   Apparently this bad behavior had been tolerated for many years.  Berkeley's response has been, shall we say, tepid.  Despite a clear finding of years of inappropriate behavior involving students 1/3 his age, the response is "do this any more, and you'll be dismissed".  The initial response from the department head had an inexcusably awful last paragraph that implied that this whole process was hardest on Marcy, rather than on the victims.   This is terrible.   People, including departmental colleagues, are calling for Marcy to step down.  Bottom line:  There is simply no excuse for this kind of behavior, and actual sanctions must be applied when people are found through due process to have violated this level of basic social decency.

Tuesday, October 06, 2015

Table-top electron microscopes

A quick question in the hopes that some people in this blog's readership have direct experience:  Anyone work with a table-top scanning electron microscope (SEM) that they really like?  Any rough idea of the cost and operations challenges (e.g., having to replace tungsten filaments all the time)?  I was chatting with someone about educational opportunities that something like these would present, and I was curious about the numbers without wanting to email vendors or anything that formal.  Thanks for any information.

(Note: It would really be fun to try to develop a really low-budget SEM - the electron microscopy version of this.  On the one hand, you could imagine microfabricated field emitters and the amazing cheapness of CCDs could help.  However, the need for good vacuum and some means of beam focusing and steering makes this much more difficult.  Clearly a good undergrad design project....)

Sunday, October 04, 2015

Annual Nobel speculation

It's getting to be that time of year again.  The 2015 Nobel Prize in Physics will be announced this coming Tuesday morning (EU/US time).  Based on past patterns, it looks like it could well be an astro prize.  Dark matter/galaxy rotation curves anyone?  Or extrasolar planets?  (I still like Aharonov + Berry for geometric phases, perhaps with Thouless as well.  However, it's unlikely that condensed matter will come around this year.)

On Wednesday, the chemistry prize will be awarded.  There, I have no clue.  Curious Wavefunction has a great write-up that you should read, though.

Speculate away!

Wednesday, September 30, 2015

DOE Experimental Condensed Matter Physics PI Meeting 2015 - Day 3

Things I learned from the last (half)day of the DOE PI meeting:
  • "vortex explosion" would be a good name for a 1980s metal band.
  • Pulsed high fields make possible some really amazing measurements in both high \(T_{\mathrm{C}}\) materials and more exotic things like SmB\(_6\).
  • Looking at structural defects (twinning) and magnetic structural issues (spin orientation domain walls) can give insights into complicated issues in pnictide superconductors.
  • Excitons can be a nice system for looking at coherence phenomena ordinarily seen in cold atom systems.  See here and here.  Theory proposes that you could play with these at room temperature with the right material system.
  • Thermal gradients can drive spin currents even in insulating paramagnets, and these can be measured with techniques that could be performed down to small length scales. 
  • Very general symmetry considerations when discussing competing ordered states (superconductivity, charge density wave order, spin density wave order) can lead to testable predictions.
  • Hybrid, monocrystalline nanoparticles combining metals and semiconductors are very pretty and can let you drive physical processes based on the properties of both material systems.

Tuesday, September 29, 2015

DOE Experimental Condensed Matter Physics PI Meeting 2015 - Day 2

Among the things I learned at the second day of the meeting:

  • In relatively wide quantum wells, and high fields, you can enter the quantum Hall insulating state.  Using microwave measurements, you can see signatures of phase transitions within the insulating state - there are different flavors of insulator in there.  See here.
  • As I'd alluded to a while ago, you can make "artificial" quantum systems with graphene-like energetic properties (for example).  
  • In 2d hole gases at the interface between Ge and overlying SiGe, you can get really huge anisotropy of the electrical resistivity in magnetic fields, with the "hard" axis along the direction of the in-plane magnetic field.
  • In single-layer thick InN quantum wells with GaN above and below, you can have a situation where there is basically zero magnetoresistance.  That's really weird.
  • In clever tunneling spectroscopy experiments (technique here) on 2d hole gases, you can see sharp inelastic features that look like inelastic excitation of phonons.
  • Tunneling measurements through individual magnetic nanoparticles can show spin-orbit-coupling-induced level spacings, and cranking up the voltage bias can permit spin processes that are otherwise blockaded.  See here.
  • Niobium islands on a gold film are a great tunable system for studying the motion of vortices in superconductors, and even though the field is a mature one, new and surprising insights come out when you have a clean, controlled system and measurement techniques. 
  • Scanning Josephson microscopy (requiring a superconducting STM tip, a superconducting sample, and great temperature and positional control) is going to be very powerful for examining the superconducting order parameter on atomic scales.
  • In magnetoelectric systems (e.g., ferroelectrics coupled to magnetic materials), combinations of nonlinear optics and electronic measurements are required to unravel which of the various possible mechanisms (charge vs strain mediated) generates the magnetoelectric coupling.
  • Strongly coupling light in a cavity with Rydberg atoms should be a great technique for generating many body physics for photons (e.g., the equivalent of quantum Hall).
  • Carbon nanotube devices can be great systems for looking at quantum phase transitions and quantum critical scaling, in certain cases.
  • Controlling vortex pinning and creep is hugely important in practical superconductors.  Arrays of ferromagnetic particles as in artificial spin ice systems can control and manipulate vortices.  Thermal fluctuations in high temperature superconductors could end up limiting performance badly, even if the transition temperature is at room temperature or more, and the situation is worse if the material is more anisotropic in terms of effective mass.
  • "Oxides are like people; it is their defects that make them interesting."

Monday, September 28, 2015

DOE Experimental Condensed Matter Physics PI meeting 2015 - Day 1

Things I learned at today's session of the DOE ECMP PI meeting:
  • In the right not-too-thick, not-too-thin layers of the 3d topological insulator Bi1.5Sb0.5Te1.7Se1.3 (a cousin of Bi2Se3 that actually is reasonably insulating in the bulk), it is possible to use top and bottom gates to control the surface states on the upper and lower faces independently.  See here.
  • In playing with suspended structures of different stackings of a few layers of graphene, you can get some dramatic effects, like the appearance of large, sharp energy gaps.  See here.
  • While carriers in graphene act in some ways like massless particles because their band energy depends linearly on their crystal momentum (like photon energy depends linearly on photon momentum in electrodynamics), they have a "dynamical" effective mass, \(m^* = \hbar (\pi n_{2d})^{1/2}/v_{\mathrm{F}}\), related to how the electronic states respond to an electric bias.  
  • PdCoO2 is a weird layered metal that can be made amazingly clean, so that its residual resistivity can be as small as 8 n\(\Omega\)-cm.  That's about 200 times smaller than the room temperature resistivity of gold or copper.  
  • By looking at how anisotropic the electrical resistivity is as a function of direction in the plane of layered materials, and how that anisotropy can vary with applied strain, you can define a "nematic susceptibility".  That susceptibility implies the existence of fluctuations in the anisotropy of the electronic properties (nematic fluctuations).  Those fluctuations seem to diverge at the structural phase transition in the iron pnictide superconductors.  See here.   Generically, these kinds of fluctuations seem to boost the transition temperature of superconductors.
  • YPtBi is a really bizarre material - nonmetallic temperature dependence, high resistivity, small carrier density, yet superconducts.  
  • Skyrmions (see here) can be nucleated in controlled ways in the right material systems.  Using the spin Hall effect, they can be pushed around.  They can also be moved by thermally driven spin currents, and interestingly skyrmions tend to flow from the cold side of a sample to the hot side.  
  • It's possible to pump angular momentum from an insulating ferromagnet, through an insulating antiferromagnet (NiO), and into a metal.  See here.
  • The APS Conferences for Undergraduate Women in Physics have been a big hit, judging by attendance.  Extrapolating, in a couple of years it looks like nearly all of the undergraduate women majoring in physics in the US will be attending one of these.
  • Making truly nanoscale clusters out of some materials (e.g., Co2Si, Mn5Si3) can turn them from weak ferromagnets or antiferromagnets in the bulk into strong ferromagnets in nanoparticle form.   See here.
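As a sanity check on the dynamical mass expression \(m^* = \hbar (\pi n_{2d})^{1/2}/v_{\mathrm{F}}\) mentioned above, here is a minimal sketch.  The numbers plugged in (a carrier density of \(10^{12}\) cm\(^{-2}\) and \(v_{\mathrm{F}} \approx 10^{6}\) m/s) are illustrative typical values, not from any particular talk:

```python
import math

hbar = 1.054571817e-34  # J*s, reduced Planck constant
m_e = 9.1093837e-31     # kg, free electron mass

def graphene_dynamical_mass(n_2d, v_f=1.0e6):
    """Dynamical effective mass m* = hbar * sqrt(pi * n_2d) / v_F.

    n_2d: 2d carrier density in m^-2; v_f: Fermi velocity in m/s.
    """
    return hbar * math.sqrt(math.pi * n_2d) / v_f

# Illustrative density: 1e12 cm^-2 = 1e16 m^-2
m_star = graphene_dynamical_mass(1e16)
print(m_star / m_e)  # roughly 0.02 free electron masses
```

The point of the exercise: even though the carriers are "massless" in the band-structure sense, the density-dependent dynamical mass comes out to a perfectly finite few percent of the free electron mass, and it grows as \(\sqrt{n_{2d}}\).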

Friday, September 25, 2015

DOE Experimental Condensed Matter Physics PI meeting 2015

As they did two years ago, the Basic Energy Sciences part of the US DOE is having a gathering of experimental condensed matter physics investigators at the beginning of next week.  The DOE likes to do this (see here for proceedings of past meetings), with the idea of getting people together to talk about the current and future state of the field and ideally seed some collaborations.  I will try to blog a bit about the meeting, as I did in 2013 (here and here).

Friday, September 18, 2015

CMP and materials in science fiction

Apologies for the slower posting frequency.  Other obligations (grants, research, teaching, service) are significant right now.

I thought it might be fun to survey people for examples of condensed matter and materials physics as they show up in science fiction (and/or comics, which are fantasy more than hard SF).  I don't mean examples where fiction gets science badly wrong or some magic rock acts as a MacGuffin (Infinity Stones, Sankara stones) - I mean cases where materials and their properties are thought-provoking.

A couple of my favorites:
  • scrith, the bizarre material used to construct the Ringworld.  It's some exotic material that has 40% opacity to neutrinos without being insanely dense like degenerate matter.
  • From the same author, shadow square wire, which is an absurdly strong material that also doubles as a high temperature superconductor.  (Science goof in there:  Niven says that this material is also a perfect (!) thermal conductor.  That's incompatible with superconductivity, though - the energy gap that gives you the superconductivity suppresses the electronic contribution to thermal conduction.  Ahh well.)
  • Even better, from the same author, the General Products Hull, a giant single-molecule structure that is transparent in the visible, opaque to all other wavelengths, and where the strength of the atomic bonds is greatly enhanced by a fusion power source.
  • Vibranium, the light, strong metal that somehow can dissipate kinetic energy very efficiently.  (Like many materials in SF, it has whatever properties it needs to for the sake of the plot.  Hard to reconcile the dissipative properties with Captain America's ability to bounce his shield off objects with apparently perfect restitution.)
  • Old school:  cavorite, the H. G. Wells wonder material that blocks the gravitational interaction.
What are your favorite examples?

Friday, September 11, 2015

Amazingly clear results: density gradient ultracentrifugation edition

Ernest Rutherford reportedly said something like, if your experiment needs statistics, you should have done a better experiment.  Sometimes this point is driven home by an experimental technique that gives results that are strikingly clear.  To the right is an example of this, from Zhu et al., Nano Lett. (in press), doi:  10.1021/acs.nanolett.5b03075.  The technique here is called "density gradient ultracentrifugation". 

You know that the earth's atmosphere is denser at ground level, with density decreasing as you go up in altitude.  If you ignore temperature variation in the atmosphere, you get a standard undergraduate statistical physics problem ("the exponential atmosphere") - the gravitational attraction to the earth pulls the air molecules down, but the air has a non-zero temperature (and therefore kinetic energy).  A density gradient develops so that the average gravitational "drift" downward is balanced on average by "diffusion" upward (from high density to low density). 
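The drift-diffusion balance described above gives the standard isothermal result \(n(z) = n(0)\exp(-mgz/k_{\mathrm{B}}T)\).  A minimal sketch, assuming an idealized isothermal atmosphere of pure N2 at 300 K, shows the characteristic "scale height" is about 9 km:

```python
import math

k_B = 1.380649e-23    # J/K, Boltzmann constant
g = 9.81              # m/s^2, gravitational acceleration
amu = 1.66053907e-27  # kg, atomic mass unit

def scale_height(mass_kg, temperature_k):
    """Scale height k_B T / (m g) of an isothermal exponential atmosphere."""
    return k_B * temperature_k / (mass_kg * g)

def density_ratio(z, mass_kg, temperature_k):
    """n(z)/n(0) = exp(-z / scale_height) for the exponential atmosphere."""
    return math.exp(-z / scale_height(mass_kg, temperature_k))

m_n2 = 28.0 * amu                # molecular nitrogen
h = scale_height(m_n2, 300.0)
print(h)                         # ~9 km
print(density_ratio(8848.0, m_n2, 300.0))  # density at Everest's summit
```

Real air isn't isothermal, of course, but the ~9 km scale and the noticeably thin air at high altitude both come out about right from this toy balance.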

The idea of density gradient ultracentrifugation is to work with a solution instead of the atmosphere, and generate a vastly larger effective gravitational force (to produce a much sharper density gradient within the fluid) by using an extremely fast centrifuge.  If there are particles suspended within the solution, they end up sitting at a level in the test tube that corresponds to their average density.  In this particular paper, the particles in question are little suspended bits of hexagonal boron nitride, a quasi-2d material similar in structure to graphite.  The little hBN flakes have been treated with a surfactant to suspend them, and depending on how many layers are in each flake, they each have a different effective density in the fluid.  After appropriate dilution and repeated spinning (41000 RPM for 14 hours for the last step!), you can see clearly separated bands, corresponding to layers of suspension containing hBN flakes of a particular thickness.  This paper is from the Hersam group, and they have a long history with this general technique, especially involving nanotubes.  The results are eye-popping and seem nearly miraculous.  Very cool.

Wednesday, September 09, 2015

The (Intel) Science Talent Search - time to end corporate sponsorship?

When I was a kid, I heard about the Westinghouse Science Talent Search, a national science fair competition that sparked the imaginations of many, many young, would-be scientists and engineers for decades.  I didn't participate in it, but it definitely was inspiring.  As an undergrad, I was fortunate enough to work a couple of summers for Westinghouse's R&D lab, their Science Technology Center outside of Pittsburgh, learning a lot about what engineers and applied physicists actually do.  When I was in grad school, Westinghouse as a major technology corporation basically ceased to exist, and Intel out-bid rival companies for the privilege of supporting and giving their name to the STS.  Now, Intel has decided to drop its sponsorship, for reasons that are completely opaque.  "Intel's interests have changed," says the chair of the administrative board that runs the contest.

While it seems likely that some other corporate sponsor will step forward, I have to ask two questions.  First, why did Intel decide to get out of this?  Seriously, the cost to them has to be completely negligible.  Is there some compelling business reason to drop this, under the assumption that someone else will take up the mantle?  It's a free country, and of course they can do what they like with their name and sponsorship, but this just seems bizarre.  Was this viewed as a burden?  Was there a sense that they didn't get enough effective advertising or business return for their investment?  Did it really consume far more resources than they were comfortable allocating?

Second, why should a company sponsor this?  I ask because the companies with the biggest capital available to act as sponsors are likely to be corporations like Google, Microsoft, and Amazon - companies that don't, as their core mission, actually do physical sciences and engineering research.  Wouldn't it be better to establish a philanthropic entity to run this competition - someone who would not have to worry about business pressures in terms of the financing?   There are a number of excellent, well-endowed foundations whose missions seem to align well with the STS.  There's the Gordon and Betty Moore Foundation, the David and Lucile Packard Foundation, the Alfred P. Sloan Foundation, the W. M. Keck Foundation, the Dreyfus Foundation, and the John D. and Catherine T. MacArthur Foundation, and I'm sure I'm leaving out some possibilities.  I hope someone out there gives serious consideration to endowing the STS, rather than going with another corporate sponsorship deal that may not stand the test of time.

Update:  From the Wired article about this, the STS cost Intel about $6M/yr.  Crudely, that means that an endowment of $120M would be enough to support this activity in perpetuity, assuming 5% payout (typical university investment assumptions, routinely beaten by Harvard and others).
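The endowment arithmetic above is just annual cost divided by payout rate; a one-line sanity check, using the $6M/yr figure from the Wired article and the assumed 5% payout:

```python
annual_cost = 6e6     # $6M per year, from the Wired figure
payout_rate = 0.05    # assumed typical endowment payout rate
endowment = annual_cost / payout_rate
print(endowment)  # 120000000.0, i.e., the $120M figure
```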

Update 2:  I've thought about this some more, and maybe the best solution would be for a university to sponsor this.   For example, this seems tailor-made for MIT, which styles itself as a huge hub of innovation (see the Technology Review, e.g.). Stanford could do it.  Harvard could do $6M a year and not even notice.  It would be perfect as a large-scale outreach/high school education sponsorship effort.  Comments?

Sunday, September 06, 2015

Science and narrative tension

Recently I've come across some good examples of multidisciplinary science communication.  The point of commonality:  narrative tension, in the sense that the science is woven in as part of telling a story.  The viewer/reader really wants to know how the story resolves, and either is willing to deal with the science to get there, or (more impressive, from the communication standpoint) actually wants to see the science behind the plot resolution.

Possibly the best example of the latter is The Martian, by Andy Weir.  If you haven't read it, you should.  A big-budget film based on it is coming out, and while the preview looks very good, the book is going to be better.  Here is an interview with Andy Weir by Adam Savage, and it makes the point that people can actually like math and science as part of the plot.


Another recent example, more documentary-style, is The Mystery of Matter:  Search for the Elements, which aired this past month on PBS in the US.  The three episodes are here, here, and here.  This contains much of the same information as a Nova episode, Hunting the Elements.  It's interesting to contrast the two - some people certainly like the latter's fun, jaunty approach (wherein the host plays the every-person proxy for the audience, saying "Gee whiz!" and asking questions of scientists), while the former has some compelling historical reenactments.  I like the story-telling approach a bit more, but that may be because I'm a sucker for history.  Note:  Nice job by David Muller in Hunting the Elements, using Cornell's fancy TEM to look at the atoms in bronze.

I also heard a good story on NPR this week about Ainissa Ramirez, a materials scientist who has reoriented her career path into "science evangelist".   Her work and passion are very impressive, and she is also a proponent of story-telling as a way to hold interest.  We overlapped almost perfectly in time at Stanford during our doctorates, and I wish we'd met. 

Now to think about the equivalent of The Martian, but where the audience longs to learn more about condensed matter physics and nanoscale science to see how the hero survives.... (kidding.  Mostly.)