A blog about condensed matter and nanoscale physics. Why should high energy and astro folks have all the fun?
Tuesday, December 29, 2015
APS elections - reminder
Sorry for the year-end lull in posting. Work-related writing is taking a lot of my time right now, though I will be posting a few things soon.
In the meantime, a reminder to my APS colleagues: The APS divisional elections are going on right now, ending on January 4. The Division of Condensed Matter Physics and the Division of Materials Physics are both holding elections, and unfortunately there were some problems with the distribution of the electronic ballots, particularly to people with ".edu" email addresses. These issues have been resolved and reminders sent, but if you are a member of DCMP or DMP and have not received your ballot electronically, I urge you to contact the respective secretary/treasurers (linked from the governance sections of the division webpages). (Full disclosure: I'm a candidate for a DCMP "member-at-large" position.)
Friday, December 18, 2015
Big Ideas in Quantum Materials - guest post, anyone?
Earlier this week UCSD played host to what looks like a great conference/workshop, Big Ideas in Quantum Materials. Unfortunately, due to multiple commitments here I was unable to attend. Would anyone who did go like to write a guest post hitting some of the highlights or summarizing the major insights from the meeting? If so, please respond in the comments or contact me via email and we can do this. (I'd rather do this as a post than have someone try to squeeze it into the comments, since the post format is more flexible, including links, etc.)
Wednesday, December 16, 2015
Rice Academy of Fellows
This is a non-physics post, as I struggle to get done many tasks before the break.
Rice is jump-starting a new endowed postdoctoral fellow program (think Harvard Society of Fellows/Berkeley Miller Institute). The first set of fellows is going to be "health-related research" themed, with subsequent cohorts of fellows having different themes. Here is an announcement with additional information, if you or someone you know is interested:
The Rice University Academy of Fellows is accepting applications for its first cohort of scholars through January 11, 2016. Scholars who want to pursue health-related research can find details and apply at http://www.riceacademy.rice.edu. Applicants must have earned their doctoral degree between September 1, 2012 and August 31, 2016, and postdoctoral fellows are expected to begin September 1, 2016. All Rice professors are eligible to host Rice Academy Postdoctoral Fellows.
Joining the Rice University Academy of Fellows is a fantastic opportunity for young scholars. The postdoctoral fellows will join a dynamic intellectual community led by the Rice Academy Faculty Fellows. The standard stipend is $60,000 (the advisor, host department, or some other entity must contribute $20,000 of the stipend plus the corresponding fringe benefits). Rice Academy Postdoctoral Fellows take a concurrent adjunct non-tenure-track faculty appointment.
Wednesday, December 02, 2015
Advanced undergrad labs - survey
To my readers at universities: I am interested in learning more about how other institutions do junior/senior level physics undergrad lab courses. My impression is that there are roughly three approaches:
- Students, with varying degrees of self-guidance, pick some set of predefined experiments that are presumably meant to teach pieces of physics while exposing them to key components of modern research (more serious data acquisition; statistics + error analysis; sophisticated research instrumentation beyond what they would see in a first-year undergrad lab, such as lock-in amplifiers, high speed counters and vetoing, lasers, vacuum systems). Sometimes students work with an instructor to commission a new experiment rather than doing one of the existing set. This approach is what I saw as an undergrad - I remember running into a classmate late at night who had been doing some classic experiment confirming the \(1/r^{2}\) form of the Coulomb force law, and I remember three friends working as a team to commission a dye laser as part of such a project.
- More topically narrow but intense/sophisticated labs. For example, when I was a grad student I was a TA for a dedicated low temperature physics lab, where students chose from a list of experiments, designed some apparatus (!), had the parts machined by the shop (!!), and then actually assembled and ran their experiments over the course of a quarter. It gave students a real sense of serious experimental research in its various phases, but only aimed to expose them to a comparatively narrow slice of modern physics. I've heard of similar lab courses based on optics or atomic physics projects, and entire courses about electronics.
- Some hybrid, where students do a combination of pre-fab experiments and then do a one-semester experimental project actually in an active research group, as part of their lab training and credit.
Tuesday, December 01, 2015
Various items - solids, explanations, education reform, and hiring for impact
I'm behind in a lot of writing of various flavors right now, but here are some items of interest:
- Vassily Lubchenko across town at the University of Houston has written a mammoth review article for Adv. Phys. about how to think about glasses, the glass transition, and crystals. It includes a real discussion of mechanical rigidity as a comparatively universal property - basically "why are solids solid?".
- Randall Munroe of xkcd fame has come out with another book, Thing Explainer, in which he tackles a huge array of difficult science and technology ideas and concepts using only the 1000 most common English words. For a sample, he has an article in this style in The New Yorker in honor of the 100th anniversary of general relativity.
- There was an editorial in the Washington Post on Sunday talking about how to stem the ever-rising costs of US university education. This is a real problem, though I'm concerned that some of the authors' suggestions don't connect to the real world (e.g., if you want online courses to function in a serious, high quality way, that still requires skilled labor, and that labor isn't free).
- Much university hiring is incremental, and therefore doesn't "move the needle" much in terms of departmental rankings, reputation, or resources. There are rare exceptions. Four years ago the University of Chicago launched their Institute for Molecular Engineering, with the intent of creating something like 35 new faculty lines over 15 years. Now Princeton has announced that they are going to hire 10 new faculty lines in computer science. That will increase the size of that one department from 32 to 42 tenure/tenure-track faculty. Wow.
Thursday, November 26, 2015
Anecdote 7: Time travel and the most creative lecture I ever saw
My senior undergrad year, Princeton offered their every-three-years-or-so undergrad general relativity course (AST 301), taught at the time by J. R. Gott III. Prof. Gott ran a pretty fun class, and he was a droll lecturer with a trace Southern accent and a dry sense of humor. He was most well known at the time for solving the equations of general relativity for the case of cosmic strings, sort of 1d analogs of black holes. He'd shown that if you have one cosmic string move past another at speeds approaching the speed of light, you could in principle go back in time.
The lectures were in a small tiered auditorium with the main door in the front, and a back entrance behind the last row. On one Thursday in the middle of the semester, we were sitting there waiting for class to start, when the front door of the auditorium flies open, and in bursts Gott, with (uncharacteristically) messy hair and dressed (unusually) in some kind of t-shirt. He dashed in, ran over to the utility closet in the front of the room, tore it open, and threw in a satchel of some kind before slamming the door. He turned, wild-eyed, to the class, and proclaimed, "Don't be alarmed by anything you may see here today!" before running out the front door.
This was odd. We looked around at each other, rather mystified.
Two minutes later, right at the official start time for class, the back door of the classroom opened, and in stepped a calm, combed Prof. Gott, wearing a dress shirt, tie, and jacket. He announced, "I'm really sorry about this, but my NASA program officer is here today on short notice, and I have to meet with him about my grant. Don't worry, though. I've arranged a substitute lecturer for today, who should be here any minute. I'll see you next Tuesday." He then left by the back door.
Another minute or two goes by. The front door opens again, and in steps a reasonably composed Prof. Gott, again wearing the t-shirt. "Good morning everyone. I'm your substitute lecturer today. I've come back in time from after next Tuesday's lecture to give this class."
This was met with good-natured laughter by the students.
"Unfortunately," he continued, "I didn't have time to prepare any transparencies for today. That's fine, though, because I'll just make them after the lecture, and then go back in time to before the lecture, and leave them somewhere for myself. Ahh - I know! The closet!" He walked over to the closet, opened the door, and retrieved the bag with the slides. There was more laughter and scattered clapping. "Of course," said Prof. Gott, "now that I have these, I don't have to make them, do I. I can just take these slides back to before the start of class." Pause for effect. "So, if I do that, then where did these come from?" More laughter.
Prof. Gott went on to deliver a class about time travel within general relativity (note to self: I need to read this book!).
Postscript: The following Tuesday, Prof. Gott arrived to teach class wearing the t-shirt outfit from the previous Thursday. We were suitably impressed by this attention to detail. As he walked by handing back our homework sets, I noticed that his wristwatch had a calendar on it, and I said that we should've checked that last time. He hesitated, smiled a little grin, and then went on.
Thursday, November 19, 2015
Entanglement + spacetime - can someone clue me in here?
There is a feature article in the current issue of Nature titled "The quantum source of space-time", and I can't decide if I'm just not smart enough to appreciate some brilliant work, or if this is some really bad combination of hype and under-explanation.
The article talks about the AdS-CFT correspondence - the very pretty mathematical insight that sometimes you can take certain complicated "strong coupling" problems (say gravitational problems) in 3d and map them mathematically to simpler (weakly coupled) problems about variables that live on the 2d boundary of that 3d volume. I've mentioned this before as a trendy idea that's being applied to some condensed matter problems, though this is not without criticism.
Anyway, the article then says that there is deep high energy theory work going on looking at what happens if you mess with quantum entanglement of the degrees of freedom on that boundary. The claim appears to be that, in some abstract limit that I confess I don't understand, if you kill entanglement on the boundary, then spacetime itself "falls apart" in the 3d bulk. First question for my readers: Can anyone point to a genuinely readable discussion of this stuff (tensor networks, etc.) for the educated non-expert?
Then things really go off the deep end, with claims that entanglement between particles is equivalent to an Einstein-Rosen wormhole connecting the particles. Now, I'm prepared to believe that maybe there is some wild many-coordinate-transformations way of making the math describing entanglement look like the math describing some wormhole. However, the theorists quoted here say things that sound stronger than that, and that's completely crazy. I can create entangled photons in a lab with a low-power laser and a nonlinear crystal, and there is no way that this is physically equivalent to creating highly curved regions of spacetime and nontrivially altering the topology of spacetime. Can someone explain to me whether the theoretical claims are like the former (there is some formal mathematical similarity between entangled particles and wormholes) or the much more extreme statement?
Tuesday, November 17, 2015
Guide to faculty searches, 2015 edition
As I did four years ago and four years before that, I wanted to re-post my primer on how faculty searches work in physics. I know the old posts are out there and available via google, but I feel like it never hurts to revisit career-related topics at some rate. For added complementary info, here is a link to a Physics Today article from 2001 about this topic.
Here are the steps in the typical faculty search process:
- The search gets authorized. This is a big step - it determines what the position is, exactly: junior vs. senior; a new faculty line vs. a replacement vs. a bridging position (i.e., we'll hire now, and when X retires in three years, we won't look for a replacement then). The main challenges are two-fold: (1) Ideally the department has some strategic plan in place to determine the area that it would like to fill. Note that not all departments do this - occasionally you'll see a very general ad out there that basically says, "ABC University Dept. of Physics is authorized to search for a tenure-track position in, umm, physics. We want to hire the smartest person that we can, regardless of subject area." The challenge with this is that there may actually be divisions within the department about where the position should go, and these divisions can play out in a process where different factions within the department veto each other. This is pretty rare, but not unheard of. (2) The university needs to have the resources in place to make a hire. In tight financial times, this can become more challenging. I know of public universities that had to cancel searches in 2008/2009, even after authorization, when budget cuts became too severe. A well-run university will be able to make these judgments with some lead time and not have to back-track.
- The search committee gets put together. In my dept., the chair asks people to serve. If the search is in condensed matter, for example, there will be several condensed matter people on the committee, as well as representation from the other major groups in the department, and one knowledgeable person from outside the department (in chemistry or ECE, for example). The chairperson or chairpeople of the committee meet with the committee or at least those in the focus area, and come up with draft text for the ad. In cross-departmental searches (sometimes there will be a search in an interdisciplinary area like "energy"), a dean would likely put together the committee.
- The ad gets placed, and canvassing of the many people who might know promising candidates begins. A special effort is made to make sure that all qualified women and underrepresented minority candidates know about the position and are asked to apply (the APS has mailing lists to help with this, and direct recommendations are always appreciated - this is in the search plan). Generally, the ad really does list what the department is interested in. It's a huge waste of everyone's time to run an ad that draws a large number of inappropriate (i.e., not fitting the dept.'s needs) applicants. The exception to this is the generic ad of the type I mentioned above. Back when I was applying for jobs, MIT and Berkeley ran the same ad every year, grazing for talent. They seem to do just fine. The other exception is when a university already knows whom it wants for a senior position, and writes an ad so narrow that only one person is really qualified. I've never seen this personally, but I've heard anecdotes.
- In the meantime, a search plan is formulated and approved by the dean. The plan details how the search will work, what the timeline is, etc. This plan is largely a checklist to make sure that we follow all the right procedures and don't screw anything up. It also brings to the fore the importance of "beating the bushes" - see above. A couple of people on the search committee will be particularly in charge of oversight on affirmative action/equal opportunity issues.
- The dean usually meets with the committee and we go over the plan, including a refresher for everyone on what is or is not appropriate for discussion in an interview (for an obvious example, you can't ask about someone's religion, or their marital status).
- Applications come in. Everyone does this electronically now, which is generally a big time-saver. (Some online systems can be clunky, since occasionally universities try to use the same system to hire faculty as they do to hire groundskeepers, but generally things go smoothly.) Every year when I post this, someone argues that it's ridiculous to make references write letters, and that the committee should do a sort first and ask for letters later. I understand this perspective, but I largely disagree. Letters can contain an enormous amount of information, and sometimes it is possible to identify outstanding candidates due to input from the letters that might otherwise be missed. (For example, suppose someone's got an incredible piece of postdoctoral work about to come out that hasn't been published yet. It carries more weight for letters to highlight this, since the candidate isn't exactly unbiased about their own forthcoming publications.)
- The committee begins to review the applications. Generally the members of the committee who are from the target discipline do a first pass, to at least weed out the inevitable applications from people who are not qualified according to the ad (i.e. no PhD; senior people wanting a senior position even though the ad is explicitly for a junior slot; people with research interests or expertise in the wrong area). Applications are roughly rated by everyone into a top, middle, and bottom category. Each committee member comes up with their own ratings, so there is naturally some variability from person to person. Some people are "harsh graders". Some value high impact publications more than numbers of papers. Others place more of an emphasis on the research plan, the teaching statement, or the rec letters. Yes, people do value the teaching statement - we wouldn't waste everyone's time with it if we didn't care. Interestingly, often (not always) the people who are the strongest researchers also have very good ideas and actually care about teaching. This shouldn't be that surprising. Creative people can want to express their creativity in the classroom as well as the lab. "Type A" organized people often bring that intensity to teaching as well.
- Once all the folders have been reviewed and rated, a relatively short list (say 20-25 or so out of 120 applications) is formed, and the committee meets to winnow that down to, in the end, four or five to invite for interviews. In my experience, this happens by consensus, with the target discipline members having a bit more sway in practice since they know the area and can appreciate subtleties - the feasibility and originality of the proposed research, the calibration of the letter writers (are they first-rate folks? Do they always claim every candidate is the best postdoc they've ever seen?). I'm not kidding about consensus; I can't recall a case where there really was a big, hard argument within a committee on which I've served. I know I've been lucky in this respect, and that other institutions can be much more feisty. The best, meaning most useful, letters, by the way, are the ones that say things like "This candidate is very much like CCC and DDD were at this stage in their careers." Real comparisons like that are much more helpful than "The candidate is bright, creative, and a good communicator." Regarding research plans, the best ones (for me, anyway) give a good sense of near-term plans, medium-term ideas, and the long-term big picture, all while being relatively brief and written so that a general committee member can understand much of it (why the work is important, what is new) without being an expert in the target field. It's also good to know that, at least at my university, if we come across an applicant who doesn't really fit our needs, but meshes well with an open search in another department, we send over the file. This, like the consensus stuff above, is a benefit of good, nonpathological communication within the department and between departments.
Tips for candidates:
- Don't wrap your self-worth up in this any more than is unavoidable. It's a game of small numbers, and who gets interviewed where can easily be dominated by factors extrinsic to the candidates - what a department's pressing needs are, what the demographics of a subdiscipline are like, etc. Every candidate takes job searches personally to some degree because of our culture and human nature, but don't feel like this is some evaluation of you as a human being.
- Don't automatically limit your job search because of geography unless you have some overwhelming personal reasons. I almost didn't apply to Rice because neither my wife nor I were particularly thrilled about Texas, despite the fact that neither of us had ever actually visited the place. Limiting my search that way would've been a really poor decision - I've now been here 15+ years, and we've enjoyed ourselves (my occasional Texas-centric blog posts aside).
- Really read the ads carefully and make sure that you don't leave anything out. If a place asks for a teaching statement, put some real thought into what you say - they want to see that you have actually given this some thought, or they wouldn't have asked for it.
- Research statements are challenging because you need to appeal to both the specialists on the committee and the people who are way outside your area. My own research statement back in the day was around three pages. If you want to write a lot more, I recommend having a brief (2-3 page) summary at the beginning followed by more details for the specialists. It's good to identify near-term, mid-range, and long-term goals - you need to think about those timescales anyway. Don't get bogged down in specific technique details unless they're essential. You need committee members to come away from the proposal knowing "These are the Scientific Questions I'm trying to answer", not just "These are the kinds of techniques I know". I know that some people may think that research statements are more of an issue for experimentalists, since the statements indicate a lot about lab and equipment needs. Believe me - research statements are important for all candidates. Committee members need to know where you're coming from and what you want to do - what kinds of problems interest you and why. The committee also wants to see that you actually plan ahead. These days it's extremely hard to be successful in academia by "winging it" in terms of your research program.
- Be realistic about what undergrads, grad students, and postdocs are each capable of doing. If you're applying for a job at a four-year college, don't propose to do work that would require $1M in startup and an experienced grad student putting in 60 hours a week.
- Even if they don't ask for it, you need to think about what resources you'll need to accomplish your research goals. This includes equipment for your lab as well as space and shared facilities. Talk to colleagues and get a sense of what the going rate is for start-up in your area. Remember that four-year colleges do not have the resources of major research universities. Start-up packages at a four-year college are likely to be 1/4 of what they would be at a big research school (though there are occasional exceptions). Don't shave pennies - this is the one prime chance you get to ask for stuff! On the other hand, don't make unreasonable requests. No one is going to give a junior person a start-up package comparable to that of a mid-career scientist.
- Pick letter-writers intelligently. Actually check with them that they're willing to write you a nice letter - it's polite and it's common sense. (I should point out that truly negative letters are very rare.) Beyond the obvious two (thesis advisor, postdoctoral mentor), it can sometimes be tough finding an additional person who can really say something about your research or teaching abilities. Sometimes you can ask those two for advice about this. Make sure your letter-writers know the deadlines and the addresses. The more you can do to make life easier for your letter writers, the better.
Monday, November 09, 2015
Aliens, quantum consciousness, and the like
I've seen a number of interesting news items lately. Here are some that you may have missed.
- You've probably heard about the recent observations of the star KIC 8462852. This star, over 1000 ly away, was observed by the Kepler planet-finding mission looking for transit signatures of extrasolar planets. Short version: It seems likely that some very weird objects are in orbit around the star, occasionally occulting large percentages (like more than 20%) of the star's light, far more light blocking and much less periodicity than is typically seen in the other 150,000 stars observed by Kepler. On the one hand, it was suggested that one exotic (though of course unlikely) explanation for this unique phenomenon is megastructures built by an alien civilization. This was freely admitted to be a long-shot, but generated a lot of attention and excitement. Now there has been a followup, where observers have pointed the Allen array at the star, and they have looked from 1-3 GHz for unusual radio emissions, finding nothing. This has generally been treated in the press as if it kills the aliens explanation. Actually, if you read the paper, we'd only be able to detect such emissions (assuming they aren't beamed right at us) if the system was putting out something like a petawatt (\(10^{15}\) watts) in that frequency range. The most likely explanation of the Kepler observations is still some natural phenomenon, but the lack of detectable radio signals is hardly conclusive evidence for that.
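To see why the detection threshold is so enormous, it helps to do the back-of-the-envelope flux estimate. Here's a minimal sketch, assuming an isotropic (non-beamed) transmitter at roughly 1000 ly and, as a simplification of my own, the petawatt spread uniformly over the 1-3 GHz search band:

```python
import math

P = 1e15                      # transmitter power in W (the quoted petawatt threshold)
d = 1000 * 9.4607e15          # distance to KIC 8462852 in m (~1000 light years)
bandwidth = 2e9               # the 1-3 GHz search band, in Hz

# An isotropic emitter's power spreads over a sphere of area 4*pi*d^2
flux = P / (4 * math.pi * d**2)            # W / m^2 at Earth
flux_density = flux / bandwidth            # W / m^2 / Hz
flux_density_jy = flux_density / 1e-26     # in janskys (1 Jy = 1e-26 W m^-2 Hz^-1)

print(f"flux at Earth: {flux:.2e} W/m^2")
print(f"band-averaged flux density: {flux_density_jy:.2e} Jy")
```

Even a petawatt arrives as only ~\(10^{-24}\) W/m\(^2\); a narrowband beacon would concentrate that into far fewer channels, which is why SETI searches can still say anything at all at these distances.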
- I was previously unaware that the Institute for Quantum Information and Matter at Caltech had a really nice blog, called Quantum Frontiers. There, I learned from John Preskill that Matthew Fisher has been making serious suggestions that quantum information physics may be relevant to biological systems and neuroscience. It's important to look hard at these ideas, but I have to admit my own deep-rooted skepticism that either (1) entangled or superposition states survive in biological environments long enough to play a central role in neurological effects; or (2) that there are biological mechanisms for performing quantum information operations on such states. While nuclear spins are a comparatively isolated quantum system, it's very hard for me to see how they could be entangled and manipulated in some serious way biologically.
- You may need to sit down for this. We probably have not found evidence of parallel universes.
- Nature had a nice Halloween article about six "zombie" ideas in physics that are so-named because they either refuse to die or are "undead". I've talked about the Big G problem before.
Tuesday, November 03, 2015
Anecdote 6: The Monkey Problem
I should've written about this as a Halloween physics scary story, but better late than never.... I'll try to tell this so that both physicists and non-technical readers can get some appreciation for the horror that was The Monkey Problem [dramatic chord!].
Back in my first year of grad school, along with about 20 of my cohort, I was taking the graduate-level course on classical mechanics at Stanford. How hard can mechanics really get, assuming you don't try to do complicated chaos and nonlinear dynamics stuff? I mean, it's just basic stuff - blocks sliding down inclined planes, spinning tops, etc., right? Right?
The course was taught by Michael Peskin, a brilliant and friendly person who was unfailingly polite ("Please excuse me!") while demonstrating how much more physics he knew than us. Prof. Peskin clearly enjoys teaching and is very good at it, though he does have a tendency toward rapid-fire, repeated changes of variables ("Instead of working in terms of \(x\) and \(y\), let's do a transformation and work in terms of \(\xi(x,y)\) and \(\zeta(x,y)\), and then do a transformation and work in terms of their conjugate momenta, \(p_{\xi}\) and \(p_{\zeta}\).") and some unfortunate choices of notation ("Let the initial and final momenta of the particles be \(p\), \(p'\), \(\bar{p}\), and \(\bar{p}'\), respectively."). For the final exam in the class, a no-time-limit (except for the end of exam period) take-home, Prof. Peskin assigned what has since become known among its victims as The Monkey Problem.
For non-specialists, let me explain a bit about rotational motion. You can skip this paragraph if you want, but if you're not a physicist, the horror of what is to come might not come across as well. There are a number of situations in mechanics where we care about extended objects that are rotating. For example, you may want to be able to describe and understand a ball rolling down a ramp, or a flywheel in some machine. The standard situation that crops up in high school and college physics courses is the "rigid body", where you know the axis of rotation, and you know how mass is distributed around that axis. The "rigid" part means that the way the mass is distributed is not changing with time. If no forces are acting to "spin up" or "spin down" the body (no torques), then we say its "angular momentum" \(\mathbf{L}\) is constant. In this simple case, \(\mathbf{L}\) is proportional to the rate at which the object is spinning, \(\mathbf{\omega}\), through the relationship \(\mathbf{L} = \tilde{\mathbf{I}}\cdot \mathbf{\omega}\). Here \(\tilde{\mathbf{I}}\) is called the "inertia tensor" or for simple situations the "moment of inertia", and it is determined by how the mass of the body is arranged around the axis of rotation. If the mass is far from the axis, \(I\) is large; if the mass is close to the axis, \(I\) is small. Sometimes even if we relax the "rigid" constraint things can be simple. For example, when a figure skater pulls in his/her arms (see figure, from here), this reduces \(I\) about the rotational axis, meaning that \(\omega\) must increase to preserve \(L\).
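The skater example above is just conservation of \(L\) in action; here is a minimal numerical sketch, with invented (illustrative) values for the moments of inertia and spin rate:

```python
# Hedged sketch: conservation of angular momentum for a skater pulling in
# their arms. All numbers are illustrative, not from the post.
I_initial = 4.0      # moment of inertia with arms out, kg m^2 (assumed)
I_final = 2.0        # moment of inertia with arms pulled in, kg m^2 (assumed)
omega_initial = 2.0  # initial spin rate, rad/s (assumed)

# With no external torque, L = I * omega is constant:
L = I_initial * omega_initial
omega_final = L / I_final

print(omega_final)  # 4.0 rad/s -- halving I doubles omega
```

Halving the moment of inertia doubles the spin rate, exactly as with the skater.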
Prof. Peskin posed the following problem:
When you look at the problem as a student, you realize a couple of things. First, you break out in a cold sweat, because this is a non-rigid body problem. That is, the inertia tensor of the (cage+monkey) varies with time, \(\tilde{\mathbf{I}}= \tilde{\mathbf{I}}(t)\), and you are expected to come up with \(\mathbf{\omega}(t)\). However, you realize that there are signs of hope:
1) Thank goodness there is no gravity or other force in the problem, so \(\mathbf{L}\) is constant. That means all you have to do for part (a) is solve \(\mathbf{L} = \tilde{\mathbf{I}}(t)\cdot \mathbf{\omega}(t)\) for \(\mathbf{\omega}(t)\).
2) The monkey at least moves "slowly", so you don't have to worry about the possibility that the monkey moves very far during one spin of the cage. Physicists like this kind of simplification - basically making this a quasi-static problem.
3) Initially the system is rotationally symmetric about \(z\), so \(\mathbf{L}\) and \(\mathbf{\omega}\) both point along \(z\). That's at least simple.
4) Moreover, at the end of the problem, the system is again rotationally symmetric about \(z\), meaning that \(\mathbf{\omega}\) at the conclusion of the problem just has to be the same as \(\mathbf{\omega}\) at the beginning.
Unfortunately, that's about it. The situation isn't so bad while the monkey is crawling along the disk. However, once the monkey reaches the edge of the disk and starts climbing toward the north pole of the cage, the problem becomes very very messy. The plane of the disk starts to tilt as the monkey climbs. The whole system looks a lot like it's tumbling. While \(\mathbf{L}\) stays pointed along \(z\), the angular velocity \(\mathbf{\omega}\) moves all over the place. You end up with differential equations describing everything that can only be solved numerically on a computer.
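The quasi-static step described above, solving \(\mathbf{L} = \tilde{\mathbf{I}}(t)\cdot \mathbf{\omega}(t)\) instant by instant, is easy to sketch numerically. The inertia tensor below is a toy I invented for illustration; it is emphatically not the actual (cage + monkey) tensor from the exam:

```python
import numpy as np

# Toy quasi-static solve: omega(t) = I(t)^{-1} . L, with L held fixed along z.
# The tensor below is invented for illustration -- NOT the real monkey problem.
L = np.array([0.0, 0.0, 1.0])  # angular momentum, fixed along z

def inertia(t):
    # A symmetric, time-varying tensor whose off-diagonal terms grow as a
    # "monkey" moves off-axis (purely illustrative).
    eps = 0.3 * t
    return np.array([[1.0, 0.0, eps],
                     [0.0, 1.0, 0.0],
                     [eps, 0.0, 2.0]])

for t in (0.0, 0.5, 1.0):
    omega = np.linalg.solve(inertia(t), L)
    print(t, omega)
```

Even in this toy, \(\mathbf{\omega}\) develops components off the \(z\) axis as the off-diagonal terms grow, mirroring how the real problem's angular velocity wanders while \(\mathbf{L}\) stays pinned along \(z\).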
This is a good example of a problem that turned out a wee bit harder than the professor intended. Frustration among the students was high. At the time, I had two apartment-mates who were also in the class. We couldn't discuss it during exam week because of the honor code. We'd see each other on the way to brush teeth or eat dinner, and an unspoken "how's it going?" would be met with an unspoken "ughh." Supposedly, Prof. Peskin polled the SLAC theory group for their answers. (My answer was apparently at least the same as Prof. Peskin's, so that's something.) Partial credit was generous.
There was also a nice sarcastic response on the part of the students, in the form of "alternate solutions". Examples included "Heisenberg's Monkey" (the uncertainty principle makes it impossible to know \(\mathbf{I}\)....); "Feynman's Monkey" (the monkey picks the lock on the cage and escapes); and the "Gravity Probe B Monkey" (I need 30 years and $143M to come to an approximate solution).
Now I feel like I've done my cultural duty, spreading the tale of the mechanics problem so hard that we all remember it 22+ years later. Professors, take this as a cautionary tale of how easy it can be to write really difficult problems. To my students, at least I've never done anything like this to you on an exam....
Thursday, October 29, 2015
Phase transitions: Hiding in plain sight
Phase transitions are amazing things. As some knob is turned (temperature, pressure, magnetic field), a physical system undergoes a sudden, seemingly discontinuous change in material properties. This doesn't happen when looking only at one or two of the constituents of the system - only when we have a big ensemble. In our daily experience, we are used to the control parameter being temperature, and we take particular notice of phases that have dramatically different, obvious-to-the-naked-eye properties. Solids and liquids have completely different responses to shear (or rather, liquids lack rigidity). Liquids and gases have vastly different densities.
It turns out that there are many more phases out there, distinguished in ways that are more subtle and harder to see. Gadolinium is a magnet below room temperature, and a nonmagnetic metal above room temperature, but to the naked eye looks the same in both phases. We only know that the transition is there because it has measurable consequences (e.g., you could actually see magnetic forces from a hunk of cold gadolinium).
This week, there was some media attention paid to work from David Hsieh's group at Caltech, where they discovered an example of a particularly subtle transition in strontium iridate (Sr2IrO4). In that stuff, similar in some ways to the copper oxide superconductors (based on CuO4 motifs), there are unpaired electrons (and therefore unpaired spins) on the iridium atoms. Below a critical temperature (near 200 K, or about -70 Celsius), these spins somehow spontaneously arrange themselves in a subtle way that picks out special directions in the crystal lattice and breaks mirror symmetry, but is not some comparatively well-known kind of magnetic ordering. They are only able to identify this weird "hidden" ordered phase via a particular optical measurement, since the broken symmetry of the ordered state "turns on" some optical processes that would otherwise be forbidden. The interest in this system stems from whether similar things may be happening in the cuprate superconductors, and whether the iridates could be tweaked and manipulated like the cuprates. Neat stuff, and a reminder that sometimes nature can be subtle: There can be a major change in symmetry properties (i.e., above the transition temperature, the x and y directions in the crystal are equivalent, but below the transition they aren't anymore) that shows up spontaneously, but is still hard to detect.
Tuesday, October 27, 2015
No, dark matter probably did not do in the dinosaurs.
Resurfacing briefly in the midst of proposal writing, I feel compelled to comment on recent media coverage of Lisa Randall's new book. The science story is an interesting one, as recounted here. In brief: There is some evidence of periodicity in mass extinctions due to impacts, though there are at least three hidden assumptions even in that statement. One possible source of periodicity could be associated with the motion of the solar system (and by extension the earth) in and out of the galactic plane as the sun orbits the center of mass of the Milky Way. Lisa Randall and Matthew Reece argued that passage through a comparatively thin disk of dark matter on the galactic plane could produce gravitational perturbations that lead to Oort Cloud comets getting dinked toward the inner solar system. A neat idea, though pretty hard to test except through indirect means.
So, could this have happened? Sure. Is it likely? Well, that's much trickier to evaluate. There are plenty of other sources of gravitational perturbations (e.g., the passage of nearby stars) that we know for sure take place. There is no strong observational evidence right now that the would-be disk of dark matter exists, let alone whether it has the properties needed to provide a significant uptick in cometary impacts. Lisa Randall is undoubtedly a gifted writer, and there is real science here (witness the published paper that discusses ways to test the idea), but the breathless media reaction is somewhat disappointing.
Sunday, October 18, 2015
STEM education and equality of opportunity
Friday evening I went to the Katy Independent School District's Robert R. Shaw Center for STEAM, where they were having a Science Movie Night (pdf). The science and technology policy part of Rice's Baker Institute for Public Policy had put the organizers in touch with me. On a night when there were two (!) homecoming high school football games next door, the movie night drew about 80 highly engaged students. After the film, they stayed and we talked about the science of the film (what it got right and what they fudged) for another half an hour. It was a great time.
The Shaw Center is amazing - it's a fantastic space, something like 10,000 sq ft of reconfigurable maker-space with a shop and immersive labs, and it provides a home to more than one of Katy ISD's robotics teams. Frankly, this place rivals or exceeds the available undergrad lab facilities at many universities. Katy is a reasonably affluent suburb of Houston, and I was floored to learn that this great science/engineering facility was built with district money, not donations or corporate sponsorship/underwriting. This is a case where public school funding has been deliberately and consciously dedicated to providing a district-wide resource for hands-on science and engineering learning.
In a study in contrasts, my sons and I then volunteered Saturday morning at the Teachers Aid facility run by the Houston Food Bank. At the Teachers Aid facility, teachers from qualifying schools (where 70+% of the enrollment is sufficiently low-income that they qualify for free lunches) can arrive, by appointment, and pick up basic school supplies (pencils, pens, notebooks) for their students. In three hours we helped about 70 teachers who serve more than 3000 students. These are teachers who chose to come in on their own time, to get basic supplies that neither their schools nor the students themselves can afford.
It's appalling to see the divergence in basic educational opportunities between the more affluent school districts and the economically disadvantaged. We have to do better. Making sure that children, regardless of their background, have access to a good education should be a guiding principle of our society, not something viewed as pie-in-the-sky or politically tainted. It amazes me that some people seem to disagree with this.
Tuesday, October 13, 2015
Several topics: The Nobel, self-organization, and Marcy
I'm heavily into some writing obligations right now, but I wanted to point out a few things:
- Obviously this year's Nobel in physics was for neutrino oscillations. ZZ has thoughtfully provided two nice postings (here and here) with relevant links for further reading. I remember the solar neutrino problem vividly from undergrad days, including one professor semi-jokingly speculating that perhaps the lack of expected electron neutrinos was because the sun had gone out. The whole particles-oscillating-into-new-types business, famous in physics circles from the K mesons, is tough to explain to a general audience. You could probably come up with some analog description involving polarized light....
- Here is a neat article about a record Chinese traffic jam, and it includes links to papers about self-organization. I'm happy to see this kind of article - I think there is real value in pointing out to people that there can be emergent, organized states that result from the aggregate of simple rules. That's the heart of condensed matter and statistical physics.
- This week buzzfeed helped break the story that Geoff Marcy, an astronomer famed for his role in the discovery of extrasolar planets and mentioned as a likely Nobel candidate, was found guilty of multiple violations of Berkeley's sexual misconduct policy. Note to those who haven't followed this: This is the result of a long investigation - it's not hearsay, it's essentially a conviction via a long university investigative process. Marcy's apology letter is here. Apparently this bad behavior had been tolerated for many years. Berkeley's response has been, shall we say, tepid. Despite a clear finding of years of inappropriate behavior involving students 1/3 his age, the response is "do this any more, and you'll be dismissed". The initial response from the department head had an inexcusably awful last paragraph that implied that this whole process was hardest on Marcy, rather than on the victims. This is terrible. People, including departmental colleagues, are calling for Marcy to step down. Bottom line: There is simply no excuse for this kind of behavior, and actual sanctions must be applied when people are found through due process to have violated this level of basic social decency.
Tuesday, October 06, 2015
Table-top electron microscopes
A quick question in the hopes that some people in this blog's readership have direct experience: Anyone work with a table-top scanning electron microscope (SEM) that they really like? Any rough idea of the cost and operations challenges (e.g., having to replace tungsten filaments all the time)? I was chatting with someone about educational opportunities that something like these would present, and I was curious about the numbers without wanting to email vendors or anything that formal. Thanks for any information.
(Note: It would really be fun to try to develop a really low-budget SEM - the electron microscopy version of this. On the one hand, you could imagine microfabricated field emitters and the amazing cheapness of CCDs could help. However, the need for good vacuum and some means of beam focusing and steering makes this much more difficult. Clearly a good undergrad design project....)
Sunday, October 04, 2015
Annual Nobel speculation
It's getting to be that time of year again. The 2015 Nobel Prize in Physics will be announced this coming Tuesday morning (EU/US time). Based on past patterns, it looks like it could well be an astro prize. Dark matter/galaxy rotation curves anyone? Or extrasolar planets? (I still like Aharonov + Berry for geometric phases, perhaps with Thouless as well. However, it's unlikely that condensed matter will come around this year.)
On Wednesday, the chemistry prize will be awarded. There, I have no clue. Curious Wavefunction has a great write-up that you should read, though.
Speculate away!
Wednesday, September 30, 2015
DOE Experimental Condensed Matter Physics PI Meeting 2015 - Day 3
Things I learned from the last (half)day of the DOE PI meeting:
- "vortex explosion" would be a good name for a 1980s metal band.
- Pulsed high fields make possible some really amazing measurements in both high \(T_{\mathrm{C}}\) materials and more exotic things like SmB6.
- Looking at structural defects (twinning) and magnetic structural issues (spin orientation domain walls) can give insights into complicated issues in pnictide superconductors.
- Excitons can be a nice system for looking at coherence phenomena ordinarily seen in cold atom systems. See here and here. Theory proposes that you could play with these at room temperature with the right material system.
- Thermal gradients can drive spin currents even in insulating paramagnets, and these can be measured with techniques that could be performed down to small length scales.
- Very general symmetry considerations when discussing competing ordered states (superconductivity, charge density wave order, spin density wave order) can lead to testable predictions.
- Hybrid, monocrystalline nanoparticles combining metals and semiconductors are very pretty and can let you drive physical processes based on the properties of both material systems.
Tuesday, September 29, 2015
DOE Experimental Condensed Matter Physics PI Meeting 2015 - Day 2
Among the things I learned at the second day of the meeting:
- In relatively wide quantum wells, and high fields, you can enter the quantum Hall insulating state. Using microwave measurements, you can see signatures of phase transitions within the insulating state - there are different flavors of insulator in there. See here.
- As I'd alluded to a while ago, you can make "artificial" quantum systems with graphene-like energetic properties (for example).
- In 2d hole gases at the interface between Ge and overlying SiGe, you can get really huge anisotropy of the electrical resistivity in magnetic fields, with the "hard" axis along the direction of the in-plane magnetic field.
- In single-layer thick InN quantum wells with GaN above and below, you can have a situation where there is basically zero magnetoresistance. That's really weird.
- In clever tunneling spectroscopy experiments (technique here) on 2d hole gases, you can see sharp inelastic features that look like inelastic excitation of phonons.
- Tunneling measurements through individual magnetic nanoparticles can show spin-orbit-coupling-induced level spacings, and cranking up the voltage bias can permit spin processes that are otherwise blockaded. See here.
- Niobium islands on a gold film are a great tunable system for studying the motion of vortices in superconductors, and even though the field is a mature one, new and surprising insights come out when you have a clean, controlled system and measurement techniques.
- Scanning Josephson microscopy (requiring a superconducting STM tip, a superconducting sample, and great temperature and positional control) is going to be very powerful for examining the superconducting order parameter on atomic scales.
- In magnetoelectric systems (e.g., ferroelectrics coupled to magnetic materials), combinations of nonlinear optics and electronic measurements are required to unravel which of the various possible mechanisms (charge vs strain mediated) generates the magnetoelectric coupling.
- Strongly coupling light in a cavity with Rydberg atoms should be a great technique for generating many body physics for photons (e.g., the equivalent of quantum Hall).
- Carbon nanotube devices can be great systems for looking at quantum phase transitions and quantum critical scaling, in certain cases.
- Controlling vortex pinning and creep is hugely important in practical superconductors. Arrays of ferromagnetic particles as in artificial spin ice systems can control and manipulate vortices. Thermal fluctuations in high temperature superconductors could end up limiting performance badly, even if the transition temperature is at room temperature or more, and the situation is worse if the material is more anisotropic in terms of effective mass.
- "Oxides are like people; it is their defects that make them interesting."
Monday, September 28, 2015
DOE Experimental Condensed Matter Physics PI meeting 2015 - Day 1
Things I learned at today's session of the DOE ECMP PI meeting:
- In the right not-too-thick, not-too-thin layers of the 3d topological insulator Bi1.5Sb0.5Te1.7Se1.3 (a cousin of Bi2Se3 that actually is reasonably insulating in the bulk), it is possible to use top and bottom gates to control the surface states on the upper and lower faces, independently. See here.
- In playing with suspended structures of different stackings of a few layers of graphene, you can get some dramatic effects, like the appearance of large, sharp energy gaps. See here.
- While carriers in graphene act in some ways like massless particles because their band energy depends linearly on their crystal momentum (like photon energy depends linearly on photon momentum in electrodynamics), they have a "dynamical" effective mass, \(m^* = \hbar (\pi n_{2d})^{1/2}/v_{\mathrm{F}}\), related to how the electronic states respond to an electric bias.
- PdCoO2 is a weird layered metal that can be made amazingly clean, so that its residual resistivity can be as small as 8 n\(\Omega\)-cm. That's about 200 times smaller than the room temperature resistivity of gold or copper.
- By looking at how anisotropic the electrical resistivity is as a function of direction in the plane of layered materials, and how that anisotropy can vary with applied strain, you can define a "nematic susceptibility". That susceptibility implies the existence of fluctuations in the anisotropy of the electronic properties (nematic fluctuations). Those fluctuations seem to diverge at the structural phase transition in the iron pnictide superconductors. See here. Generically, these kinds of fluctuations seem to boost the transition temperature of superconductors.
- YPtBi is a really bizarre material - nonmetallic temperature dependence, high resistivity, small carrier density, yet superconducts.
- Skyrmions (see here) can be nucleated in controlled ways in the right material systems. Using the spin Hall effect, they can be pushed around. They can also be moved by thermally driven spin currents, and interestingly skyrmions tend to flow from the cold side of a sample to the hot side.
- It's possible to pump angular momentum from an insulating ferromagnet, through an insulating antiferromagnet (NiO), and into a metal. See here.
- The APS Conferences for Undergraduate Women in Physics have been a big hit, using attendance as a metric. Extrapolating, in a couple of years it looks like nearly all of the undergraduate women majoring in physics in the US will likely be attending one of these.
- Making truly nanoscale clusters out of some materials (e.g., Co2Si, Mn5Si3) can turn them from weak ferromagnets or antiferromagnets in the bulk into strong ferromagnets in nanoparticle form. See here.
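The graphene "dynamical" effective mass formula quoted a few bullets up, \(m^* = \hbar (\pi n_{2d})^{1/2}/v_{\mathrm{F}}\), is easy to evaluate directly. A quick sketch, with an assumed (but typical) carrier density not taken from the talks:

```python
import math

# Evaluating m* = hbar * sqrt(pi * n_2d) / v_F for graphene.
# The carrier density is an assumed typical value, not from the meeting.
hbar = 1.054571817e-34   # J s
v_F = 1.0e6              # graphene Fermi velocity, m/s (standard value)
n_2d = 1.0e16            # carriers per m^2, i.e. 1e12 per cm^2 (assumed)
m_e = 9.1093837e-31      # electron rest mass, kg

m_star = hbar * math.sqrt(math.pi * n_2d) / v_F
print(m_star / m_e)      # ~0.02 electron masses
```

At this density the dynamical mass comes out around two percent of the free electron mass, and it grows as \(\sqrt{n_{2d}}\) with gating.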
Friday, September 25, 2015
DOE Experimental Condensed Matter Physics PI meeting 2015
As they did two years ago, the Basic Energy Sciences part of the US DOE is having a gathering of experimental condensed matter physics investigators at the beginning of next week. The DOE likes to do this (see here for proceedings of past meetings), with the idea of getting people together to talk about the current and future state of the field and ideally seed some collaborations. I will try to blog a bit about the meeting, as I did in 2013 (here and here).
Friday, September 18, 2015
CMP and materials in science fiction
Apologies for the slower posting frequency. Other obligations (grants, research, teaching, service) are significant right now.
I thought it might be fun to survey people for examples of condensed matter and materials physics as they show up in science fiction (and/or comics, which are fantasy more than hard SF). I don't mean examples where fiction gets science badly wrong or some magic rock acts as a macguffin (Infinity Stones, Sankara stones) - I mean cases where materials and their properties are thought-provoking.
A couple of my favorites:
- scrith, the bizarre material used to construct the Ringworld. It's some exotic material that has 40% opacity to neutrinos without being insanely dense like degenerate matter.
- From the same author, shadow square wire, which is an absurdly strong material that also doubles as a high temperature superconductor. (Science goof in there: Niven says that this material is also a perfect (!) thermal conductor. That's incompatible with superconductivity, though - the energy gap that gives you the superconductivity suppresses the electronic contribution to thermal conduction. Ahh well.)
- Even better, from the same author, the General Products Hull, a giant single-molecule structure that is transparent in the visible, opaque to all other wavelengths, and where the strength of the atomic bonds is greatly enhanced by a fusion power source.
- Vibranium, the light, strong metal that somehow can dissipate kinetic energy very efficiently. (Like many materials in SF, it has whatever properties it needs to for the sake of the plot. Hard to reconcile the dissipative properties with Captain America's ability to bounce his shield off objects with apparently perfect restitution.)
- Old school: cavorite, the H. G. Wells wonder material that blocks the gravitational interaction.
Friday, September 11, 2015
Amazingly clear results: density gradient ultracentrifugation edition
Ernest Rutherford reportedly said something like, if your experiment needs statistics, you should have done a better experiment. Sometimes this point is driven home by an experimental technique that gives results that are strikingly clear. To the right is an example of this, from Zhu et al., Nano Lett. (in press), doi: 10.1021/acs.nanolett.5b03075. The technique here is called "density gradient ultracentrifugation".
You know that the earth's atmosphere is denser at ground level, with density decreasing as you go up in altitude. If you ignore temperature variation in the atmosphere, you get a standard undergraduate statistical physics problem ("the exponential atmosphere") - the gravitational attraction to the earth pulls the air molecules down, but the air has a non-zero temperature (and therefore kinetic energy). A density gradient develops so that the average gravitational "drift" downward is balanced on average by "diffusion" upward (from high density to low density).
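The drift/diffusion balance described above gives the familiar barometric law, with the density falling off exponentially over a characteristic scale height kT/(mg). A quick sketch of that number for air, treated as pure N2 at room temperature (the parameter values are illustrative):

```python
import math

# Barometric ("exponential atmosphere") scale height: h = k_B * T / (m * g)
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # room temperature, K
m = 28 * 1.66054e-27    # mass of an N2 molecule, kg
g = 9.81                # gravitational acceleration, m/s^2

scale_height = k_B * T / (m * g)   # height over which density falls by 1/e
print(f"Scale height: {scale_height / 1000:.1f} km")
```

The answer comes out around 9 km, which is why the air gets noticeably thin at the top of tall mountains.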
The idea of density gradient ultracentrifugation is to work with a solution instead of the atmosphere, and generate a vastly larger effective gravitational force (to produce a much sharper density gradient within the fluid) by using an extremely fast centrifuge. If there are particles suspended within the solution, they end up sitting at a level in the test tube that corresponds to their average density. In this particular paper, the particles in question are little suspended bits of hexagonal boron nitride, a quasi-2d material similar in structure to graphite. The little hBN flakes have been treated with a surfactant to suspend them, and depending on how many layers are in each flake, they each have a different effective density in the fluid. After appropriate dilution and repeated spinning (41000 RPM for 14 hours for the last step!), you can see clearly separated bands, corresponding to layers of suspension containing particular thickness hBN flakes. This paper is from the Hersam group, and they have a long history with this general technique, especially involving nanotubes. The results are eye-popping and seem nearly miraculous. Very cool.
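To get a feel for just how "ultra" the centrifugation is, here is a rough estimate of the effective acceleration at 41000 RPM. The rotor radius below is a made-up but typical value, not a number from the paper:

```python
import math

rpm = 41000
r = 0.07  # assumed rotor radius in meters (illustrative guess)

omega = rpm * 2 * math.pi / 60    # angular speed, rad/s
a = omega**2 * r                  # centripetal acceleration, m/s^2
g_factor = a / 9.81               # in units of Earth's gravity
print(f"Effective acceleration: {g_factor:,.0f} g")
```

That works out to over a hundred thousand times Earth's gravity, which is what compresses the density gradient sharply enough to resolve flakes differing by a single layer.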
Wednesday, September 09, 2015
The (Intel) Science Talent Search - time to end corporate sponsorship?
When I was a kid, I heard about the Westinghouse Science Talent Search, a national science fair competition that sparked the imaginations of many, many young, would-be scientists and engineers for decades. I didn't participate in it, but it definitely was inspiring. As an undergrad, I was fortunate enough to work a couple of summers for Westinghouse's R&D lab, their Science Technology Center outside of Pittsburgh, learning a lot about what engineers and applied physicists actually do. When I was in grad school, Westinghouse as a major technology corporation basically ceased to exist, and Intel out-bid rival companies for the privilege of supporting and giving their name to the STS. Now, Intel has decided to drop its sponsorship, for reasons that are completely opaque. "Intel's interests have changed," says the chair of the administrative board that runs the contest.
While it seems likely that some other corporate sponsor will step forward, I have to ask two questions. First, why did Intel decide to get out of this? Seriously, the cost to them has to be completely negligible. Is there some compelling business reason to drop this, under the assumption that someone else will take up the mantle? It's a free country, and of course they can do what they like with their name and sponsorship, but this just seems bizarre. Was this viewed as a burden? Was there a sense that they didn't get enough effective advertising or business return for their investment? Did it really consume far more resources than they were comfortable allocating?
Second, why should a company sponsor this? I ask this as it seems likely that the companies with the biggest capital available to act as sponsors will be corporations like Google, Microsoft, Amazon - companies that don't, as their core mission, actually do physical sciences and engineering research. Wouldn't it be better to establish a philanthropic entity to run this competition - someone who would not have to worry about business pressures in terms of the financing? There are a number of excellent, well-endowed foundations whose missions seem to align well with the STS. There's the Gordon and Betty Moore Foundation, the David and Lucile Packard Foundation, the Alfred P. Sloan Foundation, the W. M. Keck Foundation, the Dreyfus Foundation, the John D. and Catherine T. MacArthur Foundation, and I'm sure I'm leaving out some possibilities. I hope someone out there gives serious consideration to endowing the STS, rather than going with another corporate sponsorship deal that may not stand the test of time.
Update: From the Wired article about this, the STS cost Intel about $6M/yr. Crudely, that means that an endowment of $120M would be enough to support this activity in perpetuity, assuming 5% payout (typical university investment assumptions, routinely beaten by Harvard and others).
Update 2: I've thought about this some more, and maybe the best solution would be for a university to sponsor this. For example, this seems tailor-made for MIT, which styles itself as a huge hub of innovation (see the Technology Review, e.g.). Stanford could do it. Harvard could do $6M a year and not even notice. It would be perfect as a large-scale outreach/high school education sponsorship effort. Comments?
Sunday, September 06, 2015
Science and narrative tension
Recently I've come across some good examples of multidisciplinary science communication. The point of commonality: narrative tension, in the sense that the science is woven in as part of telling a story. The viewer/reader really wants to know how the story resolves, and either is willing to deal with the science to get there, or (more impressive, from the communication standpoint) actually wants to see the science behind the plot resolution.
Possibly the best example of the latter is The Martian, by Andy Weir. If you haven't read it, you should. There is going to be a big budget film coming out based on it, and while the preview looks very good, the book is going to be better. Here is an interview with Andy Weir by Adam Savage, and it makes the point that people can actually like math and science as part of the plot.
Another recent example, more documentary-style, is The Mystery of Matter: Search for the Elements, which aired this past month on PBS in the US. The three episodes are here, here, and here. This contains much of the same information as a Nova episode, Hunting the Elements. It's interesting to contrast the two - some people certainly like the latter's fun, jaunty approach (wherein the host plays the every-person proxy for the audience, saying "Gee whiz!" and asking questions of scientists), while the former has some compelling historical reenactments. I like the story-telling approach a bit more, but that may be because I'm a sucker for history. Note: Nice job by David Muller in Hunting the Elements, using Cornell's fancy TEM to look at the atoms in bronze.
I also heard a good story on NPR this week about Ainissa Ramirez, a materials scientist who has reoriented her career path into "science evangelist". Her work and passion are very impressive, and she is also a proponent of story-telling as a way to hold interest. We overlapped almost perfectly in time at Stanford during our doctorates, and I wish we'd met.
Now to think about the equivalent of The Martian, but where the audience longs to learn more about condensed matter physics and nanoscale science to see how the hero survives.... (kidding. Mostly.)
Tuesday, September 01, 2015
Nano and the oil industry
I went to an interesting lunchtime talk today by Sergio Kapusta, former chief scientist of Shell. He gave a nice overview of the oil/gas industry and where nanoscience and nanotechnology fit in. Clearly one of the main issues of interest is assessing (and eventually recovering) oil and gas trapped in porous rock, where the hydrocarbons can be trapped due to capillarity and the connectivity of the pores and cracks may be unknown. Nanoparticles can be made with various chemical functionalizations (for example, dangling ligands known to be cleaved if the particle temperature exceeds some threshold) and then injected into a well; the particles can then be sought at another nearby well. The particles act as "reporters". The physics and chemistry of getting hydrocarbons out of these environments is all about the solid/liquid interface at the nanoscale. More active sensor technologies for the aggressive, nasty down-hole environment are always of interest, too.
When asked about R&D spending in the oil industry, he pointed out something rather interesting: R&D is actually cheap compared to the huge capital investments made by the major companies. That means that it's relatively stable even in boom/bust cycles because it's only a minor perturbation on the flow of capital.
Interesting numbers: Total capital in hardware in the field for the petrochemical industry is on the order of $2T, built up over several decades. Typical oil consumption worldwide is around 90M barrels equivalent per day (!). If the supply ranges from 87-93M barrels per day, the price swings from $120 to $40/barrel, respectively. Pretty wild.
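Those figures imply a remarkably steep relationship between supply and price. Treating the quoted numbers as endpoints, a back-of-envelope comparison:

```python
# Quoted figures: 87 Mbbl/day -> $120/bbl, 93 Mbbl/day -> $40/bbl
supply_swing = (93 - 87) / 90     # fractional swing in supply (~7%)
price_swing = (120 - 40) / 80     # fractional swing in price (~100%)
sensitivity = price_swing / supply_swing
print(f"A {supply_swing:.1%} supply swing moves price by {price_swing:.0%}")
print(f"Crude sensitivity: ~{sensitivity:.0f}x")
```

In other words, a few percent of oversupply or shortfall moves the price by a factor of three - a vivid illustration of how inelastic short-term oil demand is.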
Thursday, August 27, 2015
Short term-ism and industrial research
I have written multiple times (here and here, for example) about my concern that the structure of financial incentives and corporate governance have basically killed much of the American corporate research enterprise. Simply put: corporate officers are very heavily rewarded based on very short term metrics (stock price, year-over-year change in rate of growth of profit). When faced with whether to invest company resources in risky long-term research that may not pay off for years if ever, most companies opt out of that investment. Companies that do make long-term investments in research are generally quasi-monopolies. The definition of "research" has increasingly crept toward what used to be called "development"; the definition of "long term" has edged toward "one year horizon for a product"; and physical sciences and engineering research has massively eroded in favor of much less expensive (in infrastructure, at least) work on software and algorithms.
I'm not alone in making these observations - Norm Augustine, former CEO of Lockheed Martin, basically says the same thing, for example. Hillary Clinton has lately started talking about this issue.
Now, writing in The New Yorker this week, James Surowiecki claims that "short termism" is a myth. Apparently companies love R&D and have been investing in it more heavily. I think he's just incorrect, in part because I don't think he really appreciates the difference between research and development, and in part because I don't think he appreciates the sliding definitions of "research", "long term" and the difference between software development and physical sciences and engineering. I'm not the only one who thinks his article has issues - see this article at Forbes.
No one disputes the long list of physical research enterprises that have been eliminated, gutted, strongly reduced, or refocused onto much shorter term projects. A brief list includes IBM, Xerox, Bell Labs, Motorola, General Electric, Ford, General Motors, RCA, NEC, HP Labs, Seagate, 3M, Dupont, and others. Even Microsoft has been cutting back. No one disputes that corporate officers have often left these organizations with fat benefits packages after making long-term, irreversible reductions in research capacity (I'm looking at you, Carly Fiorina). Perhaps "short termism" is too simple an explanation, but claiming that all is well in the world of industrial research just rings false.
Monday, August 24, 2015
News items: Feynman, superconductors, faculty shuffle
A few brief news items - our first week of classes this term is a busy time.
- Here is a video of Richard Feynman, explaining why he can't readily explain permanent magnets to the interviewer. This gets right to the heart of why explaining science in a popular, accessible way can be very difficult. Sure, he could come up with really stretched and tortured analogies, but truly getting at the deeper science behind the permanent magnets and their interactions would require laying a ton of groundwork, way more than what an average person would want to hear.
- Here is a freely available news article from Nature about superconductivity in H2S at very high pressures. I was going to write at some length about this but haven't found the time. The short version: There have been predictions for a long time that hydrogen, at very high pressures like in the interior of Jupiter, should be metallic and possibly a relatively high temperature superconductor. There are later predictions that hydrogen-rich alloys and compounds could also superconduct at pretty high temperatures. Now it seems that hydrogen sulfide does just this. Crank up the pressure to 1.5 million atmospheres, and that stinky gas becomes what seems to be a relatively conventional (!) superconductor, with a transition temperature close to 200 K. The temperature is comparatively high because of a combination of an effectively high speed of sound (the material gets pretty stiff at those pressures), a large density of electrons available to participate, and a strong coupling between the electrons and those vibrations (so that the vibrations can provide an effective attractive interaction between the electrons that leads to pairing). The important thing about this work is that it shows that there is no obvious reason why superconductivity at or near room temperature should be ruled out.
- Congratulations to Prof. Laura Greene, incoming APS president, who has been named the new chief scientist of the National High Magnetic Field Lab.
- Likewise, congratulations to Prof. Meigan Aronson, who has been named Texas A&M University's new Dean of Science.
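The ingredients listed in the superconductivity item above (a stiff lattice, a large electronic density of states, strong electron-phonon coupling) are exactly what enters conventional-superconductivity estimates such as the McMillan formula. The parameter values below are illustrative guesses, not numbers taken from the H2S papers, but they show how such ingredients can plausibly push Tc toward 200 K:

```python
import math

def mcmillan_tc(theta_D, lam, mu_star):
    """McMillan estimate of Tc for a conventional (phonon-mediated)
    superconductor. theta_D: Debye temperature in K; lam: electron-phonon
    coupling constant; mu_star: Coulomb pseudopotential."""
    return (theta_D / 1.45) * math.exp(
        -1.04 * (1 + lam) / (lam - mu_star * (1 + 0.62 * lam))
    )

# Illustrative parameters for a very stiff, strongly coupled hydride:
tc = mcmillan_tc(theta_D=1500, lam=2.0, mu_star=0.1)
print(f"Tc ~ {tc:.0f} K")
```

With a Debye temperature of order 1500 K (plausible for a hydrogen-rich solid squeezed to megabar pressures) and strong coupling, the estimate lands in the same ballpark as the reported transition temperature.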
Friday, August 21, 2015
Anecdote 5: Becoming an experimentalist, and the Force-o-Matic
As an undergrad, I was a mechanical engineering major doing an engineering physics program from the engineering side. When I was a sophomore, my lab partner in the engineering fluid mechanics course, Brian, was doing the same program, but from the physics side. Rather than doing a pre-made lab, we chose to take the opportunity to do an experiment of our own devising. We had a great plan. We wanted to compare the drag forces on different shapes of boat hulls. The course professor got us permission to go to a nearby research campus, where we would be able to take our homemade models and run them in their open water flow channel (like an infinity pool for engineering experiments) for about three hours one afternoon.
The idea was simple: The flowing water would tend to push the boat hull downstream due to drag. We would attach a string to the hull, run the string over a pulley, and hang known masses on the end of the string, until the weight of the masses (transmitted via the string) pulled upstream to balance out the drag force - that way, when we had the right amount of weight on there, the boat hull would sit motionless in the flow channel. By plotting the weight vs. the flow velocity, we'd be able to infer the dependence of the drag force on the flow speed, and we could compare different hull designs.
Like many great ideas, this was wonderful right up until we actually tried to implement it in practice. Because we were sophomores and didn't really have a good feel for the numbers, we hadn't estimated anything and tacitly assumed that our approach would work. Instead, the drag forces on our beautiful homemade wood hulls were much smaller than we'd envisioned, so much so that just the horizontal component of the force from the sagging string itself was enough to hold the boats in place. With only a couple of hours at our disposal, we had to face the fact that our whole measurement scheme was not going to work.
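In hindsight, a one-line estimate would have flagged the problem before we ever left campus. A rough drag calculation for a small streamlined hull in a slow flow (all the numbers below are illustrative guesses, not measurements from the actual project):

```python
# Rough drag estimate: F = (1/2) * rho * C_d * A * v^2
rho = 1000.0    # water density, kg/m^3
C_d = 0.1       # drag coefficient for a streamlined hull (guess)
A = 0.005       # frontal area, m^2 (~10 cm x 5 cm, guess)
v = 0.2         # flow speed, m/s (guess)

F = 0.5 * rho * C_d * A * v**2   # drag force, newtons
print(f"Drag force ~ {F * 1000:.0f} mN (~{F / 9.81 * 1000:.1f} grams-force)")
```

A force on the order of a single gram-weight is easily swamped by the tension needed just to straighten a sagging string, which is precisely what we ran into.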
What did we do? With improvisation that would have made MacGyver proud, we used a protractor, chewing gum, and the spring from a broken ballpoint pen to create a much "softer" force measurement apparatus, dubbed the Force-o-Matic. With the gum, we anchored one end of the stretched spring to the "origin" point of the protractor, with the other end attached to a pointer made out of the pen cap, oriented to point vertically relative to the water surface. With fine thread instead of the heavier string, we connected the boat hull to the tip of the pointer, so that tension in the thread laterally deflected the extended spring by some angle. We could then later calibrate the force required to produce a certain angular deflection. We got usable data, an A on the project, and a real introduction, vividly memorable 25 years later, to real experimental work.
Friday, August 14, 2015
Drought balls and emergent properties
There has been a lot of interest online recently about the "drought balls" that the state of California is using to limit unwanted photochemistry and evaporation in its reservoirs. These are hollow balls each about 10 cm in diameter, made from a polymer mixed with carbon black. When dumped by the zillions into reservoirs, they don't just help conserve water: They spontaneously become a teaching tool about condensed matter physics.
As you can see from the figure, the balls spontaneously assemble into "crystalline" domains. The balls are spherically symmetric, and they experience a few interactions: They are buoyant, so they float on the water surface; they are rigid objects, so they have what a physicist would call "hard-core, short-ranged repulsive interactions" and what a chemist would call "steric hindrance"; a regular person would say that you can't make two balls occupy the same place. Because they float and distort the water surface, they also experience some amount of an effective attractive interaction. They get agitated by the rippling of the water, but not too much. Throw all those ingredients together, and amazing things happen: The balls pack together in a very tight spatial arrangement. The balls are spherically symmetric, and there's nothing about the surface of the water that picks out a particular direction. Nonetheless, the balls "spontaneously break rotational symmetry in the plane" and pick out a directionality to their arrangement. There's nothing about the surface of the water that picks out a particular spatial scale or "origin", but the balls "spontaneously break continuous translational symmetry", picking out special evenly-spaced lattice sites. Physicists would say they preserve discrete rotational and translational symmetries. The balls in different regions of the surface were basically isolated to begin with, so they broke those symmetries differently, leading to a "polycrystalline" arrangement, with "grain boundaries". As the water jostles the system, there is a competition between the tendency to order and the ability to rearrange, and the grains rearrange over time. This arrangement of balls has rigidity and supports collective motions (basically the analog of sound) within the layer that are meaningless when talking about the individual balls. We can even spot some density of "point defects", where a ball is missing, or an "extra" ball is sitting on top.
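That "very tight spatial arrangement" is the two-dimensional hexagonal (triangular) lattice, which is provably the densest possible packing of equal disks in a plane. Its packing fraction is easy to compute and to compare with the square-lattice alternative:

```python
import math

# Packing fraction of equal disks of radius r on a triangular lattice:
# each disk occupies a rhombic unit cell of area 2 * sqrt(3) * r^2,
# so the fraction covered is pi * r^2 / (2 * sqrt(3) * r^2).
hexagonal = math.pi / (2 * math.sqrt(3))   # densest 2D packing
square = math.pi / 4                       # square lattice, for comparison
print(f"hexagonal: {hexagonal:.4f}, square: {square:.4f}")
```

The hexagonal arrangement covers about 91% of the surface versus 79% for a square grid, which is why the jostled balls settle into triangular-lattice domains rather than any other pattern.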
What this tells us is that there are certain universal, emergent properties of what we think of as solids that really do not depend on the underlying microscopic details. This is a pretty deep idea - that there are collective organizing principles that give emergent universal behaviors, even from very simple and generic microscopic rules. Knowing that the balls are made deep down from quarks and leptons does not tell you anything about these properties.
Tuesday, August 11, 2015
Anecdote 4: Sometimes advisers are right.
When I was a first-year grad student, I started working in my adviser's lab, learning how to do experiments at extremely low temperatures. This involved working quite a bit with liquid helium, which boils at atmospheric pressure at only 4.2 degrees above absolute zero, and is stored in big, vacuum-jacketed thermos bottles called dewars (named after James Dewar). We had to transfer liquid helium from storage dewars into our experimental systems, and very often we were interested in knowing how much helium was left in the bottom of a storage dewar.
The easiest way to do this was to use a "thumper" - a skinny (maybe 1/8" diameter) thin-walled stainless steel tube, a few feet long, open at the bottom, and silver-soldered to a larger (say 1" diameter) brass cylinder at the top, with the cylinder closed off by a stretched piece of latex glove. When the bottom of the tube was inserted into the dewar (like a dipstick) and lowered into the cold gas, the rubber membrane at the top of the thumper would spontaneously start to pulse (hence the name). The frequency of the thumping would go from a couple of beats per second when the bottom was immersed in liquid helium to more of a buzz when the bottom was raised into vapor. You can measure the depth of the liquid left in the dewar this way, and look up the relevant volume of liquid on a sticker chart on the side of the dewar.
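The last step, converting a depth reading into a liquid volume, is just geometry if you idealize the belly of the dewar as a vertical cylinder. A toy version (the radius here is invented purely for illustration; in practice you trust the manufacturer's sticker chart, since real dewar bellies aren't perfect cylinders):

```python
import math

# Hypothetical storage dewar: treat the liquid-holding belly as a
# vertical cylinder of inner radius ~22 cm. This number is made up
# for illustration; a real dewar ships with a calibrated
# depth-to-volume chart on its side.
RADIUS_CM = 22.0

def liters_from_depth(depth_cm, radius_cm=RADIUS_CM):
    """Liquid-helium volume (liters) implied by a thumper depth reading."""
    area_cm2 = math.pi * radius_cm ** 2
    return area_cm2 * depth_cm / 1000.0   # 1 L = 1000 cm^3

# e.g. the thumper says there's 10 cm of liquid left in the bottom:
print(f"{liters_from_depth(10.0):.1f} L remaining")
```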
The "thumping" pulses are called Taconis oscillations. They are an example of "thermoacoustic" oscillations. The physics involved is actually pretty neat, and I'll explain it at the end of this post, but that's not really the point of this story. I found this thumping business to be really weird, and I wanted to know how it worked, so I walked across the hall from the lab and knocked on my adviser's door, hoping to ask him for a reference. He was clearly busy (being department chair at the time didn't help), and when I asked him "How do Taconis oscillations happen?" he said, after a brief pause, "Well, they're driven by the temperature difference between the hot and cold ends of the tube, and they're a complicated nonlinear phenomenon," in a tone that I thought was dismissive. Doug O. loves explaining things, so I figured either he was trying to get rid of me, or (much less likely) he didn't really know.
I decided I really wanted to know. I went to the physics library upstairs in Varian Hall and started looking through books and chasing journal articles. Remember, this was back in the wee early dawn of the web, so there was no such thing as Google or Wikipedia. Anyway, I somehow found this paper and its sequels. In there is a collection of coupled partial differential equations looking at the pressure and density of the fluid, the flow of heat along the tube, the temperature everywhere, etc., and guess what: They are complicated, nonlinear, and have oscillating solutions. Damn. Doug O. wasn't blowing me off - he was completely right (and knew that a more involved explanation would have been a huge mess). I quickly got used to this situation.
Epilogue: So, what is going on in Taconis oscillations, really? Well, suppose you assume that there is gas rushing into the open end of the tube and moving upward toward the closed end. That gas is getting compressed, so it would tend to get warmer. Moreover, if the temperature gradient along the tube is steep enough, the upper walls of the tube can be warmer than the incoming gas, which then warms further by taking heat from the tube walls. Now that the pressure of the gas has built up near the closed end, there is a pressure gradient that pushes the gas back down the tube. The now warmed gas cools as it expands, but again if the tube walls have a steep temperature gradient, the gas can dump heat into the tube walls nearer the bottom. This is discussed in more detail here. Turns out that you have basically an engine, driven by the flow of heat from the top to the bottom, that cyclically drives gas pulses. The pulse amplitude ratchets up until the dissipation in the whole system equals the work done per cycle on the gas. More interesting than that: Like some engines, you can run this one backwards. If you drive pressure pulses properly, you can use the gas to pump heat from the cold side to the hot side - this is the basis for the thermoacoustic refrigerator.
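The "compression warms the gas" step above is just adiabatic heating of an ideal gas, and a back-of-the-envelope number shows why the steep wall-temperature gradient matters. A sketch, assuming a parcel of helium vapor (monatomic, \(\gamma = 5/3\)) starting near the liquid and an illustrative 10% pressure swing (that amplitude is an assumption, not a measured value for a real thumper):

```python
# Adiabatic compression of an ideal gas: T2 = T1 * (P2/P1)^((gamma-1)/gamma).
# Helium is monatomic, so gamma = 5/3. The starting temperature and the
# 10% pressure swing are illustrative assumptions only.
GAMMA = 5.0 / 3.0

def adiabatic_T(T1_kelvin, pressure_ratio, gamma=GAMMA):
    """Temperature after adiabatic compression by the given pressure ratio."""
    return T1_kelvin * pressure_ratio ** ((gamma - 1.0) / gamma)

T1 = 10.0                        # cold vapor parcel near the liquid surface
T2 = adiabatic_T(T1, 1.10)      # compressed by a 10% pressure swing
print(f"parcel warms from {T1:.1f} K to {T2:.2f} K")
```

The parcel only warms by a few tenths of a kelvin, while the tube runs from ~4 K at the bottom to ~300 K at the top. So the upper walls are easily warmer than the adiabatically compressed gas, and heat flows from wall to gas on the compression stroke (and gas to wall lower down on the expansion stroke), which is exactly the engine cycle described above.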