A blog about condensed matter and nanoscale physics. Why should high energy and astro folks have all the fun?
Thursday, August 30, 2012
Safety training
A brief post for a busy time: For various reasons, I'm interested in lab safety training of grad students (and postdocs and undergrads) at universities. In particular, I am curious about best practices - approaches that are not invasive, burdensome, or one-size-fits-all, but nevertheless actually improve the safety of researchers. If you think your institution does this particularly well, please post in the comments about what it does and why, or drop me an email. Thanks.
Sunday, August 26, 2012
"Impact factors" and the damage they do.
The Wall Street Journal ran this article yesterday, which basically talks about how some journals try to manipulate their impact factors by, for example, insisting that authors of submitted manuscripts add references that point to that specific journal. The impact factor is (according to that article's rendering of the Thomson/ISI definition) the number of citations in a given year to papers a journal published over the preceding two years, divided by the number of papers the journal published in that window. I've written before about why I think impact factors are misleading: they're a metric dominated by outliers rather than typical performance. Nature and Science (for example) have high impact factors not really because their typical paper is much more highly cited than, e.g., Phys Rev B's. Rather, they have higher impact factors because the probability that a paper in Nature or Science is one of those that gets thousands of citations is higher than in Phys Rev B. The kind of behavior reported in the article is the analog of an author self-citing like crazy in an effort to boost citation count or h-index. Setting aside how easy such behavior is to detect and correct for, self-citation can't make a lousy, unproductive researcher look like a powerhouse, but it can make a marginal researcher look less marginal. Likewise, inflating citations can't make a lousy journal suddenly have an IF of ten, but it can make a journal's IF look like 1 instead of 0.3, and some people actually care about this.
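To make the outlier point concrete, here is a minimal toy simulation in Python. The citation counts are entirely invented (a heavy-tailed Pareto draw, not real journal data); the only point is that a mean-like statistic such as the impact factor is pulled up by a handful of blockbuster papers, while the median paper looks far more modest.

```python
import numpy as np

# Toy illustration with invented numbers (not real journal data): citation
# counts are heavy-tailed, so a mean-like statistic like the impact factor
# is dominated by a few blockbuster papers rather than the typical paper.
rng = np.random.default_rng(1)

# Pretend citation counts for 500 papers published over a two-year window.
citations = np.floor(3 * rng.pareto(a=1.5, size=500)).astype(int)

if_like_mean = citations.mean()       # impact-factor-style average
typical_paper = np.median(citations)  # what a typical paper actually gets
top_share = np.sort(citations)[-5:].sum() / citations.sum()  # top 1% of papers

print(f"IF-like mean: {if_like_mean:.1f}, median paper: {typical_paper:.0f}")
print(f"Share of all citations from the top 1% of papers: {top_share:.0%}")
```

With numbers like these, the mean can easily come out several times the median, and a large fraction of all citations sits in the tail - which is the point about Nature and Science versus Phys Rev B.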
Why should any publishing scientific researcher spend any time thinking about this? All you have to do is look at the comments on that article, and you'll see. Behavior and practices that damage the integrity of the scientific publishing enterprise have the potential to do grave harm to the (already precarious) standing of science. If the average person thinks that scientific research is a rigged game, full of corrupt scientists looking only to further their own image and careers, and that scientific research is no more objective than people arguing about untestable opinions, that's tragic.
Friday, August 24, 2012
Just enough to be dangerous....
A colleague and I were talking this morning about what depth of knowledge is desirable in referees. A reviewer who does not know an area in detail will sometimes give a bold or unconventional idea a fair, unbiased hearing. In the other limit, a reviewer who is a hardcore expert and has really pondered the Big Picture questions in an area can sometimes (not always) appreciate a new or creative approach that is likely to have big impact even if that approach is unorthodox. Where you can sometimes really run into trouble is with reviewers who know just enough to be dangerous: they can identify critical issues of concern and know the orthodoxy of a field, but don't necessarily appreciate the Big Picture or potential impact. This may be the root of a claim I've heard from journal editors, that the "recommended referees" people suggest when submitting articles often end up being the harshest critics. Just a thought. In general, we are all well served by getting the most knowledgeable referees possible.
Sunday, August 19, 2012
And this guy sits on the House Science Committee.
Today Congressman Todd Akin from Missouri, also the Republican nominee for the US Senate seat currently held by Sen. Claire McCaskill, said that women have a biological mechanism that makes it very difficult for them to get pregnant in the case of "legitimate rape" (whatever that is). Specifically, he said "If it’s a legitimate rape, the female body has ways to try to shut that whole thing down." Yes, he said that, and there's video. Regardless of your politics or your views on abortion, isn't it incredibly embarrassing that a member of the House Science Committee would say something so staggeringly ignorant?
Update: Once again, The Onion gets it right.
Wednesday, August 15, 2012
Intro physics - soliciting opinions
For the third year in a row, I'm going to be teaching Rice's honors intro mechanics course (PHYS 111). I use the outstanding but mathematically challenging (for most first-year undergrads) book by Kleppner and Kolenkow. It seems pretty clear (though I have done no rigorous study of this) that the students who perform best in the course are those who are most comfortable with real calculus (both differential and integral), not necessarily those with the best high school physics background. Teaching first-year undergrads in this class is generally great fun, though quite a bit of work. Since these are a self-selected bunch who really want to be there, and since Rice undergrads are generally very bright, they are a good audience.
I do confess, though, that (like all professors who really care about educating students) I go back and forth about whether I've structured the class properly. It's definitely set up like a traditional lecture course, and while I try to be interactive with the students, it is a far cry from some of the modern education-research approaches. I don't use clickers (though I've thought seriously about it), and I don't use much peer instruction or discovery-based interaction. The inherent tradeoffs are tricky: we don't really have the properly configured space or the personnel resources for some of the very time-intensive discussion- or discovery-based approaches. Moreover, while those approaches undoubtedly teach some of the audience better than traditional methods, perhaps with greater retention, it's not clear whether the gains outweigh the fact that nearly all of those methods trade subject content for time. That is, in order to teach, e.g., angular momentum really well, they dispense with other topics. It's also not clear to me that these methods are well suited to the Kleppner-Kolenkow level of material.
As unscientific as a blog post is, I'd like to solicit input from readers. Does anyone out there have a particular favorite approach to teaching intro physics at this level? Any evidence, anecdotal or otherwise, that particular teaching methods really lead to improved instruction at the level of an advanced intro class (as opposed to general calc-based physics)?
Wednesday, August 08, 2012
Another sad loss
It was disheartening to hear of another sad loss in the community of condensed matter physics, with the passing of Zlatko Tesanovic. I had met Zlatko when I visited Johns Hopkins way back when I was a postdoc, and he was a very fun person. My condolences to his family, friends, and colleagues.
Saturday, August 04, 2012
Confirmation bias - Matt Ridley may have some.
Matt Ridley is a columnist who writes generally insightful material for the Wall Street Journal about science and the culture of scientists. Over the last three weeks, he has published a three-part series about confirmation bias, the tendency of people to over-weight evidence that agrees with their preconceived notions and to discount evidence that disagrees with them. Confirmation bias is absolutely real and part of the human condition. Climate change skeptics have loudly accused climate scientists of confirmation bias in their interpretation of both data and modeling results. The skeptics claim that people like James Hansen will twist facts unrelentingly to support their emotion-based conclusion that climate change is real and caused by humans.
Generally Mr. Ridley writes well. However, in his concluding column today, Ridley says something that makes it hard to take him seriously as an unbiased observer in these matters. He says: "[A] team led by physicist Richard Muller of the University of California, Berkeley, concluded 'the carbon dioxide curve gives a better match than anything else we've tried' for the (modest) 0.8 Celsius-degree rise.... He may be right, but such curve-fitting reasoning is an example of confirmation bias."
Climate science debate aside, that last statement is just flat-out wrong. First, Muller was a skeptic - if anything, Muller's alarm at the result of his study shows that the conclusion goes directly against his bias. Second, and more importantly, "curve-fitting reasoning" in the sense of "best fit" is at the very heart of physical modeling. To put things in Bayesian language, a scientist wants to test the consistency of observed data with several candidate models or quantitative hypotheses. The scientist assigns some prior probabilities to the models - the going-in estimate of how likely each model is to be correct. An often-used approach is "flat priors", where the initial assumption is that each of the models is equally likely to be correct. Then the scientist does a quantitative comparison of the data with the models, essentially asking the statistical question, "Given model A, how likely is it that we would see this data set?" Doing this right is tricky. Whether a fit is "good" depends on how many "knobs" or adjustable parameters there are in the model and on the size of the data set - if you have 20 free parameters and 15 data points, a good curve fit essentially tells you nothing. After doing this analysis correctly across the different models, the scientist comes up with (in Bayesian language) posterior probabilities that the models are correct. (In this case, Muller may well have assigned the "anthropogenic contributions to global warming are significant" hypothesis a low prior probability, since he was a skeptic.)
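Here is a deliberately oversimplified Python sketch of that reasoning - not Muller's actual analysis, and using invented toy data rather than climate records. It compares a simple model against an over-parametrized one with flat priors, using the Bayesian information criterion as a cheap stand-in for a full marginal likelihood, just to show the bookkeeping: the extra "knobs" of the fancier model are penalized even if it fits the points slightly better.

```python
import numpy as np

# Oversimplified sketch of Bayesian-flavored model comparison. The data are
# invented (NOT climate records); the point is the logic: flat priors, a
# quantitative likelihood for each model, and a penalty for extra parameters.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
sigma = 0.5                                   # assumed known measurement noise
y = 0.3 * x + rng.normal(0.0, sigma, x.size)  # toy data, secretly a straight line

def best_fit_loglike(degree):
    """Least-squares polynomial fit; return (max log-likelihood, # parameters)."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    loglike = -0.5 * np.sum((resid / sigma) ** 2) \
              - x.size * np.log(sigma * np.sqrt(2.0 * np.pi))
    return loglike, degree + 1

loglike_A, k_A = best_fit_loglike(1)  # model A: straight line, 2 free parameters
loglike_B, k_B = best_fit_loglike(5)  # model B: quintic, 6 free parameters

# BIC approximates the marginal likelihood; with flat priors, the posterior
# odds of A over B are roughly exp(-(BIC_A - BIC_B)/2).
bic_A = k_A * np.log(x.size) - 2.0 * loglike_A
bic_B = k_B * np.log(x.size) - 2.0 * loglike_B
print(f"Posterior odds favoring the simple model: {np.exp(-0.5 * (bic_A - bic_B)):.1f}")
```

The quintic will always achieve at least as small a residual, but its extra adjustable parameters cost it in the comparison - which is exactly why "best fit among alternative models," done quantitatively, is the opposite of confirmation bias.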
The bottom line: when done correctly, "curve fitting reasoning" is exactly the way that scientists distinguish the relative likelihoods that competing models are "correct". Saying that "best fit among alternative models" is confirmation bias is just false, if the selection of models considered is fair and the analysis is quantitatively correct.