A couple of my colleagues pointed me to this blog, that of Raphaël Lévy at Liverpool. Lately it has become a clearing house for information about a particular controversy in nanoscale science, the question of "stripy nanoparticles". The ultrashort version of the story: Back in 2004, Francesco Stellacci (then at MIT, now in Switzerland) published a paper in Nature Materials arguing that his group had demonstrated something quite interesting and potentially useful. Very often when solution-based materials chemistry is used to synthesize nanoparticles, the nanoparticles end up coated in a monolayer of some molecule (a ligand). These ligands can act as surfactants to alter the kinetics of growth, but their most important function is to help the nanoparticles remain in suspension by preventing their aggregation (or one of my favorite science terms, flocculation). Anyway, Stellacci and company used two different kinds of ligand molecules, and claimed that they had evidence that the ligands spontaneously phase-segregated on the nanoparticle surface into parallel stripes. His group has gone on to publish many papers in high impact journals on these "stripy" particles.
However, it is clear from the many posts on Lévy's blog, to say nothing of the paper published in Small, that this claim is controversial. Basically, those who disagree with Stellacci's interpretation argue that the scanned-probe images that apparently show stripes are in fact displaying a particular kind of imaging artifact. As an AFM or STM tip is scanned over a surface, feedback control is used to hold some quantity constant (e.g., the cantilever oscillation frequency or amplitude in AFM, or the tunneling current in STM). If the feedback isn't tuned properly, there can be "ringing", so that the image shows oscillatory features as a function of time (and therefore of tip position).
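To make the ringing idea concrete, here is a toy one-dimensional simulation of a feedback loop tracking a surface with a single step. This is a minimal sketch of the generic phenomenon, not a model of any particular instrument; the gains and geometry are illustrative assumptions. With a modest gain the recorded error signal decays smoothly after the step, but with too much gain the loop overshoots on alternating sides, painting periodic oscillations onto a surface that has no periodic features at all.

```python
# Toy 1-D scan line: proportional feedback moves the tip height z toward
# the surface; the "image" records the error signal at each pixel.
# Gains and surface shape are illustrative assumptions only.

def scan(surface, gain):
    """Step the feedback loop along one scan line; return the error trace."""
    z = surface[0]           # tip starts settled on the surface
    image = []
    for s in surface:
        error = s - z        # what the detector sees at this pixel
        image.append(error)  # the recorded "topography" channel
        z += gain * error    # feedback correction for the next pixel
    return image

# Flat surface with one upward step halfway along the scan line.
surface = [0.0] * 50 + [1.0] * 50

well_tuned = scan(surface, gain=0.9)  # error decays monotonically after the step
ringing    = scan(surface, gain=1.8)  # error alternates sign: spurious "stripes"

print([round(e, 3) for e in well_tuned[50:55]])  # one-sided, shrinking errors
print([round(e, 3) for e in ringing[50:55]])     # sign-alternating oscillation
```

The point of the comparison is that both scans see the identical surface; only the loop tuning differs. The oscillatory trace is pure artifact, which is exactly the skeptics' claim about the apparent stripes.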
I have no stake in this, though I have to say that the arguments and images shown by the skeptics are pretty persuasive. I'd have to dig through Stellacci's counterarguments and ancillary experiments, but this doesn't look great.
This whole situation does raise some interesting questions, though. Lévy points out that many articles seem to be published that take the assertion of stripiness practically on faith or on very scant evidence. Certainly once there is a critical mass of published literature in big journals claiming some effect, it can be hard as a reviewer to argue that that body of work is all wrong. Still, if you see (a series of) results in the literature that you really think are incorrectly interpreted, what is the appropriate way to handle something like this? Write a "comment" in each of these journals? How should journals respond to concerns like this? I do know that editors at high profile journals really don't like even reviewing "rebuttal" papers - they'd much rather have a "comment" or let sleeping dogs lie. Interesting stuff, nonetheless.
Update: To clarify, I am not taking a side here scientifically - in the long run, the community will settle these questions, particularly those of reproducibility. Further, one other question raised here is the appropriate role of blogs. They are an alternative way of airing scientific concerns (compared to the comment/rebuttal format), and that's probably a net good, but I don't think a culture of internet campaigns against research with which we disagree is a healthy limiting case.