A couple of my colleagues pointed me to this blog, that of Raphaël Lévy at Liverpool. Lately it has become a clearing house for information about a particular controversy in nanoscale science, the question of "stripy nanoparticles". The ultrashort version of the story: Back in 2004, Francesco Stellacci (then at MIT, now in Switzerland) published a paper in Nature Materials arguing that his group had demonstrated something quite interesting and potentially useful. Very often when solution-based materials chemistry is used to synthesize nanoparticles, the nanoparticles end up coated in a monolayer of some molecule (a ligand). These ligands can act as surfactants to alter the kinetics of growth, but their most important function is to help the nanoparticles remain in suspension by preventing their aggregation (or one of my favorite science terms, flocculation). Anyway, Stellacci and company used two different kinds of ligand molecules, and claimed that they had evidence that the ligands spontaneously phase-segregated on the nanoparticle surface into parallel stripes. His group has gone on to publish many papers in high impact journals on these "stripy" particles.
However, it is clear from the many posts on Lévy's blog, to say nothing of the paper published in Small, that this claim is controversial. Basically those who disagree with Stellacci's interpretation argue that the scanned probe images that apparently show stripes are in fact displaying a particular kind of imaging artifact. As an AFM or STM tip is scanned over a surface, feedback control is used to maintain some constant conditions (e.g., constant AFM oscillation frequency or amplitude; constant STM tunneling current). If the feedback isn't tuned properly, there can be "ringing" so that the image shows oscillatory features as a function of time (and therefore tip position).
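As an aside on the mechanism: feedback "ringing" is easy to reproduce in a toy model. The sketch below is purely illustrative (the gain values are invented, not from any real instrument): a tip height is servoed toward a step in surface height with a simple proportional loop. With a modest gain the error decays monotonically; push the gain too high and the response overshoots and oscillates, which in a real scan shows up as periodic features along the fast-scan direction.

```python
# Toy model of SPM feedback ringing (illustrative only; not any real controller).
# The loop applies one proportional correction per pixel. For gain < 1 the
# error decays monotonically; for 1 < gain < 2 the loop overshoots and
# oscillates ("rings"), imprinting stripe-like features on the image.

def scan_response(gain, steps=200):
    """Tip-height trace while tracking a unit step in surface height."""
    z = 0.0           # current tip height (arbitrary units)
    surface = 1.0     # the step the loop must follow
    trace = []
    for _ in range(steps):
        error = surface - z
        z += gain * error      # one proportional correction per pixel
        trace.append(z)
    return trace

def rings(trace, target=1.0):
    """True if the response ever overshoots the target (oscillatory settling)."""
    return any(z > target + 1e-9 for z in trace)
```

In this toy loop the update is z(n+1) = (1 - gain)·z(n) + gain, so a low-gain scan converges smoothly while a high-gain one alternates above and below the surface before settling, which is qualitatively the artifact the skeptics describe.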
I have no stake in this, though I have to say that the arguments and images shown by the skeptics are pretty persuasive. I'd have to dig through Stellacci's counterarguments and ancillary experiments, but this doesn't look great.
This whole situation does raise some interesting questions, though. Lévy points out that many articles seem to be published that take the assertion of stripiness practically on faith or on very scant evidence. Certainly once there is a critical mass of published literature in big journals claiming some effect, it can be hard as a reviewer to argue that that body of work is all wrong. Still, if you see (a series of) results in the literature that you really think are incorrectly interpreted, what is the appropriate way to handle something like this? Write a "comment" in each of these journals? How should journals respond to concerns like this? I do know that editors at high profile journals really don't like even reviewing "rebuttal" papers - they'd much rather have a "comment" or let sleeping dogs lie. Interesting stuff, nonetheless.
Update: To clarify, I am not taking a side here scientifically - in the long run, the community will settle these questions, particularly those of reproducibility. Further, one other question raised here is the appropriate role of blogs. They are an alternative way of airing scientific concerns (compared to the comment/rebuttal format), and that's probably a net good, but I don't think a culture of internet campaigns against research with which we disagree is a healthy limiting case.
Well, hopefully once he's convinced he can publish his own rebuttal paper like Moses Chan! In the meantime, however, I think if enough people don't believe a certain hypothesis then these papers will stop getting published. At some point, a reviewer has to weigh the mass of literature against one author's set of data in favor. I guess I'm somewhat naively of the belief that, on average, peer review works these kinks out. In this case, being an SPM guy, I tend to hang with the doubters. In my postdoc, when reviewing manuscripts, I would call into question things that could clearly be scanning artifacts and ask for additional information. I don't think it's unreasonable for reviewers to do this even with the body of work on the stripy particles.
I personally think journals should definitely consider rebuttal papers; it's both good scientific sense (it helps inform the community in an even-handed way) and good business sense (nothing draws money like controversy, and the media would buy more articles... though the fact that I can say that is a separate diatribe about publishing models). Comments to the journal, for better or worse, just don't have the same weight.
That's an interesting discussion. I strongly believe in arguing in science, and I'm happy that nowadays it's even easier than before.
A few years ago we were in a similar situation (although the topic was not so "mainstream").
In two different papers the same simple synthesis (benzaldehyde and aniline to form the imine in water) was reported with extremely different yields (4% in one paper and 95% in the other). It was quite interesting for us because we were working on the reversibility of imines.
We repeated both literature experiments and discovered that both groups were overlooking something in their results. We contacted the editors of both journals, but they didn't want to do anything (one of the papers is from a Nobel laureate).
At that point we decided to publish our results as well. The paper was rejected by 5 journals before ending up in Tetrahedron Letters (after 6 referees).
At least now someone looking for a "real" reference on the synthesis of imines in water can find one.
ref: http://www.sciencedirect.com/science/article/pii/S0040403909011459
We first submitted to Nature Materials; that was more than 3 years ago. We went to Nature Materials because they had published the first in the series, and an additional one. Since then, they have published another two. Ours was rejected 5 days before their publication of the third manuscript. It was rejected on the basis of two reports, none of which by AFM experts. I'd like to share those reports, as they make an interesting read, but they are protected by Nature's confidentiality policy on the reviewing process. I have asked Nature Materials to lift this, but they have not replied yet.
should have said:
"none of which by AFM/STM experts"
I toured Stellacci's lab before the paper came out and I was quite skeptical since the stripes are always in the scan direction. I directly questioned students involved and got no real explanation. I have always assumed that it is not real.
While rebuttal papers sound like a good idea, I think it is a slippery slope. It could lead to lots of submissions that are simply vindictive, and it would put too much of the decision on the editors (even if the rebuttals are reviewed). Plus, there is no reason one cannot refute another's result in an original manuscript, which would serve the same purpose.
I think the natural way an incorrect result like this goes away is that it does not advance the field and does not impact anyone else's research.
I do think that there is an inherent flaw in the glossy journals' attitude about not wanting to publish any rebuttals. Sure, as the editor above points out, there is a limit where this could become unproductive. However, we are far in the opposite direction, where a glossy journal can publish a paper that gets a lot of attention, including that of both nonspecialists (since the glossy journal readership is meant to be broad) and, e.g., program officers. Indeed, often there will be a news feature written about the paper to maximize the attention it receives. If such a paper is just wrong, that can have a significant transient effect, where the timescale for the community to course-correct can be years and involve many person-hours of work. The idea that article space is so limited and precious that a direct follow-up or rebuttal is automatically inappropriate bothers me. We had our own experience in this regard (see this post for the links: http://nanoscale.blogspot.com/2010/03/slightly-missing-point.html), where, like Levy, we had to publish such a paper in a totally different journal. In our case the situation was much less far along than this one is.
I realize it's a rather difficult experiment, but I don't see a reason that electron microscopy couldn't be employed to try to figure this out. If we're looking for a way to image these things that doesn't involve a scanning probe, TEM would be the obvious choice. What am I missing here?
Oops, I see that in the supplement to the original paper they provided TEM images. Not sure that the images can be used to argue one way or the other though, since they're right near the resolution limit.
Ryan
There is some discussion of the electron microscopy here:
http://raphazlab.wordpress.com/2012/12/05/scientific-claims-should-be-supported-by-experimental-evidence-1/
(see also discussion of the Weinstock papers in the comments)
Also some discussion of the EM, in the comments of the round up post:
http://raphazlab.wordpress.com/2012/12/13/stripy-revisited-posts-where-to-start/#comments
As somebody who has worked on high-resolution SPM for many years, my first instinct on seeing the images in the Nature Materials paper mentioned is "those are crappy scans". Another worrying aspect of the data is that some of the images have clearly been processed significantly, but nothing much about this processing is mentioned. The TEM images are inconclusive.
On a second reading, though, it looks like some of the sanity checks that any student would perform have indeed been performed. However, none of the results of the cross-checks have been shown (even in the supplementary info). For example, if the scan was taken at 3 resolutions and 3 scan rates, why not show all the images in the supplementary info? I would put a large part of the blame on the referees of this paper - without sufficient backup that is IN PRINT (either in the main paper or in the supplement), it should never have been published. It is a disservice both to the field and to the authors of the paper themselves.
Thanks again for covering the controversy. I have discussed your update in some detail in a specific post here:
http://raphazlab.wordpress.com/2012/12/18/scientific-controversy-is-healthy/
@SPMer
Dear SPMer,
indeed those are crappy scans. Would you like to see raw data showing that these are indeed feedback oscillations, which Nature Materials' "fair" review process failed to catch? I have been drumming the words "feedback ringing" into Stellacci's head and, at that time, he denied the existence of such a phenomenon. The rest of his graduate students, especially the one performing these scans, were even more oblivious.
I raised the flag at MIT in 2005 and nothing happened. This is the second chance. Hats off to Raphael and his persistence.
Pedja. (pedja@mit.edu)
Well, I guess I like whiling away my time.
I was curious about this controversy, so today I read the response by Yu and Stellacci
http://dx.doi.org/10.1002/smll.201202322
to the allegations of Levy et al.
I have to say that I was very disheartened by this response. In particular, as an experimentalist, I looked closely at figures 3 and 4. Figure 3 shows images of nanoparticles taken at 3 different scan rates, claiming that the same features are seen in all of them.
(i) I hate to say this, but the quality of all the images in this figure is absolutely terrible.
(ii) The scan rates mentioned in the figure (300 nm/s, 800 nm/s and 1300 nm/s) are outrageous for the experimental conditions. For an Omicron instrument with relatively slow feedback, I would use such scan rates only for a UHV clean metal surface with angstrom flatness. These scan rates are at least an order of magnitude too high for the sample studied.
(iii) When I want to show that I see the same thing in three conditions, I would pick the same nanoparticle and show the same feature under the three scan conditions. That is not what the authors do - they pick three random nanoparticles in three different images and show that the same wavelength is seen. By statistics alone I would expect to see such correlations.
The next figure (#4) claims to show that the features are preserved under rotation of the scan. The data presented does not show this. Two images are shown, and the authors pick two particles at random and claim that they see lines that match on the two particles. However, if we look at the other particles in the figure, there appears to be absolutely no correlation between the two rotated images for the dimples or stripes on the particles.
What about an autocorrelation analysis? Some other mathematical tool? Surely one can do better.
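For what it's worth, the autocorrelation test asked for above is simple to sketch. The toy below uses synthetic 1-D profiles (not data from any of the papers, and the stripe spacing is made up): a genuinely periodic "stripy" signal produces a strong autocorrelation peak at a lag equal to the stripe spacing, while seeded random noise does not. The same idea extends straightforwardly to 2-D images.

```python
# Sketch of an autocorrelation test for stripiness (synthetic data only).
# A periodic profile shows a large autocorrelation at a lag equal to its
# period; uncorrelated noise does not. On real data one would do this in 2-D.
import math
import random

def autocorr(signal, lag):
    """Normalized autocorrelation of a 1-D signal at a given lag."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal)
    cov = sum((signal[i] - mean) * (signal[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

period = 10                      # stripe spacing in pixels (invented)
stripes = [math.sin(2 * math.pi * i / period) for i in range(500)]
rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(500)]
```

A value of `autocorr(stripes, period)` close to 1, against a near-zero value for the noise profile, is the kind of quantitative evidence a reader could check; picking individual particles by eye cannot substitute for it.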
Stellacci should be doing a better job, sorry.
Dear SPMer:
I truly expected more from Stellacci in his rebuttal. What Stellacci isn't telling you in the paper is that he is **offline** zooming from a 100nm x 100nm scan at 512x512 pixels to an area that is 70x70. Please take a look at the unprocessed zooms: http://raphazlab.wordpress.com/2012/11/25/responding-to-the-response/#comment-565 and I would love to hear your comment back.
Then, Stellacci extrapolates the 70x70 crop up to 256x256 to artificially generate a smoother image (I am holding off on using the word "fabricate") and attempts to persuade the reader of a preferential orientation?
Now, the cherry-picking of data: what happened to the rest of the nanoparticles? Considering that he initially had about 400 nanoparticles in the 100nm x 100nm scan, he picks TWO, which don't even show any orientation --- the extrapolated pixels are Poissonian noise. Notice that there is no quantitative analysis of this data. It leaves the reader to judge the stripiness by visual inspection. How is this even a scientific method?
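One technical point worth making explicit: interpolating a small pixel grid up to a larger one manufactures smoothness. The sketch below (1-D for brevity, seeded synthetic noise, an illustrative factor of 4; none of these numbers come from the papers) shows that linearly upsampling uncorrelated noise yields a signal whose neighboring pixels are strongly correlated, i.e., apparently smooth structure that was never in the data.

```python
# Upsampling uncorrelated noise by interpolation creates pixel-to-pixel
# correlation (apparent smooth features) that the raw data never had.
# Synthetic 1-D example; a 2-D zoom such as 70x70 -> 256x256 behaves likewise.
import random

def upsample_linear(xs, factor):
    """Linearly interpolate a 1-D signal up by an integer factor."""
    out = []
    for a, b in zip(xs, xs[1:]):
        for k in range(factor):
            t = k / factor
            out.append((1 - t) * a + t * b)
    out.append(xs[-1])
    return out

def lag1_corr(xs):
    """Correlation between each sample and its immediate neighbor."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

rng = random.Random(0)
raw = [rng.gauss(0.0, 1.0) for _ in range(500)]
smooth = upsample_linear(raw, 4)
```

Here `lag1_corr(raw)` sits near zero while `lag1_corr(smooth)` is large: the upsampled trace looks smooth purely as an artifact of the interpolation, which is why any processed zoom needs to be disclosed alongside the raw pixels.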
Scanning speed --- I agree, outrageous. Add improper gain settings, and you get feedback oscillations --- or, more famously, ripples. Slow down, and you get no ripples, and he calls those "homoligand" nanoparticles. Stellacci also doesn't disclose his tunneling-current error signal: it oscillates with an amplitude 100 times the set point. In most of the scans, the tunneling current becomes negative, and we all know what that means --- the tip is making contact with the surface.
So, to summarize: cherry-picked data, offline zooms without disclosed extrapolation to a higher resolution, wrongly set gains, no characterization of the PID controller, feedback oscillations, and high scanning speeds for a 100 nm non-flat scan area?
I pointed out all of these problems in 2005. It's 2012 and he is doing exactly the same thing and keeps sending this erroneous data to Nature, which happens to accept whatever he sends in.
Have you ever heard of anything like this before?
I would like to alert the followers of this blog to the despicable behavior of the Associate Editor Pep Pamies of Nature Materials on Raphael's blog. "Pep" has demonstrated unquestionable bias and a high level of unprofessionalism. Even worse, he proved his basic scientific inability to critically evaluate experimental measurements --- in this case, of the STM --- while rigidly defending, on a non-scientific basis, Francesco Stellacci's erroneous STM measurements, which Nature Materials is consistently publishing. This will be a turning point in our defense of science and a fair peer-review process. We are appalled by the low editorial standards of the Nature Publishing Group. Hereby, I alert everyone to take a look at the following posts:
http://raphazlab.wordpress.com/2012/11/25/responding-to-the-response/#comment-586
http://raphazlab.wordpress.com/2012/11/25/responding-to-the-response/#comment-594
Predrag: I think we are imposing on Doug's blog. This is my last post on this subject on Doug's blog.
At the end of the day, if nobody reproduces the stripy stuff, it will die a natural death. I don't see any point in dragging Stellacci through the mud. I will say, however, that I have now read Levy's blog and most of the articles by Stellacci about the stripiness. I am sad that this is what his students are learning in his lab, and I hope that I do not have to review a paper or proposal from his lab. Having said this, even in the tiny field of SPM this is not the first set of papers like this, and it is not likely to be the last. My advice to you is to not invest more of your time into this.
@STMer:
thanks for your advice. I do have a few technical questions regarding statistical image analysis, so if you feel like answering, you know where to find me. (Not related to the controversy directly, of course!)
I am not moving from Doug's blog; I just alerted everybody to something that I could never have imagined would actually happen. I encourage readers to follow those links; SPM people will have a great amount of fun discovering who may in the future judge their submissions to Nature Materials. Scarrrry thought.
SPMer, thanks.
Predrag, with all due respect, if you want to continue this discussion on Raphael Levy's site or your own site, that's great, but I would appreciate it if you are sparing in your use of the comments section of this post. I've already provided links to Levy's ongoing discussion thread, and he's already given you a big platform from which to expound on this topic.
@Doug:
I wanted to share some technical comments with SPMer, as he is not participating in Levy's blog, not to advertise Levy's blog on yours.
I shared the links above because they pretty much directly answer your question "How should journals respond to concerns like this?"
Pedja.
I am an STM-er. For 10 years. Atomic resolution spectroscopy is the only thing I do. Granted I hunt for flat areas to do my work, but I do encounter a lot of nm scale "mountains".
My judgment (without it being worth anything here, as I prefer to remain anonymous) is that the STM images in the papers by Stellacci are not proof of ligand ordering. I believe (but can't prove without actually repeating these measurements myself) that they are indeed feedback-loop ringing.
Just as an FYI for the people involved in this discussion: I think a person who may be able to mediate here (not here, but at Nature) may be Liesbeth Venema. I believe it was she who got the first atomic-resolution carbon nanotube STM images in Cees Dekker's group. Carefully analyzing the corrugation in such images (not spherical but cylindrical) should discriminate between the two camps with respect to the 2D-3D projection arguments.
Moreover, she is/has been an (associate?) editor at Nature.
She is qualified, has the contacts there, is apparently trusted by NPG.
Funny thing is, the TEM image from their appendix, Fig. S2a, which is cited as independent confirmation, is also an instrumental artifact. The ring of black dots is the out-of-focus point spread function (basically a Fresnel fringe). This is a very common problem for casual users of a TEM who are looking for core-shell nanoparticle structures. By changing focus, either a dark or a bright ring can be created. Going into focus will make it go away. The focus of the image can be determined from a quick FFT of the amorphous background, and sure enough, the passband is about a factor of two off from the optimal defocus.
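The FFT check mentioned here can be sketched numerically. In the toy below (all numbers invented; this is not the authors' analysis), an "amorphous background" is faked as random-phase waves confined to a ring of known radius in Fourier space, and that passband radius is then recovered from a radially averaged power spectrum. On real micrographs the same radial average locates the passband of the contrast transfer function, from which the defocus is estimated.

```python
# Toy sketch: recover the spatial-frequency passband of an image from the
# radially averaged power spectrum (the principle behind estimating TEM
# defocus from the amorphous background). All numbers are illustrative.
import cmath
import math
import random

N = 32                 # image size in pixels (toy value)
ring_radius = 8        # spatial frequency of the fake passband, in FFT pixels

# Build the image as a sum of random-phase plane waves whose frequencies lie
# on a ring of radius ~ring_radius (i.e., noise seen through a band-pass).
rng = random.Random(1)
waves = [(kx, ky, rng.uniform(0, 2 * math.pi))
         for kx in range(-N // 2, N // 2)
         for ky in range(-N // 2, N // 2)
         if abs(math.hypot(kx, ky) - ring_radius) < 0.5]

img = [[sum(math.cos(2 * math.pi * (kx * x + ky * y) / N + ph)
            for kx, ky, ph in waves)
        for x in range(N)] for y in range(N)]

def power_at(img, kx, ky):
    """|DFT coefficient|^2 at spatial frequency (kx, ky), computed directly."""
    n = len(img)
    c = sum(img[y][x] * cmath.exp(-2j * math.pi * (kx * x + ky * y) / n)
            for y in range(n) for x in range(n))
    return abs(c) ** 2

def radial_power(img, r, width=0.5):
    """Average power over lattice frequencies in an annulus of radius r."""
    n = len(img)
    vals = [power_at(img, kx, ky)
            for kx in range(-n // 2, n // 2)
            for ky in range(-n // 2, n // 2)
            if abs(math.hypot(kx, ky) - r) < width]
    return sum(vals) / len(vals)

profile = {r: radial_power(img, r) for r in (4, 8, 12)}
peak = max(profile, key=profile.get)   # lands at the passband radius
```

The radial average concentrates essentially all the power at the ring radius, so `peak` comes out at 8: the same one-line maximization applied to a real power spectrum is the "quick FFT" sanity check being described.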
I guess one of the dangers of interdisciplinary work, if you try to do everything yourself instead of forming teams of experts, is that there are an awful lot of artifacts to learn about - sometimes the hard way.
@David A. Muller
I agree with your comment about forming multidisciplinary teams in order to avoid artifacts. However, I feel that the tenure system in the US does not reward that. For example, material growers or TEM experts are very rarely put down as first authors on papers (especially device papers).
Thanks to Anonymous for bringing the Venema geometrical argument to my attention. I have taken this into account here:
http://raphazlab.wordpress.com/2012/11/25/responding-to-the-response/#comment-656
At some point, you should just say he is not a scientist; he should be a fiction writer.
ReplyDeleteComposing isn't only a thing of words yet its how you use your potential for rolling out an improvement in this world. Do share such potential stuff.
ReplyDeletewebsite translation services
مهندس افران جدة
ReplyDeleteشركة انشاء مسابح بجدة
شركة مكافحة حشرات بجدة
شركة تنظيف بجدة
شركة تنظيف مكيفات بجدة