
Saturday, August 04, 2012

Confirmation bias - Matt Ridley may have some.

Matt Ridley is a columnist who writes generally insightful material for the Wall Street Journal about science and the culture of scientists.  Over the last three weeks, he has published a three-part series about confirmation bias, the tendency of people to overweight evidence that agrees with their preconceived notions and discount evidence that contradicts them.  Confirmation bias is absolutely real and part of the human condition.  Climate change skeptics have loudly accused climate scientists of confirmation bias in their interpretation of both data and modeling results.  The skeptics claim that people like James Hansen will twist facts unrelentingly to support their emotion-based conclusion that climate change is real and caused by humans.

Generally Mr. Ridley writes well.  However, in his concluding column today, he says something that makes it hard to take him seriously as an unbiased observer in these matters:  "[A] team led by physicist Richard Muller of the University of California, Berkeley, concluded 'the carbon dioxide curve gives a better match than anything else we've tried' for the (modest) 0.8 Celsius-degree rise....  He may be right, but such curve-fitting reasoning is an example of confirmation bias."

Climate science debate aside, that last statement is just flat-out wrong.  First, Muller was a skeptic - if anything, Muller's alarm at the result of his study shows that the conclusion goes directly against his bias.  Second, and more importantly, "curve-fitting reasoning" in the sense of "best fit" is at the very heart of physical modeling.  To put things in Bayesian language, a scientist wants to test the consistency of observed data with several candidate models or quantitative hypotheses.  The scientist assigns some prior probabilities to the models - the degree of belief, going in, that each model is correct.  An often-used approach is "flat priors", where the initial assumption is that each of the models is equally likely to be correct.  Then the scientist does a quantitative comparison of the data with the models, essentially asking the statistical question, "Given model A, how likely is it that we would see this data set?"  Doing this right is tricky.  Whether a fit is "good" depends on how many "knobs" or adjustable parameters there are in the model and on the size of the data set - if you have 20 free parameters and 15 data points, a good curve fit essentially tells you nothing.  After carrying out this analysis correctly across the different models, the scientist ends up with posterior probabilities that the models are correct.  (In this case, Muller may have assigned the "anthropogenic contributions to global warming are significant" hypothesis a low prior probability, since he was a skeptic.)
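To make the workflow concrete, here is a minimal sketch in Python of model comparison with flat priors.  Everything in it is invented for illustration - the "data", the two candidate models, and the noise level - and, for simplicity, it scores each model by its likelihood at the best-fit parameters rather than marginalizing over them, as a full Bayesian treatment would.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Invented "observations": a noisy upward trend over 50 time steps.
t = np.linspace(0.0, 1.0, 50)
sigma = 0.1  # assumed measurement uncertainty
y_obs = 0.8 * np.log1p(3.0 * t) + rng.normal(0.0, sigma, t.size)

# Two candidate models, each with one adjustable parameter ("knob").
def model_linear(t, a):
    return a * t

def model_log(t, a):
    return a * np.log1p(3.0 * t)

def best_fit_log_likelihood(model):
    # Fit the knob, then ask: given this model, how likely is this data set?
    popt, _ = curve_fit(model, t, y_obs)
    resid = y_obs - model(t, *popt)
    return -0.5 * np.sum((resid / sigma) ** 2)

log_like = np.array([best_fit_log_likelihood(m)
                     for m in (model_linear, model_log)])

# Flat priors: both models start out equally probable, so the posterior
# probabilities follow from the likelihoods alone.
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()
print("P(linear | data) =", round(posterior[0], 3))
print("P(log | data)    =", round(posterior[1], 3))

Because the two models have the same number of knobs, this is a fair head-to-head comparison; if one model had many more free parameters, its likelihood would need the kind of penalty discussed above.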

The bottom line:  when done correctly, "curve-fitting reasoning" is exactly how scientists distinguish the relative likelihoods that competing models are "correct".  Calling "best fit among alternative models" confirmation bias is simply false, provided the selection of candidate models is fair and the analysis is quantitatively correct.
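That "quantitatively correct" caveat does real work.  Here is a toy Python illustration of the free-parameter caution above, again with invented numbers: 15 noisy points drawn from a simple linear trend, fit once with a straight line (two parameters) and once with a degree-14 polynomial (fifteen parameters - one per data point).

import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)

# 15 invented data points: a linear trend plus noise.
n = 15
x = np.linspace(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.2, n)

line = Polynomial.fit(x, y, deg=1)     # 2 knobs
wiggly = Polynomial.fit(x, y, deg=14)  # 15 knobs: threads every point

# In-sample residuals: the 15-knob curve fits essentially perfectly.
print("line residual:  ", float(np.sum((y - line(x)) ** 2)))
print("wiggly residual:", float(np.sum((y - wiggly(x)) ** 2)))

# The real test: compare each fit with the true trend *between* the
# data points, where neither model was pinned down by the data.
x_test = np.linspace(0.05, 0.95, 200)
truth = 2.0 * x_test
print("line error:  ", float(np.mean((line(x_test) - truth) ** 2)))
print("wiggly error:", float(np.mean((wiggly(x_test) - truth) ** 2)))

The fifteen-knob curve fits the data essentially perfectly and is still the far worse model - it oscillates wildly between the data points.  A "good" fit, by itself, tells you nothing until the number of knobs is taken into account.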

2 comments:

CarlBrannen said...

Regarding Muller's "bias" as a "skeptic", here is what he wrote in MIT's Technology Review magazine nine years ago (page 2):

"Let me be clear. My own reading of the literature and study of paleoclimate suggests strongly that carbon dioxide from burning of fossil fuels will prove to be the greatest pollutant of human history. It is likely to have severe and detrimental effects on global climate. I would love to believe that the results of Mann et al. are correct, and that the last few years have been the warmest in a millennium."

http://www.technologyreview.com/news/402357/medieval-global-warming/

His skepticism extended only to agreeing that the amateurs were correct when they demonstrated that Mann's hockey stick was a case of bad data analysis (and that trees are not, in fact, thermometers).

Thinking that you know the answer before you run the experiment is a good definition of experimental bias. The problem with climate science is that too many people come to the table with the answers already known.

Anonymous said...

Matt Ridley's main failure is willful ignorance and general dishonesty.  Even after his critics have repeatedly refuted his claims, he goes on repeating them.  That is denialism by definition.  If someone isn't interested in honest debate, then what is the point?