A recent paper in the Proceedings of the National Academy of Sciences has caused quite a stir. The authors use citation counts to try to prove that "the relative climate expertise and scientific prominence of the researchers unconvinced of [anthropogenic climate change] are substantially below that of the convinced researchers." I haven't closely checked the methodology, so I can't comment in great detail. Roger Pielke Jr. unsurprisingly offers some sharp criticism. See Michael Levi in Slate and David Bruggeman for more along those lines. For more supportive views, check out this post at RealClimate and Michael Tobis.
I always find Jonathan Gilligan to be very insightful, so I'll highlight his tempered responses to Roger's first post, which is a bit overwrought in my view. Until I read the paper, I'll tentatively agree with Gilligan's assessment that "the PNAS paper seems to me pointless and banal, but innocuous." I know Steve Schneider, a co-author on the PNAS paper, pretty well. I'd be shocked if he actually were trying to intimidate researchers or create a blacklist. I suspect he's simply trying to highlight that not every scientist's opinion counts on climate change, something I've been arguing for a while now. While this paper may not be the best way to make that point, it does need to be made.
How do you reconcile those last two points? Schneider is not trying to create a blacklist, but he is trying to point out who we should and shouldn’t listen to? I don’t get it.
I just watched Schneider give a plenary talk at a conference here in Gold Coast yesterday (more info/reax here), and he mentioned the paper with pride. He suggested that it confirms what we all know: if you're not 'convinced' (whatever that means), you're a bad scientist.
He also mentioned his new book, Science as a Contact Sport. His rough summary: in politics you can't be afraid to throw an elbow or slash someone across the face with a sharpened hockey stick (his words, seriously). He seems to advocate that climate scientists embrace that mentality, and I think his privileged (non-peer-reviewed?) publication of this paper is a perfect example of that.
Hey Ryan. Thanks for the response.
I guess I have a stronger definition of blacklist. As Gilligan put it in one of his comments:
“PNAS publishes a paper that discusses the publication and citation counts of people who signed letters the authors judge supportive vs. opposed to the IPCC consensus.
No names are named in the paper, in the supplementary information, in the web page directly linked from the Supplementary Information (SI), or in web pages linked directly from that page. There is a list of “skeptics”, assembled by one of the authors (this list is not the same list used in the paper), that can be reached in three clicks from the SI (it’s elsewhere on the site to which the SI links). To me, this is a pretty tenuous connection. Others may see it differently.
The authors of the paper nowhere use any language suggesting that their methods should be used to generate a “blacklist.” In fact, although the authors don’t say so, there would be no need for a blacklist because judging scientists purely on their publication and citation counts would weed out almost all of those the authors judge to be at odds with the IPCC consensus, thus making a list redundant.”
As for who we should and shouldn’t listen to, I think Schneider has a point. With respect to climate change science, we should be listening to people like him and Hansen, not Freeman Dyson. For disasters, I’d say we should be listening to Roger. For adaptation, we should listen to people like you!
My attitude here connects with my post on scientific thinking. We both agreed that scientists aren’t magical, unique people and scientific thinking doesn’t give you special authority in every problem. The flip side of that is that scientists do have a (limited) domain of expertise where they should be trusted. So yes, I do think that for the narrow scientific questions around climate change, some people should be listened to more than others.
As for Schneider, I admit that he does get a bit excited at times! Trust me, I've seen it myself. But I also know that he has dealt with a lot of personal and professional attacks. Climate scientists really do feel besieged in some ways, and the opposition doesn't always play nice. Yes, scientists' communication and attitudes leave a lot to be desired. And yes, they probably deserve some of the blame for trying to make science the basis of policy.
But since part of the blame lies with actions taken by scientists and their institutions throughout history, I’m sympathetic towards individual climate scientists. Individual climate scientists didn’t create the current environment where there’s a strong incentive to politicize and distort science. Yet, they’re individually facing the repercussions. And so it doesn’t surprise me when climate scientists react strongly. In the end, we are talking about human beings who can get emotional about things they really care about.
None of this is meant to serve as an excuse for over-the-top behavior. We need to call a spade a spade when we see one. It's just that in this case, I don't think either the PNAS paper or Schneider qualifies for such treatment. I think it's pretty mundane to point out that not every scientist should be listened to on every problem. When I blogged about Chu and the Gulf, you even agreed with me!
As for elbows and hockey sticks, I will agree with you that it is the wrong approach for scientists to take. I’m personally more attracted to Mike Hulme’s approach. But again, I’m also somewhat sympathetic to those who react differently.
Enough of this long, rambling response!
Shoot, I lost my first attempt to comment on this. The paper is responding to a real frustration, but the devil is in the details. Measuring expertise and credibility is not a trivial exercise, and I wonder (ironically) whether any of the authors actually have expertise and credibility in those kinds of measurements. I doubt that creating a blacklist was the intention, but more thoughtfulness would have let the authors anticipate that some would take it this way. I think they could have avoided some of these problems by focusing on the literature itself rather than the researchers behind it (of course, they would have had to change their question somewhat).
Anyone who is conspiratorially minded about the scientific enterprise is definitely not going to be persuaded by this paper. And publishing it in PNAS, which is known as a clubby journal where peer review has historically been iffy, was not a wise tactic. That one of the authors is a well-known advocate and another has a foundation affiliation doesn't bump up the credibility factor either.
This could have been done a lot better.
Good points, Lt. I hadn't thought about whether the authors have expertise in those kinds of measurements. And I agree it could have been done better.