Over the weekend, two groups of researchers noisily announced the release of new climate-science papers they’d written. Physicist Richard Muller’s team at the Berkeley Earth Surface Temperature study (BEST) declared that it had re-confirmed the temperature results of NASA and other groups. The Earth has indeed been heating up, the group found. Muller even wrote a New York Times op-ed to showcase his findings. Meanwhile, climate skeptic Anthony Watts trumpeted a new paper that questioned some of the techniques used by NOAA to calculate U.S. temperature trends. Watts’ paper was quickly heralded by climate-change doubters.
One possibility is that these papers are so crucial that they can’t possibly wait years before being vetted. That was one rationale behind announcing the faster-than-light neutrino result. Einstein might be wrong! That’s big news. But is that true in the case of these climate papers? Elizabeth Muller, the daughter of Richard Muller and a co-founder of the BEST project, tried to suggest as much: “I believe the findings in our papers are too important to wait for the year or longer that it could take to complete the journal review process.”
Yet many climatologists have countered that there's no good substitute for letting peer review do its work, especially in a fraught field like climate science that attracts lots of public attention. A few years ago, Penn State's Michael Mann and NASA's Gavin Schmidt penned a defense of the peer-review process for Real Climate. Yes, they noted, bad papers do get past reviewers, and just because something is published in a journal doesn't mean it should be taken as gospel. But, they argued, the current system works remarkably well.
If new techniques endorsed by the World Meteorological Organisation are applied to official figures, over half of the global warming reported by US land-based thermometers between 1979 and 2008 simply disappears, researchers have found.
The new study used the same raw temperature measurements as US federal scientific agencies, but the team deployed a revised metric that was better at taking into account the quality of the weather stations that housed the thermometers.
Previous studies used a cruder metric to gauge station quality, which must be taken into account to allow for the effect of asphalt, urban development and other local factors on the readings at any given thermometer. The new station-quality metric improves on older methods, relying not merely on distance to heat sources but also on the density of heat sinks and sources near the thermometers.
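The difference between a distance-only metric and a density-aware one can be illustrated with a toy classifier. Everything here is hypothetical for the sake of illustration; the field names, thresholds and class numbers are invented, and the real WMO siting classes rest on far more detailed criteria:

```python
# Toy sketch of two ways to rate a weather station's siting quality.
# All thresholds and Station fields are made up for illustration only;
# the actual WMO siting classification uses much more detailed criteria.
from dataclasses import dataclass

@dataclass
class Station:
    dist_to_nearest_heat_source_m: float  # e.g. asphalt, buildings
    heat_sources_within_200m: int         # density of nearby heat sinks/sources

def crude_class(s: Station) -> int:
    """Older style of metric: distance to the nearest heat source only."""
    return 1 if s.dist_to_nearest_heat_source_m >= 100 else 3

def refined_class(s: Station) -> int:
    """Newer style of metric: distance AND density of nearby heat sources."""
    if s.dist_to_nearest_heat_source_m >= 100 and s.heat_sources_within_200m == 0:
        return 1
    if s.dist_to_nearest_heat_source_m >= 30 and s.heat_sources_within_200m <= 2:
        return 2
    return 4

# A station far from any single heat source, but ringed by several of them,
# rates well under the crude metric yet poorly under the refined one.
s = Station(dist_to_nearest_heat_source_m=120, heat_sources_within_200m=5)
print(crude_class(s), refined_class(s))  # prints "1 4"
```

The point of the sketch is only that a density-aware metric can demote stations the distance-only metric accepts, which is how reclassification can change which stations count as "top quality".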
When the more sophisticated classification system is used, some dramatic results are seen. The new study reveals that the US National Oceanic and Atmospheric Administration (NOAA) discarded the temperature trend from the higher quality weather stations in favour of a warming temperature trend from low quality weather stations.
When non-compliant stations, such as those at airports, are excluded, the top-quality class 1 or 2 thermometers report an increase of just 0.124°C/decade, rather than the 0.308°C/decade NOAA insists upon.
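The "over half of the warming disappears" claim follows directly from those two decadal trends; a quick arithmetic check, using only the figures quoted above:

```python
# Decadal warming trends (°C/decade) as quoted above
noaa_trend = 0.308       # NOAA's figure, all stations
class_1_2_trend = 0.124  # top-quality class 1 or 2 stations only

# Fraction of the reported warming that disappears under the stricter metric
removed_fraction = (noaa_trend - class_1_2_trend) / noaa_trend
print(f"{removed_fraction:.1%} of the reported trend disappears")  # prints "59.7% ..."
```

Roughly 60 per cent of the reported trend goes away, consistent with the article's "over half".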
So between 1979 and 2008 US land did warm, but not by as much as the official state agencies reported. Higher-quality stations, less affected by growing urbanisation, did not reflect the trend.
The Watts work has significant implications. For example, both the NASA global temperature record overseen by climate activist and scientist James Hansen and Richard Muller's recent BEST programme (much in the news lately owing to his putative status as a "convert" to climate alarmism) use cruder station classification systems. Their statistical methods may be unimpeachable, but it now appears that they are applied to unreliable data sets.