originally posted by: the2ofusr1
I don't know if you've seen this, but Ben Santer made a comment over at WUWT. Seems the debate may have started. wattsupwiththat.com... -1754698 a reply to: SlapMonkey
What they say: ‘The rate of warming over the past 15 years [at 0.05C per decade] is smaller than the trend since 1951.’
What this means: In their last hugely influential report in 2007, the IPCC claimed the world had warmed at a rate of 0.2C per decade 1990-2005, and that this would continue for the following 20 years.
The unexpected 'pause' means that at just 0.05C per decade, the rate 1998-2012 is less than half the long-term trend since 1951, 0.12C per decade, and just a quarter of the 2007-2027 prediction.
Some scientists - such as Oxford's Myles Allen - argue that it is misleading to focus on this 'linear trend', and that one should only compare averages taken from decade-long blocks.
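The two ways of summarizing the record mentioned above, a linear trend over a chosen window versus averages of decade-long blocks, can be sketched numerically. The series below is synthetic (a 0.12C/decade trend plus noise), not real HadCRUT4 data; it only illustrates the two calculations:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual temperature anomalies, 1951-2012 (synthetic,
# not real observations): a 0.12 C/decade trend plus noise.
years = np.arange(1951, 2013)
anoms = 0.012 * (years - 1951) + rng.normal(0.0, 0.1, years.size)

def trend_per_decade(yrs, vals):
    """OLS slope of vals regressed on yrs, expressed per decade."""
    slope = np.polyfit(yrs, vals, 1)[0]
    return slope * 10.0

# Linear trend over the full record vs. a short recent window.
full = trend_per_decade(years, anoms)
recent = trend_per_decade(years[-15:], anoms[-15:])

# Myles Allen's alternative: averages over decade-long blocks
# (here 1953-2012, split into six blocks of ten years).
decades = anoms[2:].reshape(-1, 10).mean(axis=1)

print(f"1951-2012 trend: {full:.3f} C/decade")
print(f"1998-2012 trend: {recent:.3f} C/decade")
print("decadal means:", np.round(decades, 3))
```

Because a 15-year window contains so few points, its fitted slope is far noisier than the 61-year trend, which is one reason the short-window comparison is contested.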
What they say: ‘Surface temperature reconstructions show multi-decadal intervals during the Medieval Climate Anomaly (950-1250) that were in some regions as warm as in the late 20th Century.’
What this means: As recently as October 2012, in an earlier draft of this report, the IPCC was adamant that the world is warmer than at any time for at least 1,300 years. Their new inclusion of the ‘Medieval Warm Period’ – long before the Industrial Revolution and its associated fossil fuel burning – is a concession that its earlier statement is highly questionable.
What they say: ‘Models do not generally reproduce the observed reduction in surface warming trend over the last 10 – 15 years.’
What this means: The ‘models’ are computer forecasts, which the IPCC admits failed to ‘see... a reduction in the warming trend’. In fact, there has been no statistically significant warming at all for almost 17 years – as first reported by this newspaper last October, when the Met Office tried to deny this ‘pause’ existed. In its 2012 draft, the IPCC didn’t mention it either. Now it not only accepts it is real, it admits that its climate models totally failed to predict it.
What they say: ‘There is medium confidence that this difference between models and observations is to a substantial degree caused by unpredictable climate variability, with possible contributions from inadequacies in the solar, volcanic, and aerosol forcings used by the models and, in some models, from too strong a response to increasing greenhouse-gas forcing.’
What this means: The IPCC knows the pause is real, but has no idea what is causing it. It could be natural climate variability, the sun, volcanoes – and crucially, that the computers have been allowed to give too much weight to the effect carbon dioxide emissions (greenhouse gases) have on temperature change.
What they say: ‘Climate models now include more cloud and aerosol processes, but there remains low confidence in the representation and quantification of these processes in models.’
What this means: Its models don’t accurately forecast the impact of fundamental aspects of the atmosphere – clouds, smoke and dust.
What they say: ‘Most models simulate a small decreasing trend in Antarctic sea ice extent, in contrast to the small increasing trend in observations... There is low confidence in the scientific understanding of the small observed increase in Antarctic sea ice extent.’
What this means: The models said Antarctic ice would decrease. It’s actually increased, and the IPCC doesn’t know why.
What they say: ‘ECS is likely in the range 1.5C to 4.5C... The lower limit of the assessed likely range is thus less than the 2C in the [2007 report], reflecting the evidence from new studies.’
What this means: ECS – ‘equilibrium climate sensitivity’ – is an estimate of how much the world will warm every time carbon dioxide levels double. A high value means we’re heading for disaster. Many recent studies say that previous IPCC claims, derived from the computer models, have been way too high. It looks as if they’re starting to take notice, and so are scaling down their estimate for the first time.
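The ‘per doubling’ definition can be made concrete. Assuming the standard logarithmic forcing relation, equilibrium warming is ECS × log2(C/C0); the concentrations below (354 → 400 ppm, the 1990-to-recent rise quoted later in this thread) are illustrative:

```python
import math

def expected_warming(ecs, c0_ppm, c_ppm):
    """Equilibrium warming for a CO2 change, assuming the standard
    logarithmic relation: dT = ECS * log2(C / C0)."""
    return ecs * math.log2(c_ppm / c0_ppm)

# Warming implied by a 354 -> 400 ppm rise, at both ends of the
# IPCC's likely ECS range (1.5 C and 4.5 C per doubling).
for ecs in (1.5, 4.5):
    dt = expected_warming(ecs, 354.0, 400.0)
    print(f"ECS {ecs} C/doubling -> {dt:.2f} C equilibrium warming")
```

By construction, plugging in a full doubling (say 280 → 560 ppm) returns the ECS value itself, which is why the range of plausible ECS values dominates long-run projections.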
The IPCC has drawn attention to an apparent leveling-off of globally-averaged temperatures over the past 15 years or so. Measuring the duration of the hiatus has implications for determining if the underlying trend has changed, and for evaluating climate models. Here, I propose a method for estimating the duration of the hiatus that is robust to unknown forms of heteroskedasticity and autocorrelation (HAC) in the temperature series and to cherry-picking of endpoints.
For the specific case of global average temperatures I also add the requirement of spatial consistency between hemispheres. The method makes use of the Vogelsang-Franses (2005) HAC-robust trend variance estimator which is valid as long as the underlying series is trend stationary, which is the case for the data used herein. Application of the method shows that there is now a trendless interval of 19 years duration at the end of the HadCRUT4 surface temperature series, and of 16 – 26 years in the lower troposphere. Use of a simple AR1 trend model suggests a shorter hiatus of 14 – 20 years but is likely unreliable.
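The Vogelsang-Franses estimator itself is involved; as a rough sketch of the general idea, the better-known Newey-West approach also gives a trend standard error that is robust to autocorrelation and heteroskedasticity. The bandwidth rule and synthetic data below are my assumptions, not the paper's method:

```python
import numpy as np

def hac_trend_test(y, max_lag=None):
    """OLS trend in y with a Newey-West HAC standard error.

    A simplified stand-in for the Vogelsang-Franses estimator,
    for illustration only. Returns (slope, se, t_stat).
    """
    n = y.size
    time = np.arange(n, dtype=float)
    X = np.column_stack([np.ones(n), time])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta                        # residuals
    if max_lag is None:                     # common rule-of-thumb bandwidth
        max_lag = int(4 * (n / 100.0) ** (2.0 / 9.0))
    Xu = X * u[:, None]
    S = Xu.T @ Xu                           # lag-0 term
    for lag in range(1, max_lag + 1):
        w = 1.0 - lag / (max_lag + 1.0)     # Bartlett kernel weight
        gamma = Xu[lag:].T @ Xu[:-lag]
        S += w * (gamma + gamma.T)
    XtX_inv = np.linalg.inv(X.T @ X)
    cov = XtX_inv @ S @ XtX_inv             # sandwich covariance
    se = float(np.sqrt(cov[1, 1]))
    slope = float(beta[1])
    return slope, se, slope / se

# A window is "trendless" if |t| stays below ~2 (roughly 95% confidence).
rng = np.random.default_rng(0)
flat = rng.normal(0.0, 0.1, 19)             # 19 trendless synthetic "years"
slope, se, t_stat = hac_trend_test(flat)
print(f"slope={slope:+.4f}/yr, se={se:.4f}, t={t_stat:+.2f}")
```

In this framing, the hiatus duration is the longest end-of-series window over which the trend's robust t-statistic stays insignificant, which is what makes the measurement resistant to cherry-picked endpoints.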
The IPCC does not estimate the duration of the hiatus, but it is typically regarded as having extended for 15 to 20 years. While the HadCRUT4 record clearly shows numerous pauses and dips amid the overall upward trend, the ending hiatus is of particular note because climate models project continuing warming over the period. Since 1990, atmospheric carbon dioxide levels rose from 354 ppm to just under 400 ppm, a 13% increase. [1] reported that of the 114 model simulations over the 15-year interval 1998 to 2012, 111 predicted warming. [5] showed a similar mismatch in comparisons over a twenty-year time scale, with most models predicting 0.2˚C – 0.4˚C/decade warming. Hence there is a need to address two questions: 1) How should the duration of the hiatus be measured? 2) Is it long enough to indicate a potential inconsistency between observations and models? This paper focuses solely on the first question.
a reply to: SonOfTheLawOfOne
Tim Ball October 4, 2014 at 9:55 am
The first action that exposed the modus operandi of the IPCC occurred with Santer’s actions in the 1995 Second Assessment Report. He exploited a very limited editorial policy to dramatically alter the findings of Working Group I in the Summary. It is likely he did this with guidance from those controlling the output, because he was a very recent graduate and appointee to the IPCC, an appointment that was in itself questionable.
Benjamin Santer was a Climatic Research Unit (CRU) graduate. Tom Wigley supervised his PhD, titled “Regional Validation of General Circulation Models”, which used three top computer models to recreate North Atlantic conditions, where data were best. They created massive pressure systems that don’t exist in reality and failed to create known semipermanent systems. In other words, he knew from the start that the models don’t work, but this didn’t prevent him touting their effectiveness, especially after his appointment as lead author of Chapter 8 of the 1995 IPCC Report, titled “Detection of Climate Change and Attribution of Causes”. Santer determined to prove humans were a factor by altering the meaning of what was agreed by the others at the draft meeting in Madrid. Wigley moved to Colorado, where he continued to fund and direct his disciples. Witness Wigley’s brief appearance in the 1990 documentary The Greenhouse Conspiracy and his need to look after his graduate students.
Here are the comments agreed on by the committee as a whole followed by Santer’s replacements.
1. “None of the studies cited above has shown clear evidence that we can attribute the observed [climate] changes to the specific cause of increases in greenhouse gases.”
2. “While some of the pattern-based studies discussed here have claimed detection of a significant climate change, no study to date has positively attributed all or part of the observed climate change to man-made causes.”
3. “Any claims of positive detection and attribution of significant climate change are likely to remain controversial until uncertainties in the total natural variability of the climate system are reduced.”
4. “While none of these studies has specifically considered the attribution issue, they often draw some attribution conclusions, for which there is little justification.”
Santer’s replacements
1. “There is evidence of an emerging pattern of climate response to forcing by greenhouse gases and sulfate aerosols … from the geographical, seasonal and vertical patterns of temperature change … These results point toward a human influence on global climate.”
2. “The body of statistical evidence in chapter 8, when examined in the context of our physical understanding of the climate system, now points to a discernible human influence on the global climate.”
As Avery and Singer noted in 2006, “Santer single-handedly reversed the ‘climate science’ of the whole IPCC report and with it the global warming political process! The ‘discernible human influence’ supposedly revealed by the IPCC has been cited thousands of times since in media around the world, and has been the ‘stopper’ in millions of debates among nonscientists.”
The model situation has deteriorated since Santer’s first efforts, because they reduced the number of weather stations used and adjusted the early temperature record, changing the gradient to create an outcome that supports their thesis. Santer, like all the others, will never be held accountable, and so he continues to believe he did nothing wrong, even though he admitted he made the changes.
Carl Wunsch, a visiting professor at Harvard and professor emeritus of oceanography at the Massachusetts Institute of Technology, offered a valuable cautionary comment on the range of papers finding oceanic drivers of short-term climate variations. He began by noting the challenge just in determining average conditions:
Part of the problem is that anyone can take a few measurements, average them, and declare it to be the global or regional value. It’s completely legitimate, but only if you calculate the expected uncertainty and do it in a sensible manner.
The system is noisy. Even if there were no anthropogenic forcing, one expects to see fluctuations including upward and downward trends, plateaus, spikes, etc. It’s the nature of turbulent, nonlinear systems. I’m attaching a record of the height of the Nile — 700-1300 CE. Visually it’s just what one expects. But imagine some priest in the interval from 900-1000, telling the king that the Nile was obviously going to vanish…
Or pick your own interval. Or look at the central England temperature record or any other long geophysical one. If the science is done right, the calculated uncertainty takes account of this background variation. But none of these papers (Tung’s or Trenberth’s) does that. Overlain on top of this natural behavior are the small, and often shaky, observing systems, both atmosphere and ocean, where the shifting places, times and technologies must also produce a change even if none actually occurred. The “hiatus” is likely real, but so what? The fuss is mainly about normal behavior of the climate system.
The central problem of climate science is what to do and say when your data are, by almost any standard, inadequate. If I spend three years analyzing my data, and the only defensible inference is that “the data are inadequate to answer the question,” how do you publish? How do you get your grant renewed? A common answer is to distort the calculation of the uncertainty, or ignore it altogether, and proclaim an exciting story that the New York Times will pick up.
A lot of this is somewhat like what goes on in the medical business: Small, poorly controlled studies are used to proclaim the efficacy of some new drug or treatment. How many such stories have been withdrawn years later when enough adequate data became available?
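Wunsch's point, that unforced noise alone produces apparent trends and plateaus, can be illustrated with a toy AR(1) simulation. The persistence and noise values below are arbitrary assumptions, not fitted to the Nile or any temperature record:

```python
import numpy as np

rng = np.random.default_rng(7)

# 600 "years" of AR(1) red noise: no forcing, no trend by construction.
phi, sigma, n = 0.8, 0.1, 600
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)

# Slide a 50-year window along the series and record each local OLS trend.
window = 50
t = np.arange(window, dtype=float)
trends = np.array([np.polyfit(t, x[i:i + window], 1)[0]
                   for i in range(n - window)])

# Even with zero underlying trend, individual windows show sizeable
# upward and downward "trends" -- the priest-and-the-Nile effect.
print(f"largest upward 50-yr trend:   {trends.max():.4f} per year")
print(f"largest downward 50-yr trend: {trends.min():.4f} per year")
```

Any honest uncertainty estimate for a short-window trend has to account for this background variability, which is exactly the omission Wunsch criticizes.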
originally posted by: Kali74
a reply to: Gh0stwalker
What's harmful about CO2 is that it is a greenhouse gas that is causing Earth to retain more heat than is part of the natural cycle which in turn is causing the climate to change. It would have to be in the thousands of parts per million to actually be harmful to breathe.
“Our analyses of ice cores from the ice sheet in Antarctica shows that the concentration of CO2 in the atmosphere follows the rise in Antarctic temperatures very closely and is staggered by a few hundred years at most,” explains Sune Olander Rasmussen, Associate Professor and centre coordinator at the Centre for Ice and Climate at the Niels Bohr Institute at the University of Copenhagen.