Thursday 4 February 2021

"The major drivers of temperature increase here are statistical adjustments"

New US president Joe Biden has appointed a new climate adviser, one Gavin Schmidt. The same Gavin Schmidt under whom -- as atmospheric scientist Wei Zhang points out -- the temperature adjustments always go up, no matter what the thermometers say. 

For two decades, Schmidt and his predecessor have been administering the widely influential GISS data set of global temperature, which over those decades has shown a steady rise in temperature relative to earlier decades, a rise that correlates with increasing atmospheric CO2. Yet as Zhang observes, it's not so much a rise in temperature that correlates with this increasing CO2, but a rise in the temperature adjustments made by Gavin and his colleagues!

"Why do [GISS's] temperature ADJUSTMENTS correlate with CO2?" wonders Zhang. "The probability that this happens by chance is shockingly close to zero."

What Wei Zhang has illustrated is an almost perfect correlation between adjustments to the surface temperature record made by NASA GISS (and Gavin Schmidt) and the concentration of CO2 in the atmosphere. They’ve artificially cooled the past prior to 1960 (about the time Mauna Loa CO2 measurements started) and artificially warmed 1960 to the present.
    The result? A steeper warming trend (adding 0.24°C) than what actually exists in the unadjusted data.
    It is proof of man-made climate change created by adjusting the temperature data to fit a premise: that man-made CO2 released into the atmosphere is driving temperature.
    But it seems very clear from Dr. Wei Zhang’s analysis that the major drivers of temperature increase here are statistical adjustments.
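
For readers who want to see what that correlation test looks like in practice, here is a minimal sketch; the two annual series below are placeholders invented for illustration, not the actual GISS adjustments or Mauna Loa readings, and the closing comment flags a standard statistical caveat about correlating two trending series.

    import numpy as np
    from scipy import stats

    # Placeholder annual series for ten hypothetical years -- illustrative only, not real data.
    adjustment = np.array([0.00, 0.01, 0.01, 0.02, 0.03,
                           0.03, 0.04, 0.05, 0.05, 0.06])   # net adjustment, deg C
    co2 = np.array([317., 318., 318., 319., 320.,
                    320., 321., 322., 322., 323.])           # annual-mean CO2, ppm

    r, p = stats.pearsonr(adjustment, co2)
    print(f"Pearson r = {r:.3f}, p = {p:.3g}")

    # Caveat: any two series that both trend upward over time will produce a
    # large r and a tiny p, and the p-value assumes independent samples, which
    # autocorrelated annual data are not. The correlation alone therefore can't
    # distinguish "adjustments track CO2" from "both simply rise with time".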


3 comments:

MarkT said...

I'm not convinced this is justified criticism of the statistical adjustments.

I used to go to school with someone who's now a senior Australian climate scientist. We differ politically, but I can vouch for his intelligence (dux of our year), and have no reason to doubt his honesty or integrity. I put this article to him and asked him to comment. He responded quickly:

"You've come to the right place - I'm in charge of the section of the next IPCC report which explains why the latest generation of data sets show more warming than their predecessors (short answer - partly more effective use of the limited data available from remote areas, especially the fast-warming Arctic, and partly accounting better for the widespread change in sea surface temperature measurement from ships to buoys). Will send you some more detail soon."

In a later post he elaborated:

"The basic thing here is that what we need to be most concerned about are "systematic" issues which affect a large area or a large part of the observation network - an issue at a single site may be locally important but will have a negligible effect on a hemispheric average, but if you have a, say, a change in observations technology which occurs over a large part of the world, that's much more important. One also needs to remember that 70% of the world is ocean, so sea surface temperature makes up 70% of the global temperature assessment."

"For sea surface temperature (this is a bit of simplification), there have been three basic types of observing technology - buckets hauled up from over the side of ships and measured on deck; sensors mounted either on a ship's hull or its engine room water intake; and sensors mounted on buoys (either moored or drifting) which aren't associated with a ship at all. (There are sea surface temperature datasets that use satellites but we don't use them in the IPCC assessment). Roughly speaking, nearly all the pre-WW2 observations were buckets, ship-mounted sensors became dominant in the post-war period, now buoys have become progressively more important since the 1990s (I think they're now about 60% of the network). Again as a simplification, bucket observations are typically several tenths of a degree cooler than those from ship-mounted sensors (mostly evaporative cooling as the bucket is lifted to the deck), and buoys on average are about 0.1-0.15 degrees cooler than ships (most likely due to some level of heating from the body of the ships). The most recent generation of data sets corrects for the transition to buoys, the previous generations only corrected for the bucket-to-ship transition. (Because the cool bias of buckets is relatively large and affects such a large percentage of the world's area, the upshot of this is that uncorrected global temperature data actually shows a much stronger warming trend than corrected data). Working out exactly which ships used which methods involves quite a lot of detective work; there's also new sources of historic ocean data turning up all the time in various maritime or military archives (the system for collecting and archiving ocean data is nowhere near as systematic as the system for collecting meteorological data on land)."

[to be continued]

MarkT said...

[Part 2 continued]

"The other issue is the handling of areas with sparse data - this includes high latitudes in both hemispheres, as well as parts of Africa and South America (and even interior Australia, especially interior WA). Most earlier versions of data sets simply ignored areas with missing data and calculated averages over those parts of the world which did have data. That's fine (in a sampling sense) if the areas with missing data are behaving similarly to those which aren't. However, we know through multiple lines of evidence (the limited number of sites that are there, satellite data in the more recent period, as well as indirectly through the melting of land and sea ice) that the Arctic is warming at 2-3 times the global average, so if you are only sampling, say, 10% of the Arctic, you are going to be under-sampling that warming signal. (Conversely, the main 'warming holes' are in places which are reasonably well observed - the North Atlantic between Greenland and Europe (where there are some interesting things going on with ocean circulation), the southeast US, and to some extent northwest Australia where increased rainfall and extra cloud cover has slowed warming). The newest generation of data sets do interpolate over most or all of the polar regions from the available data (using a variety of statistical methods), so that coverage bias has largely been removed in the latest data set generations."

At that point I acknowledged that this sounded plausible as an explanation of why the corrections needed to rise, but asked whether he was personally confident no 'confirmation bias' was at play when the corrections were being applied. After all, I pointed out, there's a difference between following the evidence and having a predetermined conclusion about what should be happening, then going looking for evidence to support it. His response:

"I don't see confirmation bias in that area, partly because - in the case of my work, and it would be the same in a global set - the adjustments are being made at a station level (or ship level if it's ocean), and the end result isn't known until it's all aggregated at the end."

"Where I think you do see some level of confirmation bias - not so much amongst the scientists directly working in the field (the IPCC reports are usually very cautious on this front) but amongst those who are reporting/communicating the science - is in the interpretation of extreme events. While the link between temperature extremes and climate change/CO2 is clear-cut, for many other types of extremes the signal is either much weaker or only applies in particular regions. (For example, I'd be very cautious drawing much of a connection between the 2017-19 NSW drought and climate change, but much more confident drawing a relationship between climate change and drought in Victoria or southwest WA where the rainfall decline is much clearer)."

Stone guy said...

He doesn't actually address the issue; he just ignores it and trots out his canned IPCC talk.