In the beginning...when I got a job at a major research laboratory testing hypotheses and developing mathematical models of Holocene climate variability using long geophysical time series...the only way to get station, gridded, and divisional climate data was to go to the library and xerox the data tables from government documents, enter the data manually, and then analyze it using FORTRAN programs that you wrote yourself (so, you'll excuse me if I laugh every time someone b!tches that the "raw" climate data are not readily available).
In the late 1980s and early 1990s, we were still trying to determine if the LIA (Little Ice Age) and MWP (Medieval Warm Period) were real. In a 1991 paper, I cynically remarked that, "giving something a name (e.g., Medieval Warm Period) does not make it real." We spent the mid-1990s trying to decompose frequency signals (NAO, PDO, SST, SOI/ENSO, etc.) in the data.
At that time, the overwhelming consensus was on the "skeptical" side. Even the reviled Michael Mann was taking the cautionary road.
“Discrepancies between the observed and model-predicted trends must be resolved before a compelling connection can be drawn between 20th century changes in the behavior of the annual cycle in temperature, and anthropogenic forcing of the climate”
http://www.meteo.psu.edu/~mann/shared/articles/MannPark1996GRL.pdf
And, I'm not sure it is possible to be more equivocal than this:
“Our analysis suggests that a significant share of climate variability on interannual to century timescales may be associated with quasi-periodic processes of either external or internal origins.”
http://www.meteo.psu.edu/~mann/shared/articles/MannPark1994.pdf
The current “scientific consensus” did not swing to the pro-AGW side until the mid-to-late 1990s. Specifically, it was the 1997-1998 El Niño event that triggered the change, because the signal was so strong that everyone saw it in their data and, more importantly, it helped resolve the mid-range noise problems that had plagued our spectral analysis.
If nothing else - I was there when Mann was building his (in)famous hockey stick. In fact, some of my data are in that little puppy. And unlike the scientific numb-skulls who defame Michael from a safe distance, I've disagreed with him in person – and he with me.
I think my position on the subject has been fairly clear.
=====
Mike --
I'm not sure what you mean here, because the accusation by Wegman was that Mann FAILED to address the issue of serial correlation in the residual PC series. It is a BS claim because: he relied on the sleazy manipulation of the PCA by McIntyre and McKitrick; he never actually performed any Box-Jenkins analysis (although, looking at M&M's results, the auto-correlation is visually apparent); and, in the process of committing multiple acts of plagiarism (including the work of Ray Bradley - the "B" in "MBH"), he conveniently left out the parts that were contradictory.
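For anyone who wants to see what the missing check actually involves, here is a bare-bones sketch in Python. Everything in it is made up for illustration - the "residual" series is synthetic and the AR(1) coefficient is an assumption - so it is a sketch of the generic Box-Jenkins-style diagnostic, not a reproduction of anything in MBH, M&M, or the Wegman report:

# Rough sketch: checking a residual series for serial correlation.
# Purely illustrative -- the "residuals" are synthetic AR(1) noise,
# not anything taken from MBH or M&M.
import numpy as np

rng = np.random.default_rng(42)

phi = 0.5          # assumed persistence of the toy residuals
n = 500
eps = rng.normal(size=n)
resid = np.zeros(n)
for t in range(1, n):
    resid[t] = phi * resid[t - 1] + eps[t]

# Lag-1 autocorrelation: near zero for well-behaved white-noise residuals.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]

# Durbin-Watson statistic: ~2 means no serial correlation, <2 means positive.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

print(f"lag-1 autocorrelation: {r1:.2f}")
print(f"Durbin-Watson:         {dw:.2f}")

Two lines of arithmetic, and you know whether the residuals can be treated as independent - which is the kind of check the whole argument turns on.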
PCA requires a full rectangular data matrix - no missing values. In running their PCA over the period beginning in AD 1400, M&M eliminated all data series with a later start date - which is most of the data. Doing this also changes the number, membership, and rankings of the extracted principal components.
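To make the start-date point concrete, here is a toy sketch (same caveat as above: the proxy network, the start dates, and the variance structure are all invented, so only the mechanics carry over). It builds a synthetic network in which a handful of series reach back to AD 1400 and the rest start in 1600, and shows that you get a different leading PC depending on whether you keep the whole network over the shared window or only the long series over the full window:

# Rough sketch of why the choice of start year matters for the PCA.
# Everything here is synthetic -- the proxy network, start dates, and
# variance structure are made up purely to illustrate the mechanics.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1400, 1981)

def make_proxy(start, n_series):
    """Red-noise-ish proxies that only have data from `start` onward."""
    x = np.full((len(years), n_series), np.nan)
    live = years >= start
    for j in range(n_series):
        walk = np.cumsum(rng.normal(size=live.sum())) * 0.1
        x[live, j] = walk + rng.normal(size=live.sum())
    return x

# A few long series back to 1400, many more that start later.
network = np.hstack([make_proxy(1400, 5), make_proxy(1600, 45)])

def leading_pc_variance(x):
    """Fraction of variance captured by PC1 of a complete (no-NaN) matrix."""
    x = x - x.mean(axis=0)
    _, s, _ = np.linalg.svd(x, full_matrices=False)
    return s[0] ** 2 / np.sum(s ** 2)

# (a) All 50 series, but only over the window they all share (1600 on).
common_window = network[years >= 1600, :]
# (b) Only the 5 long series, over the full 1400-1980 window.
long_only = network[:, :5]

print("PC1 variance share, full network / short window:", leading_pc_variance(common_window))
print("PC1 variance share, long series only / full window:", leading_pc_variance(long_only))

Different matrix in, different principal components out - which is the whole point: push the start date back to 1400 and you are no longer decomposing the same data set.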
M&M ran 10,000 Monte Carlo simulations of their PCA (fair enough, it's a common method of model validation). During the simulation runs, they defined a variable called HSI (Hockey Stick Index) that collected statistics on the "blade" of the hockey stick. They then rank-sorted the 10,000 HSI values in descending order of the steepness of the hockey stick's blades - and selected the top 100 (the simple fact that 99% of these had a standard deviation > 1.0 is a clear indication of the non-random statistical bias of their sample).
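Here is roughly what that selection step does, as a toy sketch. The index below (blade mean minus long-term mean, in standard-deviation units) is a stand-in for their HSI rather than their exact formula, and the persistence parameter is an assumption, so only the shape of the argument carries over:

# Rough sketch of the selection step: generate a pile of random red-noise
# series, score each with a hockey-stick-style index, then keep only the
# top 100 scores. The index is a stand-in, not M&M's exact formula, and
# all of the numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

n_sims, n_years, blade_len = 10_000, 581, 79   # e.g. 1400-1980, blade 1902-1980
phi = 0.9                                      # assumed red-noise persistence

# AR(1) red noise, vectorized across the 10,000 simulations.
series = np.zeros((n_sims, n_years))
shocks = rng.normal(size=(n_sims, n_years))
for t in range(1, n_years):
    series[:, t] = phi * series[:, t - 1] + shocks[:, t]

# Hockey-stick-style index: how far the "blade" sits above the whole record.
hsi = (series[:, -blade_len:].mean(axis=1) - series.mean(axis=1)) / series.std(axis=1)

top100 = np.sort(hsi)[::-1][:100]

print(f"mean index, all 10,000 runs: {hsi.mean():+.2f}")    # close to zero - it is just noise
print(f"mean index, top 100 only:    {top100.mean():+.2f}")  # strongly positive by construction

The population is pure noise and averages out to nothing; the top 100 are hockey sticks by construction. Rank-sort the tail of a null distribution and you can "find" whatever shape you went looking for.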
As I already mentioned, you don't need any math to see the auto-correlation in the 100 (out of 10,000) series selected by M&M. Wegman took M&M's results on faith without ever bothering to do any analysis himself. In doing so, he committed a far more serious offense than plagiarism - incompetent and fraudulent analysis.
In any case, the Hockey Stick is old news - and it should never have been such a big deal to begin with. The MBH paper was only ever intended to be a clever little experiment in combining multiple proxy records of varying resolutions into a single regression-based reconstruction.
There were two groups doing the same thing at the same time: MBH in the US and a European team headed by Keith Briffa. Then the IPCC decided that it would include one of the two reconstructions in a future report - and everyone started taking it way too seriously. We all had an opinion on which reconstruction was superior, but the truth is that (because of the robustness of the data) most people would not see any significant difference between the two - and rightfully so.