Smerdon et al 2008 is an interesting article on RegEM, continuing a series of exchanges between Smerdon and the Mann group that has been going on for a couple of years.
We haven’t spent as much time here on RegEM as we might have. I did a short note in Nov 2007 here.
In the July and August 2006 open review of Bürger and Cubasch (CPD, 2006), Mann (dba Anonymous Reviewer #2) referred to “correct” RegEM, citing Rutherford et al 2005.
On July 10, 2006, Jean S commented on the Rutherford-Mann 2005 “adaptations”, noting three important ones:
1. use of a “hybrid” approach: separate application of RegEM to “low-frequency” and “high-frequency” components, as separated by Mannian versions of Butterworth filters;
2. stepwise RegEM;
3. an unreported “standardization” step. CA readers were aware by this time that short-segment standardization could have a surprising impact on reconstructions – a point then very much in the news, as its confirmation in the North and Wegman reports was still fresh. Jean S observed of this unreported standardization:
The above code “standardizes” all proxies (and the surface temperature field) by subtracting the mean of the calibration period (1901-1971) and then divides by the std of the calibration period. I’m not sure whether this has any effect to the final results, but it is definitely also worth checking. If it does not have any effect, why would it be there?
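A minimal sketch of that standardization step, in Python rather than the original Matlab (my own illustration of what Jean S describes, with the 1901-1971 window carried over from his comment as an assumption):

import numpy as np

def calibration_standardize(X, years, calib=(1901, 1971)):
    # Standardize each column of X (proxies or gridded temperatures) using
    # only the calibration interval: subtract the calibration-period mean
    # and divide by the calibration-period standard deviation.
    mask = (years >= calib[0]) & (years <= calib[1])
    mu = np.nanmean(X[mask], axis=0)
    sigma = np.nanstd(X[mask], axis=0)
    return (X - mu) / sigma

Whether those means and standard deviations are computed over the calibration interval only, or over the full period, turns out to matter a great deal, as the rest of this post discusses.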
The unreported standardization step noted by Jean S was subsequently determined to be at the heart of an important defect described in Smerdon and Kaplan 2007.
Mann et al 2005 had supposedly tested the RegEM methodology used in the Rutherford et al 2005 reconstruction, which was then presented as mutually supporting the MBH reconstruction (although Rutherford et al 2005 could be contested on grounds other than those discussed by Smerdon, since it used Mannian PCs without apology). The findings of Smerdon and Kaplan 2007 are summarized as follows in Smerdon et al 2008:
Mann et al 2005 attempted to test the R05 RegEM method using pseudoproxies derived from the National Center for Atmospheric Research (NCAR) Climate System Model (CSM) 1.4 millennial integration… Mann et al 2005 did not actually test the Rutherford et al 2005 technique, which was later shown to fail appropriate pseudoproxy tests (Smerdon and Kaplan 2007). The basis of the criticism by Smerdon and Kaplan (2007) focused on a critical difference between the standardization procedures used in the M05 and R05 studies (here we define the standardization of a time series as both the subtraction of the mean and division by the standard deviation over a specific time interval). Their principal conclusions were as follows: 1) the standardization scheme in M05 used information during the reconstruction interval, a luxury that is only possible in the pseudoclimate of a numerical model simulation and not in actual reconstructions of the earth’s climate; 2) when the appropriate pseudoproxy test of the R05 method was performed (i.e., the data matrix was standardized only during the calibration interval), the derived reconstructions exhibited biases and variance losses throughout the reconstruction interval; and 3) the similarity between the R05 and Mann et al. (1998) reconstructions, in light of the demonstrated problems with the R05 technique, suggests that both reconstructions may suffer from warm biases and variance losses.
In their Reply to Smerdon and Kaplan 2007, Mann et al (2007b) claimed that the selection of the ridge parameter using generalized cross validation (GCV), as performed in R05 and M05, was the source of the problem:
The problem lies in the use of a particular selection criterion (Generalized Cross Validation or ‘GCV’) to identify an optimal value of the ‘ridge parameter’, the parameter that controls the degree of smoothing of the covariance information in the data (and thus, the level of preserved variance in the estimated values, and consequently, the amplitude of the reconstruction).
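For background, here is a minimal generic sketch of ridge regression with the ridge parameter chosen by GCV, written in Python for illustration; it is textbook ridge/GCV, not the RegEM code, and the variable names are my own:

import numpy as np

def gcv_ridge(X, y, lambdas):
    # Choose the ridge parameter by generalized cross validation:
    #   GCV(lambda) = n * ||(I - H)y||^2 / trace(I - H)^2,
    # where H = X (X'X + lambda*I)^{-1} X' is the ridge "hat" matrix.
    # Larger lambda smooths the covariance information more heavily and
    # damps the amplitude of the fitted values.
    n, p = X.shape
    best_gcv, best_lam, best_beta = np.inf, None, None
    for lam in lambdas:
        A = X.T @ X + lam * np.eye(p)
        H = X @ np.linalg.solve(A, X.T)
        resid = y - H @ y
        gcv = n * float(resid @ resid) / (n - np.trace(H)) ** 2
        if gcv < best_gcv:
            best_gcv, best_lam = gcv, lam
            best_beta = np.linalg.solve(A, X.T @ y)
    return best_lam, best_beta

The Mann et al claim, in effect, is that this selection step, rather than the standardization, controls the amplitude of the reconstruction; Smerdon et al 2008 test and reject that explanation.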
Smerdon et al 2008 (JGR) delicately observed that this assertion was supported only by arm-waving:
The authors do not elaborate any further, however, making it unclear why such conclusions have been reached.
Smerdon et al 2008 report that the “explanation” of Mann et al 2007a, 2007b for the problem is invalid, stating:
These results collectively rule out explanations of the standardization sensitivity in RegEM-Ridge that hinge on the selection of the regularization parameter, and point directly to the additional information (i.e., the mean and standard deviation fields of the full model period) included in the M05 standardization as the source of the differences between M05- and R05-derived reconstructions. It should be noted further that this information, especially in terms of the mean, happens to be “additional” only because of a special property of the dataset to which RegEM is applied herein: missing climate data occur during a period with an average temperature that is significantly colder than the calibration period. This property clearly violates an assumption that missing values are missing at random, which is a standard assumption of EM (Schneider 2006). If the missing data within the climate field were truly missing at random, there presumably would not be a significant systematic difference between the M05 and R05 standardizations, and hence corresponding reconstructions. The violation of the randomness assumption, however, is currently unavoidable for all practical problems of CFRs during the past millennium and thus its role needs to be evaluated for available reconstruction techniques.
Finally, when the application of RegEM-Ridge is appropriately confined to the calibration interval, the method is particularly sensitive to high noise levels in the pseudoproxy data. This sensitivity causes low correlation skill of the reconstruction and thus a strong “tendency toward the mean” of the regression results. It therefore will likely pose some challenges to any regularization scheme applied to this dataset when the noise level in the proxies is high. We thus expect RegEM-TTLS, which according to M07a does not show standardization sensitivity, to have significantly higher noise tolerance and skill than RegEM-Ridge. The precise reasons and details of this skill increase are a matter for future research. It remains a puzzling question, however, as to why the R05 historical reconstruction that was derived using RegEM-Ridge and the calibration-interval standardization (thus expected to be biased warm with dampened variability) and the M07a historical reconstruction that used RegEM-TTLS (thus expected not to suffer significantly from biases) are not notably different. The absence of a demonstrated explanation for the difference between the performance of RegEM-Ridge and RegEM-TTLS, in light of the new results presented herein, therefore places a burden of proof on the reconstruction community to fully resolve the origin of these differences and explain the present contradiction between pseudoproxy tests of RegEM and RegEM-derived historical reconstructions that show little sensitivity to the method of regularization used.
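The role of the colder reconstruction interval is easy to see in a toy example. The numbers below are my own and purely illustrative; they simply mimic a series whose missing early portion is systematically colder than its calibration portion, the situation the quoted passage describes as violating the missing-at-random assumption:

import numpy as np

rng = np.random.default_rng(0)
recon = rng.normal(-0.5, 1.0, 850)   # colder "missing" era (reconstruction interval)
calib = rng.normal(0.0, 1.0, 150)    # warmer calibration era, the only part observed in practice
series = np.concatenate([recon, calib])

# M05-style: mean/std taken over the FULL model period, which uses
# information from the unobservable reconstruction interval.
m05 = (series - series.mean()) / series.std()

# R05-style: mean/std taken over the calibration era only, which is all
# a real-world reconstruction can ever use.
r05 = (series - calib.mean()) / calib.std()

# The calibration-era means differ systematically between the two versions,
# precisely because the missing data are not missing at random.
print(m05[-150:].mean(), r05[-150:].mean())

If the missing portion had the same mean and variance as the calibration portion, the two standardizations would agree up to noise and, as the quoted passage notes, there would be no systematic difference between the M05- and R05-style reconstructions.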
While Mann is normally not reticent about citing papers under review, Smerdon et al 2008 is, for some reason, not cited in either Mann et al 2008 or Steig et al 2009.
In my opinion, there are other issues with the RegEM project, quite aside from these. They relate more to exactly what one is trying to accomplish with a given multivariate methodology.
References:
Bürger, G., and U. Cubasch. 2005. Are multiproxy climate reconstructions robust? Geophysical Research Letters 32: L23711.
—. 2006. On the verification of climate reconstructions. Climate of the Past Discussions 2: 357-370.
Mann, M. E., S. Rutherford, E. Wahl, and C. Ammann. 2005. Testing the Fidelity of Methods Used in Proxy-Based Reconstructions of Past Climate. Journal of Climate 18, no. 20: 4097-4107.
Mann, M. E., S. Rutherford, E. Wahl, and C. Ammann. 2007a. Robustness of proxy-based climate field reconstruction methods. Journal of Geophysical Research 112. (revised Feb 2007, published June 2007) url
Mann, M. E., S. Rutherford, E. Wahl, and C. Ammann. 2007b. Reply to Smerdon and Kaplan. Journal of Climate 20: 5671-5674. url (Nov 2007)
Mann, M.E. 2006. Interactive comment on “On the verification of climate reconstructions” by G. Bürger and U. Cubasch. Climate of the Past Discussions 2: S139-S152. url
Rutherford, S., M. E. Mann, T. J. Osborn, R. S. Bradley, K. R. Briffa, M. K. Hughes, and P. D. Jones. 2005. Proxy-Based Northern Hemisphere Surface Temperature Reconstructions: Sensitivity to Method, Predictor Network, Target Season, and Target Domain. Journal of Climate 18, no. 13: 2308-2329.
Smerdon, J. E., J. F. González-Rouco, and E. Zorita. 2008. Comment on “Robustness of proxy-based climate field reconstruction methods” by Michael E. Mann et al. Journal of Geophysical Research 113. url
Smerdon, J. E., and A. Kaplan. 2007. Comments on “Testing the Fidelity of Methods Used in Proxy-Based Reconstructions of Past Climate”: The Role of the Standardization Interval. Journal of Climate 20: 5666-5670. url
Smerdon, J. E., A. Kaplan, and D. Chang. 2008. On the origin of the standardization sensitivity in RegEM climate field reconstructions. Journal of Climate 21: 6710-6723. url
