How Good Are Past Predictions of Global Warming?

Tom M.L. Wigley

Critics of climate science claim that climate models lack predictive skill. In fact, some of the earliest predictions made thirty years ago have performed remarkably well.

 


Responsible bodies such as the Intergovernmental Panel on Climate Change (IPCC) provide overwhelming scientific evidence for a dominant anthropogenic influence on climate over at least the past fifty years. However, there are still outspoken critics, from Donald Trump on down, who deny this reality. The typical criticism, rarely supported by any credible analysis, is that climate models are wrong and therefore cannot be trusted as a basis for policy. But are climate models wrong? Have predictions made by climate models in the past turned out to be flawed?

This denial of the science is alarming because if we fail to take the science seriously and act too little or too slowly to ameliorate the perceived threat of future climate change, the consequences could be very serious. But there is another side to this coin: What if the magnitude of the threat has been overestimated? The uncertainties in projections of future climate change are large, and future changes could be either more or less than the central projections. The problem policy makers face, therefore, is how to act under such uncertainty. My personal view is that it is better to err on the side of caution, treating mitigation and advance adaptation measures as insurance policies. However, the possibility of erring on the “too large a response” side is still of concern.

This issue came home to me recently while watching a video of the recent (September 21, 2019) United Nations Youth Climate Summit. Summing up after short presentations by articulate youths concerned by what they see as a lack of action by the nations of the world, U.N. Secretary-General António Guterres unwittingly captured my concern: “The worst forecasts that were made are being proven wrong, not because they were too dramatic (but) because they were not dramatic enough in relation to reality.”

What the Secretary-General said is not that climate models are wrong but that even the “worst forecasts” have been underestimating the magnitude of change relative to observations.

One can cherry-pick the data to support the Secretary-General’s statement (the rapidity and magnitude of Arctic sea ice changes are a good example), but one can still ask: Who is right? Are the critics (a.k.a. “deniers”) right? Are climate models seriously flawed? Or, to put the criticism less stridently, are models in general grossly overestimating changes? Or is the Secretary-General right that models are underestimating changes?

I will attempt to at least throw some light on this issue, but before I do, some history is relevant. The idea that there might be significant anthropogenic global warming in the future goes back into the nineteenth century, and there are excellent works that document this early history, such as the book The Discovery of Global Warming by Spencer Weart (2008). A milestone in the development of the science, however, came in 1979 with what is referred to as the “Charney Report” (U.S. National Research Council 1979). This was the first attempt to review the state of the science on the global warming problem, put together by some of the leading climate experts of the day. Surprisingly, as a recent Conversation article has reminded us (Nicholls 2019), some of the key statements in this report were almost prescient in their accuracy.

The Charney Report was published soon after the first three-dimensional, global, spatially specific (i.e., “gridded”) climate models had been developed, beginning with the work of Syukuro Manabe and Richard Wetherald (1975) at the National Oceanic and Atmospheric Administration’s (NOAA) Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. These atmospheric General Circulation Models (GCMs) were relatively rudimentary compared with those being used today, but they still required state-of-the-art computing power to run. A serious limitation was that they used extremely simple realizations of the ocean with no dynamics and so were unable to give any information about the possible time evolution of climate. The next generation of such models, with more realistic three-dimensional oceans coupled to an atmospheric GCM, are referred to as coupled Ocean-Atmosphere GCMs (OAGCMs). Although OAGCMs themselves date back somewhat earlier, the first application of one to the problem of increasing CO2 levels was in 1982 (Bryan et al. 1982).

Observations of the climate were equally rudimentary. There were very few satellite data, so spatial coverage was restricted to information from weather stations, leaving large data gaps, particularly over the oceans. It was not until 1986 that the first estimates of true global-average temperature changes (i.e., covering both the land and ocean areas) were made (Jones et al. 1986), thanks to painstaking work by researchers working independently in the United Kingdom and the United States to compile and correct collections of observations from ships at sea. It was only then that we had the possibility, in principle at least, of comparing model predictions of changes in global-mean temperature with observed changes.

But even here we were thwarted. The most sophisticated models, such as the Bryan et al. model noted above, were not yet able to consider the full suite of species that could influence the climate, so useful comparisons between modeled and observed change were not possible. Ideally, we would like to compare the three-dimensional patterns of change in several observed climate variables with results from models driven by realistic projections of the emissions of all the important climate forcing agents. This would provide a very stringent test of models, but in the 1980s we were still a decade away from being able to make comparisons of this type (Santer et al. 1996).

The first credible projections of future climate were made by Jim Hansen and colleagues at NASA’s Goddard Institute for Space Studies (GISS) (Hansen et al. 1988). This seminal paper used the GISS atmospheric GCM coupled to what was still a relatively simple ocean model. The key difference from most other GCM studies of the time was that it accounted for heat transport below the ocean’s mixed layer and so could quantify oceanic lag effects. Hansen and his colleagues drove their model with estimates of projected changes in atmospheric composition (rather than emissions)—most importantly, concentrations of carbon dioxide and methane. I discuss results from their simulations below.

In parallel with this work, other simpler (but still physically realistic) models were being developed and explored: energy-balance models (EBMs). To project future climate changes, these models still had to account for oceanic thermal inertia, which they did by modeling oceanic heat transfer as an upwelling-diffusion process. The relevant models are called Upwelling-Diffusion EBMs (UD EBMs). (Upwelling-diffusion here refers to the mathematical treatment used to quantify the vertical flux of heat into and through the ocean.) The oceans are the key to getting realistic estimates of future climate change because they act as a massive heat sink that slows the response of the climate system to any form of external forcing. These external forcings might be, for example, changes in greenhouse gas concentrations (our primary concern), changes in the energy output of the Sun, or the effects of large volcanic eruptions (possible obfuscating effects). These models cannot project the three-dimensional details of climate change; their primary output is the change in global-mean temperature (GMT).
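For readers who like to see the machinery, the sketch below shows what a UD EBM boils down to: a well-mixed surface layer losing heat downward into a one-dimensional deep-ocean column by diffusion, with cold water slowly upwelling from below. It is a minimal illustration under assumed parameter values (mixed-layer depth, diffusivity, upwelling rate, feedback strength are illustrative round numbers), not the STUGE or MAGICC code.

```python
import numpy as np

# Minimal upwelling-diffusion EBM sketch. All parameter values are
# illustrative assumptions, not those of any published model.
YEAR = 3.15e7            # seconds per year
RHO_CP = 4.2e6           # volumetric heat capacity of sea water, J m^-3 K^-1
H_MIX = 70.0             # mixed-layer depth, m
KAPPA = 1.0e-4           # vertical diffusivity, m^2 s^-1
W_UP = 4.0 / YEAR        # upwelling velocity, ~4 m/yr
LAM = 3.7 / 2.5          # feedback for a 2.5 C sensitivity, W m^-2 K^-1
NZ, DZ = 40, 100.0       # deep ocean: 40 layers of 100 m
DT = YEAR / 20.0         # time step: 1/20 year

def step(t_mix, theta, forcing):
    """Advance the mixed layer (t_mix) and deep column (theta) one step."""
    # Heat flux (W m^-2) from the mixed layer into the top of the column
    f_down = RHO_CP * KAPPA * (t_mix - theta[0]) / DZ
    # Mixed layer: radiative forcing minus feedback minus ocean uptake
    t_new = t_mix + DT * (forcing - LAM * t_mix - f_down) / (RHO_CP * H_MIX)
    # Deep column: diffusion plus upwelling of colder water from below
    pad = np.concatenate(([t_mix], theta, [0.0]))  # 0.0: cold bottom water
    diff = KAPPA * (pad[:-2] - 2.0 * pad[1:-1] + pad[2:]) / DZ**2
    upw = W_UP * (pad[2:] - pad[1:-1]) / DZ        # upwind, z positive down
    return t_new, theta + DT * (diff + upw)

# Drive with a steadily increasing forcing (roughly a 1%/yr CO2 ramp)
t_mix, theta = 0.0, np.zeros(NZ)
for yr in range(100):
    for _ in range(20):                            # 20 substeps per year
        t_mix, theta = step(t_mix, theta, forcing=0.04 * yr)
print(f"Mixed-layer warming after 100 years: {t_mix:.2f} C")
```

The essential point is visible in the `f_down` and upwelling terms: heat drains out of the surface layer into the deep ocean, so the surface warms more slowly than the forcing alone would suggest.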

Even with physically credible climate models, it is still a challenge to predict the future, because we also need to predict how the emissions of greenhouse gases and other climate forcing agents will evolve. As I have noted, the Hansen et al. study bypassed emissions in favor of directly estimated changes in concentrations, whereas UD EBMs generally add, up front, so-called “gas cycle” models that convert emissions to concentrations and so allow the models to be driven by emissions projections.
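A minimal sketch of that “gas cycle” step is shown below. It uses a multi-exponential impulse-response fit of the kind common in simple carbon-cycle work; the pool fractions, time constants, and baseline concentration are representative assumed values, not the ones used in STUGE or MAGICC.

```python
import numpy as np

# Hedged sketch: converting annual CO2 emissions into concentrations
# with a multi-exponential impulse-response function. Coefficients are
# representative carbon-cycle fit values, assumed for illustration.
A = np.array([0.217, 0.259, 0.338, 0.186])     # airborne-pool amplitudes
TAU = np.array([np.inf, 172.9, 18.51, 1.186])  # e-folding times, years
PPM_PER_GTC = 1.0 / 2.12                       # ~2.12 GtC of carbon per ppm

def emissions_to_concentration(emissions, c0=354.0):
    """Map annual CO2 emissions (GtC/yr) to concentrations (ppm)."""
    pools = np.zeros_like(A)
    decay = np.exp(-1.0 / TAU)                 # per-year survival per pool
    out = []
    for e in emissions:
        pools = pools * decay + A * e          # decay old carbon, add new
        out.append(c0 + PPM_PER_GTC * pools.sum())
    return np.array(out)

# Example: 8 GtC/yr held constant for 30 years from a ~1990 baseline
print(emissions_to_concentration([8.0] * 30)[-1])  # about 426 ppm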

By the late 1980s, we had two methods available to estimate future climate change: in spatial detail, using models such as that of Hansen et al., and, for global-mean temperature only, using UD EBMs. In addition, we had good estimates of how global temperatures had changed in the real world since the late nineteenth century. The stage was set for testing models against observations, but we still had to wait for the future to catch up to the model projections.

So here we are, thirty years on, with the time ripe to see how well these early projections have performed. In the following, I give results for five of the earliest projections of changes in global-mean temperature, made in 1988, 1989, 1990, and 1992 (two in that final year) and summarized in Table 1.

As an example of the early work with UD EBMs: in 1989 I was asked to give a presentation on the state of climate science to U.K. Prime Minister Margaret Thatcher and her cabinet (U.K. Dept. of the Environment 1989). In that presentation I used a UD EBM called STUGE (Wigley et al. 1991), developed in collaboration with Sarah Raper when we were colleagues in the Climatic Research Unit at the University of East Anglia, to give estimates of future GMT changes from 1985 out to 2040, taking into account likely changes in the emissions of the main greenhouse gases (CO2, CH4, N2O, and a large number of halocarbons). In these estimates, the only drivers of climate change were future greenhouse gas emissions. Yet at about this time we already realized that future changes in climate would be caused not only by greenhouse gases but also by “aerosols”: small particles or droplets that reflect incoming solar radiation and change the radiative properties of clouds, causing a cooling effect that would, to some uncertain degree, offset the warming induced by greenhouse gases.

This neglect of aerosol effects was a serious shortcoming, because the main source of CO2, coal burning, produces not only CO2 but also sulfur dioxide (SO2). When oxidized and combined with water vapor, SO2 emissions lead to a build-up of sulfate (mainly sulfuric acid) aerosols in the atmosphere. These aerosols could have a substantial cooling effect if SO2 emissions increased or, conversely, a warming effect if emissions decreased. Although this was well known at the time, the magnitude of the aerosol effect was highly uncertain, as it still is today (Wigley and Santer 2013); hence its neglect in the 1989 projections.

The 1989 global-mean temperature projections are shown in Figure 1, where they are compared with observations. The projections used estimates of future greenhouse gas emissions, and uncertainties in the magnitude of these emissions were recognized as a major source of uncertainty in future GMT estimates. We now know that the greenhouse gas emissions projections made then were generally too high, so the projected future concentrations, particularly for CO2 and CH4 (see Table 1), were also too high. This alone would cause the warming projections to be overestimates.

This is indeed the case, as shown in Figure 1 (from U.K. Dept. of the Environment 1989) and Table 1. The figure shows two projections for temperature, labeled LOW and HIGH, with the LOW–HIGH range being an attempt to account for uncertainties in both future greenhouse gas emissions and the structure of the climate model. For the latter, the main uncertainties were (and still are) in the climate “sensitivity” (characterized by the amount of global-mean warming that would eventually occur if the atmospheric concentration of CO2 were doubled) and in the rate at which heat is transferred from the atmosphere into the ocean. We assumed a climate sensitivity of 2.5°C, close to the current best estimate. Our overestimate of future emissions meant that our projections of future warming were too high, even for the LOW projection, as is clear from Figure 1.
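To make the “sensitivity” concept concrete, here is a back-of-envelope calculation using the standard logarithmic relationship between CO2 concentration and radiative forcing. It is an illustration under assumed concentration values, not the UKDOE calculation.

```python
import numpy as np

# Equilibrium warming implied by a CO2 increase, for a given sensitivity.
S = 2.5                 # sensitivity: equilibrium warming per doubling, C
C0, C = 354.0, 419.0    # CO2 in ppm: ~1990 level and a projected 2018 level
warming_eq = S * np.log(C / C0) / np.log(2.0)
print(f"Equilibrium warming for {C0} -> {C} ppm: {warming_eq:.2f} C")  # ~0.61
# The realized (transient) warming is smaller, because ocean heat uptake
# delays the approach to equilibrium (the lag the UD EBM quantifies).
```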

Figure 1: LOW and HIGH predictions of global-mean warming from UKDOE (1989), full lines, compared with observations from the HadCRUT4 data set, dots. The model projected changes are indistinguishable from linear. The linear trend line for the observations is also shown, dashed. Changes are relative to the 1985 through 1989 mean.

Similar projections to those in Figure 1, made with the same climate model, were also given in annexes to the 1990 and 1992 IPCC reports (Jenkins and Derwent 1990; Mitchell and Gregory 1992), but they used different projections of future greenhouse gas emissions. Mitchell and Gregory used emissions scenarios from a new and more comprehensive assessment of future emissions, produced under the auspices of the IPCC and referred to as the IS92 scenarios (Pepper et al. 1992).

As shown in Table 1, for four of the five cases considered here, the projected 2018 concentrations of CO2 and CH4 were higher than those eventually observed. The exception is the CO2 projection derived here from information in Hansen et al., which is remarkably close to what has been observed. However, the consequent projected warming rate, 0.27°C/decade, is much higher than observed. This is partly because of overestimates of changes in the concentrations of other greenhouse gases (see the result for methane in Table 1) but also because the Hansen et al. model had quite a high climate sensitivity (4.2°C).

 

                 Hansen 1988   Wigley 1989   IPCC 1990    IPCC 1992     WR 1992
Observed rate    0.18          0.18          0.17         0.17          0.17
Model best       0.27          0.32          0.27         0.26          0.18
Model range      0.15 to 0.32  0.25 to 0.43  Not stated   0.18 to 0.40  0.13 to 0.26
2018 CO2 conc.   404 (407)     419 (407)     421 (407)    431 (407)     419 (407)
2018 CH4 conc.   2314 (1857)   2132 (1857)   2407 (1857)  2009 (1857)   Not stated
Start year       1985          1985          1990         1990          1990


Table 1: Summary of results from early projections of global-mean temperature (GMT) change (warming rate). The first three rows give the rate of temperature change (°C/decade) from the stated start year (final row) through 2018. The remaining rows give the projected 2018 concentrations of CO2 (in ppm) and CH4 (in ppb), with the observed values in parentheses. WR stands for Wigley and Raper. Observed temperature trends are from HadCRUT4; concentrations are from NOAA ESRL.
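For clarity, the warming rates in Table 1 are linear (least-squares) trends over annual global-mean temperatures, expressed per decade. The snippet below illustrates the computation on synthetic data, not the actual HadCRUT4 series.

```python
import numpy as np

# Fit an ordinary least-squares trend to annual anomalies and express
# it per decade. The anomaly series here is made up for illustration.
years = np.arange(1990, 2019)                      # start year through 2018
rng = np.random.default_rng(0)
anomalies = 0.017 * (years - 1990) + rng.normal(0.0, 0.1, years.size)

slope_per_year = np.polyfit(years, anomalies, 1)[0]
print(f"Trend: {10.0 * slope_per_year:.2f} C/decade")  # ~0.17 by construction
```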

 

At this time, understanding of aerosol effects on climate was advancing rapidly. Through these developments, Sarah Raper and I were able to incorporate the effects of sulfate aerosol forcing into the latest version of our climate model (an improvement on STUGE called MAGICC; Meinshausen et al. 2011) and produce, for the first time, projections of global-mean temperature that allowed for sulfate aerosol changes (Wigley and Raper 1992). As in the 1992 IPCC report, we used the IS92 emissions scenarios, which included projections for SO2 emissions. Temperature projections from this paper are shown in Figure 2.
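As a rough illustration of how sulfate forcing can be tied to SO2 emissions, the sketch below follows the spirit of simple treatments of that era: direct forcing scaling linearly with emissions, and indirect (cloud) forcing scaling logarithmically. All numerical values here are assumptions for illustration, not the Wigley and Raper (1992) values.

```python
import numpy as np

# Hedged sketch of sulfate aerosol forcing as a function of SO2
# emissions. Reference emissions and 1990 forcing values are assumed.
E_1990 = 70.0       # anthropogenic SO2 emissions in 1990, TgS/yr (assumed)
E_NAT = 42.0        # natural background emissions, TgS/yr (assumed)
F_DIR_1990 = -0.3   # assumed direct forcing in 1990, W m^-2
F_IND_1990 = -0.8   # assumed indirect (cloud) forcing in 1990, W m^-2

def sulfate_forcing(e_so2):
    """Total sulfate aerosol forcing (W m^-2) for emissions e_so2 (TgS/yr)."""
    f_dir = F_DIR_1990 * e_so2 / E_1990                       # linear
    f_ind = F_IND_1990 * (np.log(1.0 + e_so2 / E_NAT)
                          / np.log(1.0 + E_1990 / E_NAT))     # logarithmic
    return f_dir + f_ind

print(sulfate_forcing(70.0))   # ~ -1.1 W m^-2 at 1990 emissions
print(sulfate_forcing(35.0))   # weaker (less negative) if emissions halve
```

The second print shows the behavior discussed next: falling SO2 emissions weaken the negative forcing, which acts as a relative warming.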

The model-observed agreement in Figure 2 looks remarkable, but it is partly the result of compensating errors. In the early 1990s, the available history of SO2 emissions showed a steady long-term increase, and projections into the twenty-first century assumed this increase would continue. The effect of these changes was to cause a cooling in the model, largely offsetting the warming error arising from overestimates of future greenhouse gas emissions. We now know, however, that SO2 emissions decreased after the 1980s (Klimont et al. 2013), leading to a warming rather than a cooling. So was the agreement in Figure 2 perhaps simply fortuitous?

Figure 2: Best-estimate projected global-mean warming from Wigley and Raper (1992, Fig. 5), full line, compared with observations from the HadCRUT4 data set, dots. The model projected changes are indistinguishable from linear. The linear trend line for the observations is also shown, dashed. Changes are relative to the 1990 through 1994 mean.

Emissions are clearly an important source of uncertainty in projections of GMT. This was the case back in 1989 and remains so today. But in the case of the 1992 simulations, the issue is more complicated. If we were to re-run these calculations now with the correct (i.e., observed) emissions, what would happen? First, the reduced greenhouse gas emissions would lead to reduced warming in the model. Second, the reduced SO2 emissions (and hence reduced aerosol loading in the atmosphere) would lead to increased warming in the model. The two corrections act in opposite directions, so the net effect on the global-mean temperature projections would be small; even with the corrected emissions, there would still be a good correspondence between the model results and observations. The “flaw” of compensating errors in the Figure 2 results noted above is actually a red herring.

Nevertheless, a range of uncertainties still bedevils attempts to quantify future climate change and to compare model projections with observations. Emissions uncertainties are still important for making future projections. Beyond these, the most important uncertainties are in the climate sensitivity, where the uncertainty range is about a factor of three with a central estimate of around 3.0°C, and in the magnitude of aerosol forcing, where the uncertainty range is around a factor of two. These uncertainties have not been reduced significantly over the past few decades.

These results from thirty years ago, peering with the models’ eyes into the future that is today, show that even then our understanding of climate change was robust enough to make predictions that were remarkably accurate. While UD EBMs still have a role to play, the three-dimensional atmospheric general circulation models of the 1980s have been replaced by models that include the full dynamical and thermodynamic properties of the oceans, coupled interactively to the atmosphere, the cryosphere, and the biosphere; these are now referred to as “Earth System Models.” Observational data sets have ballooned to encompass many more variables and have improved in spatial coverage and accuracy. And dozens, perhaps hundreds, of new tests have been carried out to confirm the reliability of these models.

The good news is that when properly assessed, projections made some thirty years ago have stood the test of time; the bad news is that in terms of action, we are still only scratching the surface of the responses needed to prevent, to quote from the 1992 United Nations Framework Convention on Climate Change, “dangerous anthropogenic interference with the climate system.” The real challenges still lie ahead.

 


References

  • Bryan, K., F.G. Komro, S. Manabe, et al. 1982. Transient climate response to increasing atmospheric carbon dioxide. Science 215: 56–58.
  • Hansen, J., I. Fung, A. Lacis, et al. 1988. Global climate changes as forecast by the Goddard Institute for Space Studies three-dimensional model. Journal of Geophysical Research 93: D8, 9341–9364.
  • Jenkins, G.J., and R.G. Derwent. 1990. Climatic consequences of emissions. Annex to Climate Change: The IPCC Scientific Assessment (J.T. Houghton, G.J. Jenkins, and J.J. Ephraums, eds.). Cambridge University Press, 329–339.
  • Jones, P.D., T.M.L. Wigley, and P.B. Wright. 1986. Global temperature variations between 1861 and 1984. Nature 322: 430–434.
  • Klimont, Z., S.J. Smith, and J. Cofala. 2013. The last decade of global anthropogenic sulfur dioxide: 2000–2011 emissions. Environmental Research Letters 8(014003): 1–6.
  • Manabe, S., and R.T. Wetherald. 1975. The effects of doubling the CO2 concentration on the climate of a General Circulation Model. Journal of the Atmospheric Sciences 32: 3–15.
  • Meinshausen, M., S.C.B. Raper, and T.M.L. Wigley. 2011. Emulating coupled atmosphere-ocean and carbon cycle models with a simpler model, MAGICC6 – Part 1: Model description and calibration. Atmospheric Chemistry and Physics 11: 1417–1456.
  • Mitchell, J.F.B., and J.M. Gregory. 1992. Climatic consequences of emissions and a comparison of IS92a and SA90. Annex to Climate Change 1992: The Supplementary Report to The IPCC Scientific Assessment (J.T. Houghton, B.A. Callander, and S.K. Varney, eds.), Cambridge University Press, 171–175.
  • Nicholls, N. 2019. 40 years ago, scientists predicted climate change, and hey, they were right. The Conversation (July).
  • Pepper, W.J., J.A. Leggett, R.J. Swart, et al. 1992. Emission Scenarios for the IPCC—an update: Background Documentation on Assumptions, Methodology and Results. U.S. EPA, Washington, D.C.
  • Santer, B.D., K.E. Taylor, T.M.L. Wigley, et al. 1996. A search for human influences on the thermal structure of the atmosphere. Nature 382: 39–46.
  • U.K. Dept. of the Environment. 1989. Seminar on Climate Change. My lecture was titled “Scientific Assessment of Climate Change and its Impacts.”
  • U.S. National Research Council. 1979. Carbon Dioxide and Climate: A Scientific Assessment. National Academies Press, Washington, D.C.
  • Weart, Spencer R. 2008. The Discovery of Global Warming: Revised and Expanded Edition. Harvard University Press.
  • Wigley, T.M.L., and S.C.B. Raper. 1992. Implications for climate and sea level of revised IPCC emissions scenarios. Nature 357: 293–300.
  • Wigley, T.M.L., and B.D. Santer. 2013. A probabilistic quantification of the anthropogenic component of 20th century global warming. Climate Dynamics 40: 1087–1102.
  • Wigley, T.M.L., T. Holt, and S.C.B. Raper. 1991. STUGE (An Interactive Greenhouse Model): User’s Manual. Norwich, U.K.: Climatic Research Unit.

Tom M.L. Wigley

Tom M.L. Wigley is a highly cited climate scientist, a contributor to IPCC reports on many occasions, and a former director of the U.K. Climatic Research Unit. He was recently named one of the world’s top ten climate change scientists. He is at the University of Adelaide, South Australia, and the National Center for Atmospheric Research, Boulder, Colorado.