My thoughts on claims made by Dr. Patrick Frank (SLAC) on the validity of climate model projections of global warming:
It is always useful to check past predictions against eventual observations. Below is the NASA GISTEMP observed global temperature (updated through 2016) overlaid on various projections of CO2-induced warming from calculations published in 1981 (Hansen et al. 1981). 2015 and 2016 are literally off the chart. This does not imply a higher equilibrium climate sensitivity than that represented by the dashed line (5.6 °C) because these calculations did not include the effects of anthropogenic increases in non-CO2 greenhouse gases. There are a number of other important caveats to this juxtaposition, such as Hansen’s model not allowing for unforced/internal variability, as well as differences between the assumed and actual growth rate of atmospheric CO2, etc. Nevertheless, it is an interesting comparison.
NASA released their 2016 global mean surface temperature data today. With this data point included, observations are now above the average climate model value for this point in time (using 1986-2005 as the baseline):
This graphic uses the RCP 4.5 emissions scenario for the models, but the divergence between RCP 4.5 and steeper emissions scenarios is not appreciable until the mid-21st century (see e.g. Figure 1 here).
We have published a new paper titled “Spread in the magnitude of climate model interdecadal global temperature variability traced to disagreements over high-latitude oceans”. Here is a brief summary:
Natural unforced variability in global mean surface air temperature (GMST) is of the same order of magnitude as current externally forced changes in GMST on decadal timescales. Thus, understanding the precise magnitude of unforced GMST variability is relevant both to the attribution of past climate changes to human causes and to the prediction of climate change on policy-relevant timescales.
Climate models could be useful for estimating the true magnitude of unforced GMST variability provided that they more or less converge on the same answer. Unfortunately, current models show substantial disagreement on the magnitude of natural GMST variability, highlighting a key uncertainty in contemporary climate science. This large model spread must be narrowed in the future if we are to have confidence that models can be trusted to give useful insights on natural variability.
Since it is known that unforced GMST variability is heavily influenced by tropical Pacific surface temperatures, it might be tempting to suppose that the large inter-model spread in the simulated magnitude of GMST variability is due to model disagreement in the amount of simulated tropical Pacific variability. Perhaps surprisingly, our study shows that this is not the case and that the spread in the magnitude of model-simulated GMST variability is linked much more strongly to model disagreements over high-latitude oceans. Our findings suggest that improving the simulation of air-sea interaction in these high-latitude ocean regions could narrow the range of simulated GMST variability, advance our fundamental understanding of natural variability, and appreciably improve our ability to forecast global warming on policy-relevant timescales.
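For readers curious how the magnitude of interdecadal GMST variability can be quantified from an unforced control run, here is a minimal Python sketch. The function name and the red-noise stand-in series are illustrative only, not the paper’s actual methodology:

```python
import numpy as np

def interdecadal_variability(gmst_annual, window=10):
    """Standard deviation of non-overlapping decadal means of an
    annual-mean GMST anomaly series (one simple variability metric)."""
    n = gmst_annual.size // window * window
    decadal_means = gmst_annual[:n].reshape(-1, window).mean(axis=1)
    return decadal_means.std(ddof=1)

# Toy example: AR(1) "red noise" stands in for a model control run.
rng = np.random.default_rng(0)
series = np.zeros(2000)
for t in range(1, series.size):
    series[t] = 0.7 * series[t - 1] + rng.normal(scale=0.1)
print(f"interdecadal sigma: {interdecadal_variability(series):.3f} K")
```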
We have recently published a study in Geophysical Research Letters titled “The necessity of cloud feedback for a basin-scale Atlantic Multidecadal Oscillation”.
The Atlantic Multidecadal Oscillation (AMO) – a basin-scale coherent oscillation of sea surface temperatures over the North Atlantic – is thought to be one of the climate system’s most important modes of natural variability, affecting everything from drought to hurricane activity to natural fluctuations in global temperature. Traditionally, the basin-scale AMO has been explained as a direct consequence of variability in the Atlantic Ocean’s meridional overturning circulation (AMOC). In contrast, our study identifies atmospheric processes, specifically cloud feedback, as a necessary component for the existence of a basin-scale AMO, thus amending the canonical view of the AMO as a signature directly and solely attributable to oceanic processes.
We have newly published research that shows in detail why the Earth’s temperature remains stable when it is not pushed by outside forcings. Below is a summary in plain English. For a more technical discussion see here.
We have newly published research that has implications for why global mean surface air temperature (GMT) is stable in the absence of external radiative forcings.
One of the central differences between weather prediction and climate projection is that the former is considered to be an “initial value problem” while the latter is considered to be a “forced boundary condition problem” (1). This dichotomy implies that weather is subject to chaotic variability and thus is fundamentally unpredictable beyond several weeks, but that climate can be projected into the future with some confidence as long as changes in the boundary conditions of the system are known (2). For GMT, the fundamental boundary conditions are the system’s basic radiative properties, i.e., the incoming solar radiation, planetary albedo, and the atmospheric infrared transmissivity governed by greenhouse gas concentrations (3).
In reality, however, the forced boundary condition paradigm is complicated by nonlinear, two-way interactions within the climate system. In particular, planetary albedo and the greenhouse effect are themselves complex functions of GMT and its spatial distribution (4). Therefore, if GMT is to be projected to within some fairly narrow range for a given change in boundary conditions (i.e., an increase in the greenhouse effect), it must be the case that the climate system can damp any unforced (internally generated) GMT perturbations. This idea becomes clearer when GMT evolution is expressed as a perturbation away from an equilibrium value set by the boundary conditions (ΔT = GMT − GMTequilibrium). ΔT change can be expressed as the sum of forcings (F), feedbacks (λΔT), and heat fluxes between the upper ocean’s mixed layer and the ocean below the mixed layer (Q),

C·d(ΔT)/dt = F + λΔT + Q.
In this formulation, F often represents external radiative forcings (e.g., changes in well-mixed greenhouse gases, aerosol loading, incoming solar radiation, etc.); however, here we are concerned with the stability of ΔT in the absence of external forcings, so F represents unforced energy imbalances at the top of the atmosphere (TOA) (5). C is the effective heat capacity of the land/atmosphere/ocean-mixed-layer system, λ is the feedback parameter (the reciprocal of the climate sensitivity parameter), and λΔT represents the radiative fast-feedbacks (positive downward) (6-11).
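To make the terms concrete, here is a minimal forward-Euler integration of the equation above in Python. All parameter values are illustrative stand-ins, not those used in the paper:

```python
import numpy as np

# Toy integration of C d(dT)/dt = F + lam*dT + Q with white-noise F and Q.
C = 1000.0 * 4186.0 * 50.0   # heat capacity of ~50 m ocean mixed layer (J m^-2 K^-1)
lam = -1.2                   # net feedback parameter (W m^-2 K^-1); negative => damping
dt = 3.15e7                  # one year in seconds
rng = np.random.default_rng(1)

dT = np.zeros(500)           # temperature perturbation from equilibrium (K)
for t in range(dT.size - 1):
    F = rng.normal(scale=0.5)   # unforced TOA energy imbalance (W m^-2)
    Q = rng.normal(scale=0.5)   # mixed-layer/deep-ocean heat flux (W m^-2)
    dT[t + 1] = dT[t] + (dt / C) * (F + lam * dT[t] + Q)

print(f"std(dT) = {dT.std():.2f} K")  # bounded because lam < 0 supplies a restoring force
```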
It is accepted that ΔT should be stable in the long run mostly because of the direct blackbody response of outgoing longwave radiation to ΔT change, which is often referred to as the Planck Response,

λPlanck = −4σTe³,
where Te is the effective radiating temperature of the Earth (≈255 K) and σ is the Stefan-Boltzmann constant (12). The negative sign indicates increased energy loss by the climate system with warming. λPlanck is typically incorporated into λ as the reference sensitivity, e.g., λ = (1−fa)λPlanck, where fa denotes the feedback factor sum of the fast-feedbacks in the system (i.e., water vapor, lapse rate, surface albedo, and cloud feedbacks) (7). Net positive fast-feedbacks imply fa > 0. Therefore, larger positive fast-feedbacks imply a less negative λ and a climate system that is less effective at damping ΔT anomalies. To make this idea more explicit, the equation above can be discretized and rearranged to obtain,

ΔTt+1 = θ·ΔTt + (Δt/C)(Ft + Qt),  with θ = 1 + λΔt/C.
Now ΔT evolution is explicitly represented as a first-order autoregressive function. In this form, θ is the autoregressive parameter that can be thought of as a quantitative representation of the restoring force, or stability of ΔT. Thus it becomes evident that the relative magnitude of the fast-feedbacks in the system will play a central role in our ability to consider GMT as a forced boundary condition problem. In particular, when θ is positive, but << 1, the system experiences a strong restoring force and ΔT is heavily damped. However, when the feedbacks in the climate system nearly overwhelm the Planck Response (fa → 1, θ → 1), the restoring force for ΔT disappears. With no restoring force, ΔT would be free to evolve in a chaotic and unpredictable manner comparable to Brownian motion or a “random walk” (13). In this case, GMT could be considered to be “intransitive” (14) and it might be better categorized as an initial value problem than as a forced boundary condition problem.
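The dependence of the restoring force on the feedback factor sum can be illustrated numerically. In this sketch (again with illustrative parameter values, not the paper’s), λPlanck is computed from the formula above and the AR(1) form is simulated for a modest fa and for fa approaching 1:

```python
import numpy as np

sigma, Te = 5.670e-8, 255.0          # Stefan-Boltzmann constant; radiating temperature (K)
lam_planck = -4.0 * sigma * Te**3    # Planck Response, approx. -3.76 W m^-2 K^-1

C = 1000.0 * 4186.0 * 50.0           # ~50 m mixed-layer heat capacity (J m^-2 K^-1)
dt = 3.15e7                          # one year in seconds
rng = np.random.default_rng(0)

for fa in (0.3, 0.99):               # feedback factor sum (illustrative values)
    theta = 1.0 + (1.0 - fa) * lam_planck * dt / C
    dT = np.zeros(1000)
    for t in range(dT.size - 1):     # AR(1): heavily damped vs. near-random-walk
        dT[t + 1] = theta * dT[t] + rng.normal(scale=0.05)
    print(f"fa={fa:.2f}  theta={theta:.3f}  std(dT)={dT.std():.2f} K")
```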
Consequently, most of modern climate science rests critically on the notion that the Planck Response overwhelms positive radiative fast-feedbacks in the climate system. In our paper, however, we document that at the local level, positive fast-feedbacks actually overwhelm the Planck Response over most of the surface of the Earth. The objective of the paper was to investigate how this finding can be reconciled with an apparently stable GMT at the global spatial scale.
We resolved this apparent paradox by showing that an anomalously warm Earth tends to restore equilibrium in complex and previously unappreciated ways. Our study shows in detail why global temperature should be stable in the absence of external forcings and therefore why global temperature does not evolve chaotically in the long run. This work thus explains why large, sustained changes in global temperature require external radiative forcings like increases in greenhouse gas concentrations.
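A simplified version of the local-versus-global diagnostic can be sketched as a regression of the anomalous net downward TOA energy flux onto temperature anomalies. The function below is an illustrative re-implementation, not the paper’s actual code:

```python
import numpy as np

def feedback_slope(t_anom, toa_net_down_anom):
    """OLS slope of anomalous net-downward TOA flux onto temperature
    anomalies (W m^-2 K^-1). Negative => anomalies are radiatively damped
    (restoring force); positive => local fast-feedbacks overwhelm the
    Planck Response."""
    t = t_anom - t_anom.mean()
    n = toa_net_down_anom - toa_net_down_anom.mean()
    return (t * n).sum() / (t * t).sum()

# Applied grid point by grid point (local) vs. to area-weighted means (global):
# local_slope[i, j] = feedback_slope(T[:, i, j], N[:, i, j])
# global_slope      = feedback_slope(T_global, N_global)
```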
We focused our analysis on 27 Atmosphere-Ocean General Circulation Models (AOGCMs) from the Coupled Model Intercomparison Project – Phase 5 (CMIP5) (15). We utilized unforced preindustrial control runs which, by definition, include no external radiative forcings, so all variability emerged spontaneously from the internal dynamics of the modeled climate system. We used the first 200 years of each AOGCM’s preindustrial control run, and we linearly detrended all analyzed variables so that our analysis was not contaminated by possibly unphysical model drift resulting from insufficient model spin-up. Because this detrending procedure forces the AOGCM runs to be stable over the 200-year period, we are implicitly studying the restoring force for ΔT relative to any 200-year trend present in the control runs. Consequently, we are limited to studying the physical explanation for the stability of ΔT at timescales shorter than 200 years.
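The detrending step itself is straightforward; a minimal version (assuming annual-mean series as NumPy arrays) might look like:

```python
import numpy as np

def linear_detrend(series):
    """Remove the least-squares linear trend (guards against spurious
    model drift from insufficient spin-up)."""
    t = np.arange(series.size)
    slope, intercept = np.polyfit(t, series, 1)
    return series - (slope * t + intercept)

# e.g., applied to the first 200 years of a preindustrial control run:
# gmst_detrended = linear_detrend(gmst_annual[:200])
```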
For more information see the AGU poster or the paper:
Brown, P.T., W. Li, J.H. Jiang, H. Su (2016) Unforced surface air temperature variability and its contrasting relationship with the anomalous TOA energy flux at local and global spatial scales. Journal of Climate, doi:10.1175/JCLI-D-15-0384.1
This is an update to our 2015 Scientific Reports paper: “Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise”. The paper used a novel statistical estimate of unforced variability that was derived from reconstructed and instrumental surface temperature records. We used our statistical estimate of unforced variability to aid in our interpretation of recently observed temperature variability (more info here).
Our paper used global temperature data through 2013 since that was the most recent year in the major global temperature datasets at the time that the paper was submitted. Below I update Figures 2 and 3 from the paper, incorporating the back-to-back record-breaking warmth of 2014 and 2015.
Figure 2 updated to include 2014 and 2015.
Figure 3 updated to include 2014 and 2015.
The summary section of our paper stated:
We find that the interdecadal variability in the rate of global warming over the 20th century (i.e., acceleration from ~1910–1940, deceleration until ~1975, acceleration until ~2000) is within the 2.5–97.5% EUN, even if the forced signal is represented as a linear trend, indicating that this observed interdecadal variability in the rate of warming does not necessarily require interdecadal variability in the rate-of-increase of the forced signal.
This statement was about 20th century temperature and thus updates for 2014 and 2015 are somewhat irrelevant. Nevertheless, the updated Figure 2 (bottom left panel) indicates that recent warmth is just now starting to emerge from a linear-trend null hypothesis. This is not to say that a linear trend is the most likely representation of the forced component of variability – it just means that the linear trend forced component can’t quite be ruled out. This is now starting to change as observations move above the 97.5th percentile of the unforced range.
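For readers who want the flavor of this test, here is a hypothetical sketch: subtract a candidate forced signal (here a linear trend) from observations and ask whether the residuals leave the 2.5-97.5% bounds of an ensemble of empirical unforced-noise (EUN) series. All names are illustrative, not the paper’s code:

```python
import numpy as np

def outside_eun(observed, forced_signal, eun_draws, lo=2.5, hi=97.5):
    """Flag years where (observed - forced signal) falls outside the
    lo-hi percentile envelope of the unforced-noise ensemble.
    eun_draws: 2-D array of shape (realizations, years)."""
    residual = observed - forced_signal
    lower = np.percentile(eun_draws, lo, axis=0)
    upper = np.percentile(eun_draws, hi, axis=0)
    return (residual < lower) | (residual > upper)

# Linear-trend null hypothesis for the forced signal:
# forced = np.polyval(np.polyfit(years, observed, 1), years)
# exceedances = outside_eun(observed, forced, eun_draws)
```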
The summary section also stated:
We also find that recently observed GMT values, as well as trends, are near the lower bounds of the EUN for a forced signal corresponding to the RCP 8.5 emissions scenario but that observations are not inconsistent with a forced signal corresponding to the RCP 6.0 emissions scenario.
Note that we were not making a forecast about how likely the RCP 8.5 emissions scenario was. Instead, we were using the multi-model mean warming associated with the RCP 8.5 emissions scenario (out to 2050) as a representation of the quickest rate of forced warming that could conceivably be occurring over the recent past (see here and here for further clarification).
Figure 3 indicates that, with the updated data, no trend over the past 25 years falls outside of the 5-95% range for any of the scenarios. The trends over the most recent ~5 years are higher than average for all the scenarios but still well within the range of unforced variability. Over the past 10-20 years, observed trends have been on the lower end of the RCP 8.5 range but closer to the middle of the RCP 6.0 range. This indicates that over the past 10-20 years it may be more likely that we have been on an RCP 6.0-like warming trajectory than an RCP 8.5-like warming trajectory. This is similar to the conclusion of the original study.
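The trailing-trend comparison can be sketched in a few lines. The helper below computes least-squares trends over the most recent N years; the scenario ranges themselves would come from the model ensemble and are not computed here:

```python
import numpy as np

def trailing_trends(values, years, lengths=range(5, 26)):
    """Least-squares trend (K per year) over the most recent N years,
    for each N in `lengths`."""
    return {n: np.polyfit(years[-n:], values[-n:], 1)[0] for n in lengths}

# e.g., compare each trailing trend to the 5-95% range of same-length
# trends simulated under RCP 6.0 and RCP 8.5:
# obs_trends = trailing_trends(gmst_obs, obs_years)
```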