Combining Physical and Statistical Models in Order to Narrow Uncertainty in Projected Global Warming

Below is a presentation I gave on our recent research published in Nature, titled “Greater future global warming inferred from Earth’s recent energy budget”. I gave the talk at the Stanford University Department of Electrical Engineering’s Computer Systems Colloquium (EE380), so it is intended for a technically savvy but non-climate-scientist audience.


Global temperature: 2018 likely to be colder than 2017, record high possible in 2019

We are working on a new statistical method for predicting interannual variability in global mean surface air temperature (GMST). The method uses the preceding few years of globally gridded temperature anomalies and Partial Least Squares regression to predict the GMST of the following couple of years. See our recent Nature paper for information on applying Partial Least Squares regression in a different climate context.

The plot below shows our forecast for 2017 (using no data from 2017) compared to the just-released 2017 value for the NASA GISTEMP dataset. We also show our forecast for 2018 and 2019 with 68% confidence intervals. The method suggests that 2018 is likely to be colder than 2017 but record warmth is ‘more-likely-than-not’ in 2019.

[Figure: GMST forecast for 2018 and 2019 with 68% confidence intervals, alongside the 2017 forecast and the observed 2017 NASA GISTEMP value]

We are still in the midst of sensitivity tests, the method is unpublished, and it has not undergone peer review. Thus, these results should be considered part of a ‘beta version’ of our method.

Another caveat is that the method cannot possibly predict events like large volcanic eruptions, which would drastically alter any annual GMST anomaly and invalidate our forecast.


AGU Talk on potential changes in temperature variability with warming

Below is my talk from the 2017 AGU fall meeting. This talk is on a paper we published in Nature Climate Change about potential changes in natural unforced variability of global mean surface air temperature (GMST) under global warming.

Background

Unforced GMST variability is of the same order of magnitude as current externally forced changes in GMST on decadal timescales. Thus, understanding the precise magnitude of, and the physical mechanisms responsible for, unforced GMST variability is relevant both to the attribution of past climate changes to human causes and to the prediction of climate change on policy-relevant timescales.

Much research on unforced GMST variability has used modeling experiments run under “preindustrial control” conditions or has used observed/reconstructed GMST variability associated with cooler past climates to draw conclusions about contemporary or future GMST variability. These studies implicitly assume that the characteristics of GMST variability will remain the same as the climate warms. In our research, we demonstrate in a climate model that this assumption is likely to be flawed. Not only do we show that the magnitude of GMST variability declines dramatically with warming in our experiment, but we also show that the physical mechanisms responsible for this variability become fundamentally altered. These results indicate that the ubiquitous “preindustrial control” climate modeling studies may be limited in their relevance for the study of current or future climate variability.

Talk


Greater future global warming (still) inferred from Earth’s recent energy budget

We recently published a paper in Nature in which we leveraged observations of the Earth’s radiative energy budget to statistically constrain 21st-century climate model projections of global warming. We found that observations of the Earth’s energy budget allow us to infer generally greater central estimates of future global warming and smaller spreads about those central estimates than the raw model simulations indicate. More background on the paper can be obtained from our blog post on the research.

Last week, Nic Lewis published a critique of our work on several blogs, titled “A closer look shows global warming will not be greater than we thought”. We welcome scientifically grounded critiques of our work since this is the fundamental way in which science advances, and in this spirit we would like to thank Nic Lewis for his appraisal. However, we find Lewis’ central criticisms to lack merit. As we elaborate below, his arguments do not undermine the findings of the study.

Brief background

Under the ‘emergent constraint’ paradigm, statistical relationships between model-simulated features of the current climate system (predictor variables), along with observations of those features, are used to constrain a predictand. In our work, the predictand is the magnitude of future global warming simulated by climate models.

We chose predictor variables that were as fundamental and comprehensive as possible while still offering the potential for a straightforward physical connection to the magnitude of future warming. In particular, we chose the full global spatial distribution of fundamental components of Earth’s top-of-atmosphere energy budget: its outgoing (that is, reflected) shortwave radiation (OSR), outgoing longwave radiation (OLR) and net downward energy imbalance (N). We investigated three currently observable attributes of these variables: the mean climatology, the magnitude of the seasonal cycle, and the magnitude of monthly variability. We chose these attributes because previous studies have indicated that the behavior of the Earth’s radiative energy budget on each of these timescales can be used to infer information on fast feedbacks in the climate system. The combination of the three attributes and the three variables (OSR, OLR and N) results in a total of nine global “predictor fields”. See FAQ #3 of our previous blog post for more information on our choice of predictor variables.

We used Partial Least Squares Regression (PLSR) to relate our predictor fields to predictands of future global warming. In PLSR we can use each of the nine predictor fields individually, or we can use all nine predictor fields simultaneously (collectively). We quantified our main results with “Prediction Ratio” and “Spread Ratio” metrics. The Prediction Ratio is the ratio of our observationally-informed central estimate of warming to the previous raw model average and the Spread Ratio is the ratio of the magnitude of our constrained spread to the magnitude of the raw model spread. Prediction Ratios greater than 1 suggest greater future warming and Spread Ratios below 1 suggest a reduction in spread about the central estimate.
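To make these two metrics concrete, here is a minimal sketch of how they could be computed; all of the numbers below are made up for illustration and are not values from the paper.

```python
import numpy as np

# Toy inputs, for illustration only: raw model projections of end-of-century
# warming and a hypothetical observationally informed central estimate with
# its constrained spread.
raw_warming = np.array([3.2, 3.6, 3.9, 4.3, 4.7, 5.1, 5.5, 5.9])   # deg C
constrained_mean, constrained_std = 4.9, 0.6                        # hypothetical

prediction_ratio = constrained_mean / raw_warming.mean()   # > 1: greater central estimate
spread_ratio = constrained_std / raw_warming.std()         # < 1: narrower spread

print(f"Prediction Ratio: {prediction_ratio:.2f}, Spread Ratio: {spread_ratio:.2f}")
```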

Lewis’ criticism

Lewis’ post expresses general skepticism of climate models and the ‘emergent constraint’ paradigm. There is much to say about both of these topics but we won’t go into them here. Instead, we will focus on Lewis’ criticism that applies specifically to our study.

We showed results associated with each of our nine predictor fields individually but we chose to emphasize the results associated with the influence of all of the predictor fields simultaneously. Lewis suggests that rather than focusing on the simultaneous predictor field, we should have focused on the results associated with the single predictor field that showed the most skill: The magnitude of the seasonal cycle in OLR. Lewis goes further to suggest that it would be useful to adjust our spatial domain in an attempt to search for an even stronger statistical relationship. Thus, Lewis is arguing that we actually undersold the strength of the constraints that we reported, not that we oversold their strength.

This is an unusual criticism for this type of analysis. Typically, criticisms in this vein would run in the opposite direction. Specifically, studies are often criticized for highlighting the single statistical relationship that appears to be the strongest while ignoring or downplaying weaker relationships that could have been discussed. Studies are correctly criticized for this tactic because the more relationships that are screened, the more likely it is that a researcher will be able to find a strong statistical association by chance, even if there is no true underlying relationship. Thus, we do not agree that it would have been more appropriate for us to highlight the results associated with the predictor field with the strongest statistical relationship (smallest Spread Ratio), rather than the results associated with the simultaneous predictor field. However, even if we were to follow this suggestion, it would not change our general conclusions regarding the magnitude of future warming.

We can use our full results, summarized in the table below (all utilizing 7 PLSR components), to look at how different choices regarding the selection of predictor fields would affect our conclusions.

[Table: Spread Ratios and Prediction Ratios for each individual predictor field and for the simultaneous predictor field, for each RCP predictand (predictors standardized at the level of the predictor field; 7 PLSR components)]

Lewis’ post makes much of the fact that highlighting the results associated with the ‘magnitude of the seasonal cycle in OLR’, rather than the simultaneous predictor field, would reduce our central estimate of future warming in RCP8.5 from +14% to +6%. This is true but it is only one, very specific example. Asking more general questions gives a better sense of the big picture:

1) What is the mean Prediction Ratio across the end-of-century RCP predictands, if we use the OLR seasonal cycle predictor field exclusively? It is 1.15, implying a 15% increase in the central estimate of warming.

2) What is the mean Prediction Ratio across the end-of-century RCP predictands, if we always use the individual predictor field that had the lowest Spread Ratio for that particular RCP (boxed values)? It is 1.13, implying a 13% increase in the central estimate of warming.

3) What is the mean Prediction Ratio across the end-of-century RCP predictands, if we just average together the results from all the individual predictor fields? It is 1.16, implying a 16% increase in the central estimate of warming.

4) What is the mean Prediction Ratio across the end-of-century RCP predictands, if we always use the simultaneous predictor field? It is 1.15, implying a 15% increase in the central estimate of warming.

One point that is worth making here is that we do not use cross-validation in the multi-model average case (the denominator of the Spread Ratio). Each model’s own value is included in the multi-model average which gives the multi-model average an inherent advantage over the cross-validated PLSR estimate. We made this choice to be extra conservative but it means that PLSR is able to provide meaningful Prediction Ratios even when the Spread Ratio is near or slightly above 1. We have shown that when we supply the PLSR procedure with random data, Spread Ratios tend to be in the range of 1.1 to 1.3 (see FAQ #7 of our previous blog post, and Extended Data Fig. 4c of the paper). Nevertheless, it may be useful to ask the following question:

5) What is the mean Prediction Ratio across the end-of-century RCP predictands, if we average together the results from only those individual predictor fields with spread ratios below 1? It is 1.15, implying a 15% increase in the central estimate of warming.

So, all five of these general methods produce about a 15% increase in the central estimate of future warming.

Lewis also suggests that our results may be sensitive to the choice of standardization technique. We standardized the predictors at the level of the predictor field because we wanted to retain information on across-model differences in the spatial structure of the magnitude of the predictor variables. However, we can rerun the analysis with everything standardized at the grid level and ask the same questions as above.

[Table: as above, but with predictors standardized at the grid level]

1b) What is the mean Prediction Ratio across the end-of-century RCPs if we use the OLR seasonal cycle predictor field exclusively? It is 1.15, implying a 15% increase in the central estimate of warming.

2b) What is the mean Prediction Ratio across the end-of-century RCPs if we always use the single predictor field that had the lowest Spread Ratio (boxed values)? It is 1.12, implying a 12% increase in the central estimate of warming.

3b) What is the mean Prediction Ratio across the end-of-century RCPs if we just average together the results from all the predictor fields? It is 1.14, implying a 14% increase in the central estimate of warming.

4b) What is the mean Prediction Ratio across the end-of-century RCPs if we always use the simultaneous predictor field? It is 1.14, implying a 14% increase in the central estimate of warming.

5b) What is the mean Prediction Ratio across the end-of-century RCP predictands if we average together the results from only those individual predictor fields with Spread Ratios below 1? It is 1.14, implying a 14% increase in the central estimate of warming.

Conclusion

There are several reasonable ways to summarize our results and they all imply greater future global warming in line with the values we highlighted in the paper. The only way to argue otherwise is to search out specific examples that run counter to the general results.


Appendix: Example using synthetic data

Despite the fact that our results are robust to various methodological choices, it is useful to expand upon why we used the simultaneous predictor instead of the particular predictor that happened to produce the lowest Spread Ratio on any given predictand. The general idea can be illustrated with an example using synthetic data in which the precise nature of the predictor-predictand relationships is defined ahead of time. For this purpose, I have created synthetic data with the same dimensions as the data discussed in our study and in Lewis’ blog post:

1) A synthetic predictand vector of 36 “future warming” values corresponding to imaginary output from 36 climate models. In this case, the “future warming” values are just 36 random numbers pulled from a Gaussian distribution.

2) A synthetic set of nine predictor fields (37 latitudes by 72 longitudes) associated with each of the 36 models. Each model’s nine synthetic predictor fields start with that model’s predictand value entered at every grid location. Thus, at this preliminary stage, every location in every predictor field is a perfect predictor of future warming. That is, the across-model correlation between the predictor and the “future warming” predictand is 1 and the regression slope is also 1.

The next step in creating the synthetic predictor fields is to add noise in order to obscure the predictor-predictand relationship somewhat. The first level of noise is a spatially correlated field of weighting factors for each of the nine predictor maps. These weighting-factor maps randomly enhance or damp the local magnitude of the map’s values (weighting factors can be positive or negative). After these weighting factors have been applied, every location in every predictor field still has a perfect across-model correlation (or perfect negative correlation) between the predictor and predictand, but the regression slopes vary across space according to the magnitude of the weighting factors. The second level of noise is a set of spatially correlated fields of random numbers that are specific to each of the 9 × 36 = 324 predictor maps. At this point, everything is standardized to unit variance.
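For readers who want to experiment with this setup themselves, a rough sketch of the construction is below. It is not the exact script used for this post; the smoothing scale of the spatially correlated noise and the random seed are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
n_fields, n_models, n_lat, n_lon = 9, 36, 37, 72

# Synthetic predictand: "future warming" for 36 imaginary models
y = rng.standard_normal(n_models)

def correlated_field(shape, sigma=5):
    """Spatially correlated random field: Gaussian-smoothed white noise."""
    return gaussian_filter(rng.standard_normal(shape), sigma=sigma)

X = np.empty((n_fields, n_models, n_lat, n_lon))
for f in range(n_fields):
    # Spatially correlated weighting factors (can be positive or negative)
    weights = correlated_field((n_lat, n_lon))
    for m in range(n_models):
        # Start from a perfect predictor (the predictand at every grid point),
        # scale it by the weighting factors, then add map-specific correlated noise
        X[f, m] = y[m] * weights + correlated_field((n_lat, n_lon))

# Standardize each grid point to unit variance across models
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
```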

The synthetic data’s predictor-predictand relationship can be summarized in the plot below which shows the local across-model correlation coefficient (between predictor and predictand) for each of the nine predictor fields. These plots are similar to the type of thing that you would see using the real model data that we used in our study. Specifically, in both cases, there are swaths of relatively high correlations and anti-correlations with plenty of low-correlation area in between. All these predictor fields were produced the same way and the only differences arise from the two layers of random noise that were added. Thus, we know that any apparent differences between the predictor fields arose by random chance.

[Figure: local across-model correlation coefficient between predictor and predictand for each of the nine synthetic predictor fields]

Next, we can feed this synthetic data into the same PLSR procedure that we used in our study to see what it produces. The Spread Ratios are shown in the bar graphs below. Spread Ratios are shown for each of the nine predictor fields individually as well as for the case where all nine predictor fields are used simultaneously. The top plot shows results without the use of cross-validation while the bottom plot shows results with the use of cross-validation.

[Figure: Spread Ratios for each synthetic predictor field individually and for all nine used simultaneously, without cross-validation (top) and with cross-validation (bottom)]

In the case without cross-validation, there is no guard against over-fitting. Thus, PLSR is able to utilize the many degrees of freedom in the predictor fields to create coefficients that fit predictors to the predictand exceptionally well. This is why the Spread Ratios are so small in the top bar plot. The mean Spread Ratio for the nine predictor fields in the top bar plot is 0.042, implying that the PLSR procedure was able to reduce the spread of the predictand by about 96%. Notably, using all the predictor fields simultaneously results in a three-orders-of-magnitude smaller Spread Ratio than using any of the predictor fields individually. This indicates that when there is no guard against over-fitting, much stronger relationships can be achieved by providing the PLSR procedure with more information.

However, PLSR is more than capable of over-fitting predictors to predictands and thus these small Spread Ratios are not to be taken seriously. In our work, we guard against over-fitting by using cross-validation (see FAQ #1 of our blog post). The Spread Ratios for the synthetic data using cross-validation are shown in the lower bar graph in the figure above. It is apparent that cross-validation makes a big difference. With cross-validation, the mean Spread Ratio across the nine individual predictor fields is 0.8, meaning that the average predictor field could help reduce the spread in the predictand by about 20%. Notably, a lower Spread Ratio of 0.54 is achieved when all nine predictor maps are used collectively (a 46% reduction in spread). Since there is much redundancy across the nine predictor fields, the simultaneous predictor field doesn’t increase skill very drastically, but it is still better than the average of the individual predictor fields (this is a very consistent result when the entire exercise is re-run many times).

Importantly, we can even see that one particular predictor field (predictor field 2) achieved a lower Spread Ratio than the simultaneous predictor field. This brings us to the central question: Is predictor field 2 particularly special or inherently more useful as a predictor than the simultaneous predictor field? We created these nine synthetic predictor fields specifically so that they all contained roughly the same amount of information, and any differences that arose came about simply by random chance. There is an element of luck at play because the number of models is small. Thus, cross-validation can produce appreciable Spread Ratio variability from predictor to predictor simply by chance. Combining the predictors reduces the Spread Ratio, but only marginally due to large redundancies in the predictors.

We apply this same logic to the results from our paper. As we stated above, our results showed that the simultaneous predictor field for the RCP 8.5 scenario shows a Spread Ratio of 0.67. Similar to the synthetic data case, eight of the nine individual predictor fields yielded Spread Ratios above this value but a single predictor field (the OLR seasonal cycle) yielded a smaller Spread Ratio. Lewis’ post argues that we should focus entirely on the OLR seasonal cycle because of this. However, just as in the synthetic data case, our interpretation is that the OLR seasonal cycle predictor may have just gotten lucky and we should not take its superior skill too seriously.


Greater future global warming inferred from Earth’s recent energy budget

We have a paper out in Nature titled “Greater future global warming inferred from Earth’s recent energy budget”.

The Carnegie press release can be found here and coverage from the Washington Post can be found here.

A video abstract summarizing the study is below.

The study addresses one of the key questions in climate science: How much global warming should we expect for a given increase in the atmospheric concentration of greenhouse gases?

One strategy for attempting to answer this question is to use mathematical models of the global climate system called global climate models. Basically, you can simulate an increase in greenhouse gas concentrations in a climate model and have it calculate, based on our best physical understanding of the climate system, how much the planet should warm. There are somewhere between 30 and 40 prominent global climate models and they all project different amounts of global warming for a given change in greenhouse gas concentrations. Different models project different amounts of warming primarily because there is not a consensus on how to best model many key aspects of the climate system.

To be more specific, if we were to assume that humans will continue to increase greenhouse gas emissions substantially throughout the 21st century (the RCP8.5 future emissions scenario), climate models tell us that we can expect anywhere from about 3.2°C to 5.9°C (5.8°F to 10.6°F) of global warming above pre-industrial levels by 2100. This means that for identical changes in greenhouse gas concentrations (more technically, identical changes in radiative forcing), climate models simulate a range of global warming that differs by almost a factor of 2.

The primary goal of our study was to narrow this range of model uncertainty and to assess whether the upper or lower end of the range is more likely. We utilize the idea that the models that are going to be the most skillful in their projections of future warming should also be the most skillful in other contexts like simulating the recent past. Thus, if there is a relationship between how well models simulate the recent past and how much warming models simulate in the future, then we should be able to use this relationship, along with observations of the recent past, to narrow the range of future warming projections (this general technique falls under the “emergent constraint” paradigm, see e.g., Hall and Qu [2006] or Klein and Hall [2015]). The principal theme here is that models and observations together give us a more complete picture of reality than models can give us alone.
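To make the emergent-constraint idea concrete, the sketch below shows its simplest form: an across-model regression of future warming on a single present-day metric, evaluated at a hypothetical observed value. All of the numbers are made up, and our actual study uses full global fields and Partial Least Squares Regression rather than a single scalar predictor.

```python
import numpy as np
from scipy.stats import linregress

# Toy across-model data: one observable present-day metric per model and
# that model's simulated end-of-century warming (all values are invented).
predictor = np.array([0.8, 1.1, 0.9, 1.3, 1.0, 1.2, 0.7, 1.4])   # recent-past metric
warming   = np.array([3.4, 4.1, 3.7, 4.8, 3.9, 4.5, 3.2, 5.0])   # future warming, deg C

fit = linregress(predictor, warming)   # across-model relationship
observed = 1.15                        # hypothetical observed value of the metric

constrained_estimate = fit.intercept + fit.slope * observed
print(f"raw model mean: {warming.mean():.2f} C, "
      f"observationally informed estimate: {constrained_estimate:.2f} C")
```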

So, what variables are most appropriate to use to evaluate climate models in this context? Global warming is fundamentally a result of a global energy imbalance at the top of the atmosphere so we chose to assess models in their ability to simulate various aspects of the Earth’s top-of-atmosphere energy budget. We used three variables in particular: reflected solar radiation, outgoing infrared radiation, and the net energy balance. Also, we used three attributes of these variables: their average (AKA climatological) values, the average magnitude of their seasonal variability and the average magnitude of their month-to-month variability. These three variables and three attributes combine to make nine features of the climate system that we used to evaluate the climate models (see below for more information on our decision to use these nine features).

We found that there is indeed a relationship between the way that climate models simulate these nine features over the recent past and how much warming they simulate in the future. Importantly, models that match observations the best over the recent past tend to simulate more 21st-century warming than the average model. This indicates that we should expect greater warming than previously calculated for any given emissions scenario, or that we need to reduce greenhouse gas emissions more than previously thought to achieve any given temperature stabilization target.

Using the steepest future emissions scenario as an example (the RCP8.5 emissions scenario), the figure below compares the raw model projections used by the Intergovernmental Panel on Climate Change to our projections that incorporate information from observations.

[Figure: comparison of raw model projections with observationally informed projections of end-of-century warming under RCP8.5 (Figure 2d of Brown and Caldeira, 2017, Nature)]

It is also noteworthy that the observationally-informed best-estimate for end-of-21st-century warming under the RCP 4.5 scenario is approximately the same as the raw best estimate for the RCP 6.0 scenario. This indicates that even if society were to decarbonize at a rate consistent with the RCP 4.5 pathway (which equates to ~800 gigatonnes less cumulative CO2 emissions than the RCP 6.0 pathway), we should still expect global temperatures to approximately follow the trajectory previously associated with RCP 6.0.

So why do models with the most skill at simulating the recent past tend to project more future warming? It has long been known that the large spread in model-simulated global warming results mostly from uncertainty in the behavior of feedbacks in the climate system like the cloud feedback.

So clouds, for example, reflect the Sun’s energy back to space and this has a large cooling effect on the planet. As the Earth warms due to increases in greenhouse gases, some climate models simulate that this cooling effect from clouds will become stronger, canceling out some of the warming from increases in greenhouse gases. Other models, however, simulate that this cooling effect from clouds will become weaker, which would enhance the initial warming due to increases in greenhouse gases.

Our work is consistent with many previous studies showing that models that warm the most do so mostly because they simulate a reduction in the cooling effect from clouds. Thus, our study indicates that models that simulate the Earth’s recent energy budget with the most fidelity also simulate a larger reduction in the cooling effect from clouds in the future, and thus more future warming.

One point worth bringing up is that it is sometimes argued that climate model-projected global warming should be taken less seriously on the grounds that climate models are imperfect in their simulation of the current climate. Our study confirms that there are important model-observation discrepancies and thus ample room for climate model improvement. However, we show that the models that simulate the current climate with the most skill tend to be the models that project above-average global warming over the remainder of the 21st century. Thus, it makes little sense to dismiss the most severe global warming projections because of model deficiencies. On the contrary, our results suggest that model shortcomings can likely be used to dismiss the least severe projections.

Questions regarding specifics of the study

Below are answers to some specific questions that we anticipate interested readers might have. This discussion is more technical than the text above.

1) How exactly are the constrained projection ranges derived and how do you guard against over-fitting?

In order to assess the skill by which the statistical relationships identified in the study help inform future warming, we employ a technique called cross-validation. In the main text, we show results for ‘hold-one-out’ cross-validation and in the Extended Data, we show results for ‘4-fold’ cross-validation.

Under ‘hold-one-out’ cross-validation, each climate model takes a turn acting as a test model with the remaining models designated as training models. The test model is held out of the procedure and the training models are used to define the statistical relationship between the energy budget features (the predictor variables) and future warming (the predictand). Then, the test model is treated as if it were the observations, in the sense that we use the statistical relationship from the training models, as well as the test model’s simulated energy budget features, to “predict” the amount of future warming that we would expect for the test model. Unlike the true observations, the amount of future warming for the test model is actually known. This means that we can quantify how well the statistical procedure did at predicting the precise amount of future warming for the test model.

We allow every model to act as the test model once so that we can obtain a distribution of errors between the magnitude of statistically-predicted and ‘actual’ future warming. This distribution is used to quantify the constrained projection spread. A visualization of this procedure is shown in the video below:

[Video: animation of the hold-one-out cross-validation procedure]
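For those who prefer code to video, a minimal sketch of the hold-one-out procedure is below; the random arrays simply stand in for the real predictor fields and warming values, and scikit-learn’s PLSRegression stands in for the PLSR implementation used in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_models, n_grid = 36, 37 * 72
X = rng.standard_normal((n_models, n_grid))   # stand-in flattened predictor fields
y = 4.0 + rng.standard_normal(n_models)       # stand-in future-warming predictand

errors = np.empty(n_models)
for train, test in LeaveOneOut().split(X):
    # Train on all models except one ...
    pls = PLSRegression(n_components=7).fit(X[train], y[train])
    # ... then treat the held-out model as if it were the observations
    errors[test] = pls.predict(X[test]).item() - y[test]

# The spread of these cross-validated errors defines the constrained
# projection spread, which is compared against the raw across-model spread.
print(f"cross-validated error spread: {np.std(errors):.2f} C; "
      f"raw model spread: {np.std(y):.2f} C")
```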
2) Does your constrained spread represent the full range of uncertainty for future warming?

No.

First, it is important to note that most of the uncertainty associated with the magnitude of future warming is attributable to uncertainty in the amount of greenhouse gases humans will actually emit in the future. Our study does not address this uncertainty and instead focuses on the range of warming that we should expect for a given change in radiative forcing.

Secondly, the range of modeled global warming projections for a given change in radiative forcing does not represent the true full uncertainty. This is because there are a finite number of models, they are not comprehensive, and they do not sample the full uncertainty space of various physical processes. For example, a rapid nonlinear melting of the Greenland and Antarctic ice sheets has some plausibility (e.g., Hansen et al. 2016) but this is not represented in any of the models studied here and thus it has an effective probability of zero in both the raw unconstrained and observationally-informed projections. Because of considerations like this, the raw model spread is best thought of as a lower bound on total uncertainty (Caldwell et al., 2016) and thus our observationally-informed spread represents a reduction in this lower bound rather than a reduction in the upper bound.

3) Why did you use these particular energy budget features as your predictor variables?

Overall, we chose predictor variables that were as fundamental and comprehensive as possible while still offering the potential for a straightforward physical connection to the magnitude of future warming. In particular, we did not want to ‘data mine’ in an effort to find any variable with a high across-model correlation between its contemporary value and the magnitude of future warming. Doing so would have resulted in relationships that would be very likely to be spurious (e.g., Caldwell et al., 2014).

Additionally, we chose to emphasize broad and fundamental predictor variables in order to avoid the ceteris paribus (all else being equal) assumptions that are often invoked when more specific predictor variables are used. For example, it might be the case that models with larger mean surface ice albedo in a given location have larger positive surface ice albedo feedbacks in that location. This would indicate that, ceteris paribus, these models should produce more warming. However, it might be the case that there is some across-model compensation from another climate process such that the ceteris paribus assumption is not satisfied and these models do not actually produce more warming than average. For example, maybe models with more mean surface ice albedo in the given location tend to have less mean cloud albedo and less-positive cloud albedo feedbacks with warming. Practically speaking, it is the net energy flux that matters for the change in temperature, not the precise partitioning of the net flux into its individual subcomponents. Thus, in an attempt to account for potential compensation across space and across processes, we used globally complete, aggregate measures of the Earth’s energy budget as our predictor variables. In the context of the above example, this would mean using total reflected shortwave radiation as the predictor variable rather than the reflected shortwave radiation from only one subcomponent of the energy budget like surface ice albedo.

To be more specific, we had five primary objectives in mind when we chose the features that serve as predictor variables to inform future warming projections.

Objective 1: The features should have a relatively straightforward connection to physical processes that will influence the magnitude of projected global warming.

The central premise that underlies our study is that climate models that are going to be the most skillful in their projections of future warming should also be the most skillful in other contexts, like simulating the recent past. However, for a given feature to be useful as a predictor, it should be relatively apparent why there would be a relationship between how well a model simulates that feature over the recent past and how much warming the model simulates in the future.

Uncertainty in model-projected global warming originates primarily from differences in how models simulate the Earth’s top-of-atmosphere radiative energy budget and its adjustment to warming. Thus, we specifically chose to use the Earth’s net top-of-atmosphere energy budget and its two most fundamental components (its reflected shortwave radiation and outgoing longwave radiation) as predictor variables. This made it much easier to assess why relationships might emerge between the predictor variables and future warming than it would have been if we were open to using any variable in which some positive across-model correlation could be found with future warming.

Objective 2: The features should represent processes as fundamental to the climate system as possible.

We used three attributes of the energy budget variables: the mean climatology, the magnitude of the seasonal cycle, and the magnitude of monthly variability.

We chose these attributes in order to keep the predictors as simple and fundamental as possible. We did not want to ‘data mine’ for more specific features that might have more apparent predictive power because it would be likely that such predictive power would be illusory (e.g., Caldwell et al., 2014). Furthermore, these choices were informed by previous studies which have indicated that seasonal and monthly variability in properties of Earth’s climate system can be useful as predictors of future warming because behavior on these timescales is relatable to the behavior of long-term radiative feedbacks. Average (or climatological) predictors were used because the mean state of the climate system can affect the strength of radiative feedbacks, primarily because the mean state influences how much potential there is for change.

Objective 3: The features should have global spatial coverage.

The climate system is dynamically linked through horizontal energy transport, so modeled processes at a given location inevitably influence modeled processes elsewhere. This means that there may be compensation in a given energy budget field across space. For example, suppose that models with greater mean (climatological) albedo have larger albedo feedbacks. Further, suppose that models with greater mean albedo over location X tend to have less mean albedo over location Y. If we were to restrict our attention to location X, we would be tempted to say that models with more mean albedo at location X should warm more in response to forcing. However, this would only be the case if the ceteris paribus assumption holds.

Since the magnitude of global warming will depend on the global change in albedo, it is important to account for any potential compensation across space. Thus, we required that the features that we used as predictor variables be globally complete fields so that any spatial compensation between processes could be accounted for.

Objective 4: The features should represent the net influence of many processes simultaneously.

In addition to considering compensation within a given process in space, it was also a goal of ours to consider possible compensation amongst processes at a given location. For example, suppose again that models with greater mean (climatological) albedo, have larger albedo feedbacks. Further, suppose that models with greater mean cloud albedo, tend to have less mean surface snow/ice albedo. If we were to restrict our attention to cloud albedo, we would be tempted to say that models with more mean cloud albedo should warm more in response to forcing. Again, this would only be the case if the ceteris paribus assumption holds.

Since the magnitude of global warming will depend on the net influence of many processes on the energy budget, it is important to account for any potential compensation across processes. Thus, rather than looking at specific subcomponents of the energy budget (e.g., cloud albedo), we use the net energy imbalance and only its two most fundamental components (its shortwave and longwave components) as predictor variables.

Objective 5: The features should be measurable with sufficiently small observational uncertainty.

Our procedure required that the observational uncertainty in the predictor variables be smaller than the across-model spreads. This was essential so that it would be possible to use observations to discriminate between well-performing and poor-performing models. This objective was met for the top-of-atmosphere energy flux variables that we used from the CERES EBAF satellite product, but it would not have been met for, e.g., surface heat fluxes over a large portion of the planet.

4) Why did you choose global temperature response and not a more specific physical metric as your predictand?

Our ultimate goal was to constrain the magnitude of future warming. Others have argued that it is easier to draw physical connections if the predictand is something more specific than global temperature, like an aspect of the magnitude of the cloud feedback (e.g., Klein and Hall, 2015). For example, it is more straightforward to relate the magnitude of the seasonal cycle in cloud albedo at some location to the magnitude of the long-term cloud albedo feedback in that location than it is to relate the magnitude of the seasonal cycle in cloud albedo to the magnitude of global warming. We agree with this. However, models with more-positive cloud albedo feedbacks in a given location will be the models that warm more only if the ceteris paribus assumption holds. It could be the case that models with more-positive cloud albedo feedbacks in a given location tend to have less-positive cloud albedo feedbacks elsewhere or tend to have more-negative feedbacks in other processes.

Thus, it should be recognized that using a specific predictand like the magnitude of the local cloud albedo feedback can make it easier to draw a physical connection between predictor and predictand but this can come at the cost of actually being able to constrain the ultimate variable of interest. Since our goal was to constrain global temperature change, we felt that it was most practical to use global temperature change as our predictand even if this made drawing a physical connection less straightforward.

5) Are your results sensitive to the use of alternative predictors or predictands?

One of the more striking aspects of our study is the qualitative insensitivity of the results to the use of differing predictors and predictands. Our findings of generally reduced spreads and increased mean warming are robust to which of the nine predictor fields is used (or whether they are used simultaneously) and robust to which of the ten predictands is targeted (mean warming over the years 2046-2065 and 2081-2100 for RCP 2.6, RCP 4.5, RCP 6.0 and RCP 8.5, as well as equilibrium climate sensitivity and net feedback strength).

6) Why did you use the statistical technique that you used?

We used Partial Least Squares (PLS) regression to relate simulated features of the Earth’s energy budget over the recent past to the magnitude of model-simulated future warming. PLS regression is applicable to the same type of problem as the more widely used Multiple Linear Regression (MLR). As discussed above, we wanted to relate globally complete energy budget fields (our predictor matrices) to the magnitude of future warming (our predictand vector). Because of the high degree of spatial autocorrelation in the energy budget fields, the columns in the predictor matrix end up being highly collinear, which makes MLR inappropriate for the problem. PLS, however, offers a solution to this issue by creating linear combinations of the columns in the predictor matrix (PLS components) that represent a large portion of the predictor matrix’s variability. The procedure is similar to Principal Component Analysis (PCA), common in climate science, but instead of seeking components that explain the maximum variability in some matrix itself, PLS seeks components in the predictor matrix that explain the covariability between the predictor matrix and the predictand vector.
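A minimal sketch of what this looks like in practice is below, using scikit-learn’s PLSRegression on random stand-in data purely to illustrate the dimensions involved: thousands of collinear grid-point columns are compressed into a handful of PLS components.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_models, n_grid = 36, 37 * 72   # 36 models, flattened 37 x 72 global grid

# Random stand-ins for the real fields: far more grid-point columns than
# models, so ordinary multiple linear regression would be hopelessly
# ill-posed for this problem.
X = rng.standard_normal((n_models, n_grid))   # predictor matrix (one row per model)
y = rng.standard_normal(n_models)             # predictand vector (future warming)

pls = PLSRegression(n_components=7).fit(X, y)

# Each PLS component is a linear combination of the grid-point columns, chosen
# to explain covariability between the predictor matrix and the predictand.
print(pls.x_scores_.shape)    # (36, 7): component values for each model
print(pls.x_weights_.shape)   # (2664, 7): spatial pattern defining each component
```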

7) How do you know that the statistical procedure itself isn’t producing the results that you are seeing?

In conjunction with cross-validation, we perform three additional experiments designed to expose any systematic biases in our methodology. These three experiments involve supplying the statistical procedure with data that should not produce any constraint on the magnitude of future global warming. In one experiment, we substitute the described energy budget features with global surface air temperature (SAT) annual anomalies for each model. Since annual SAT anomaly fields are dominated by chaotic unforced variability, the across-model relationship of these patterns for any given year is unlikely to be related to the magnitude of future warming.

In a second experiment, we substitute the original global warming predictand vector with versions of the vector that have had its values randomly reordered or scrambled. Thus, these scrambled predictand vectors have the same statistical moments as the original vector but any real across-model relationship between predictors and predictands should be eliminated on average.

Finally, in a third experiment, we use both the SAT anomaly fields and the scrambled predictand vectors as the predictors and predictands respectively.

In contrast to the main results between the energy budget predictor fields and the magnitude of future global warming, the three experiments described above all demonstrate no skill in constraining future warming. This indicates that the findings reported in our study are a result of real underlying relationships between the predictors and predictands and are not an artifact of the statistical procedure itself.

8) Why are the values in Table 1 of the paper slightly different from those implied in Figure 1? 

Our results generally show warmer central estimates and smaller ranges in the projections of future warming. However, there are multiple ways to compare our results with raw/previous model results. One way would be to compare our results with what is reported in the last IPCC report (Chapter 12 in Assessment Report 5). This is probably the most useful from the perspective of a casual observer and it is the comparison shown in our Table 1. One issue with this comparison is that it is not a perfect apples-to-apples comparison because we used a slightly different suite of climate models than those used in the IPCC report (see our Supplementary Table 1 and IPCC AR5 Chapter 12). Since many casual observers will read the abstract and look at Table 1, we wanted the numerical values in these two places to match. So, the numerical values in the abstract (the ones reported in degrees Celsius) correspond to our results compared to what was reported previously in IPCC AR5.

It is also useful to make the apples-to-apples comparison where our observationally-informed results are compared to raw model results using the exact same models. This is what is done using the “spread ratio” and “prediction ratio” discussed in the paper’s text and shown in Figure 1. These dimensionless values (the ones reported in percent changes) also appear in our abstract. This was done so that the spread ratio and prediction ratio numbers in the abstract would be consistent with those seen in Fig. 1.

So to expand/clarify the numbers reported in the abstract, there are two sets of comparisons that are relevant:

Under the comparison where our results were compared directly with that from IPCC AR5, the observationally informed warming projection for the end of the twenty-first century for the RCP8.5 scenario is about 12 percent warmer (+~0.5 degrees Celsius) with a reduction of about 43 percent in the two standard deviation spread (-~1.2 degrees Celsius) relative to the raw model projections.

Under the comparison where the exact same models are used, the observationally informed warming projection for the end of the twenty-first century for the RCP8.5 scenario is about 15 percent warmer (+~0.6 degrees Celsius) with a reduction of about a third in the two standard deviation spread (-~0.8 degrees Celsius) relative to the raw model projections.


Reducing greenhouse gas emissions helps the economy

Potential solutions to climate change are often framed as a tradeoff between reducing our impact on the environment and harming the economy. More specifically, it is thought that we can reduce our climate-change-related impact by reducing emissions of greenhouse gases but that this will inevitably harm the economy by making energy more expensive. Under this framing, it is natural for people to strongly disagree about climate-policy prescriptions since individuals will inevitably diverge in the relative value they place on environmental vs. economic concerns.

However, the issue of whether or not it is in society’s collective best interest to reduce greenhouse gas emissions is not as complicated and subjective as the above framing makes it seem. In fact, as long as climate change costs the economy anything (and the cost increases steadily with emissions), it is in our collective economic best interest to reduce emissions. In other words, you don’t need to care about any non-monetizable environmental impact in order to be in favor of reducing greenhouse gas emissions. All you have to be in favor of is maximizing global economic production.

Richard Tol has a nice description of this in Chapter 8 of his Climate Economics textbook. The figure below is adapted from his book. It shows the relationship between global economic gain and the level of emissions of greenhouse gases.

The blue line indicates that the higher the level of greenhouse gas emissions, the more total economic benefit is accrued to individuals. However, there are diminishing returns to this economic gain, since at some point it becomes unnecessary and thus economically inefficient to emit additional greenhouse gases. In other words, emitting for the sake of emitting will not help the economy. This means that there is a level of emissions that is economically optimal from the standpoint of individuals (represented by the blue dot).

However, there are economic costs (social losses) to greenhouse gas emissions since climate change imposes negative impacts via changes in things like crop yields, mortality, worker productivity, electricity demand and damage from coastal flooding (e.g., Hsiang et al., 2017). On net, the climate change impact on the economy is very likely to be negative, especially beyond 1°C of global warming. These climate-impact-related costs on the economy are illustrated in the figure with the orange line.

[Figure: total private gains (blue), social losses from climate impacts (orange), and collective net economic gains (grey) as a function of greenhouse gas emissions]

Thus, the true relationship between global collective economic gains and global emissions of greenhouse gases is represented by the grey line which is the sum of the total private gains and the total social losses. The grey line indicates that if we emit at a level that maximizes private economic gains (where the blue dashed line intersects the grey line) this will not maximize collective net economic gains. Because there are climate-change-related costs, it is economically optimal to reduce emissions from the point that maximizes only private gains.

When thinking about the cost-benefit analysis of climate change policy, it is tempting to think that it would only be wise to reduce greenhouse gas emissions if the climate-related economic costs of the emissions are greater than the economic benefits. But this is not the case. In the above situation, the benefits of emissions are always greater than the costs, no matter the emissions level (this is presently the case in the real world as well; Tol, 2017). All that is required for emissions reductions to be wise is that there comes a point on the emissions curve where additional emissions increase the costs more than they increase the benefit. This point is represented by the grey dot in the figure.
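The sketch below makes this point with toy benefit and cost curves, chosen only for illustration and not taken from Tol's book: the benefits exceed the costs at every emissions level, yet the emissions level that maximizes collective net gains sits below the level that maximizes private gains alone.

```python
import numpy as np

emissions = np.linspace(0.01, 10.0, 1000)               # arbitrary units

private_gain = 8.0 * emissions - 0.5 * emissions**2     # blue line: concave private benefit
social_loss  = 0.15 * emissions**2                      # orange line: convex climate-related costs
net_gain     = private_gain - social_loss               # grey line

e_private = emissions[np.argmax(private_gain)]          # blue dot: ignores climate costs
e_optimal = emissions[np.argmax(net_gain)]              # grey dot: collective optimum

print(f"privately optimal emissions:    {e_private:.1f}")
print(f"collectively optimal emissions: {e_optimal:.1f}")
print("benefits exceed costs at every level:", bool(np.all(private_gain > social_loss)))
```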

This type of calculation is made more sophisticated in so-called Integrated Assessment Models that bring in the elements of time and explicitly consider the cost of reducing emissions through, say, a carbon tax. These additional elements, however, do not change the main story illustrated above: It is optimal from a collective economics standpoint to reduce greenhouse gas emissions from the point that maximizes only private gain.

It is relevant to point out that Integrated Assessment Models do not tend to show that it is best for the economy to reduce emissions as quickly and stringently as is proposed by the Paris Climate Accord (e.g., Fig. 6 in Nordhaus and Sztorc, 2013). So, our best estimates at this point indicate that being in favor of the Paris Accord implies that one is implicitly in favor of trading off economic gain for reduced environmental impact (or that one is very risk-averse to low-probability, high-impact outcomes; e.g., Cai et al., 2015).

This highlights that the recommended magnitude of emissions reductions will depend on peoples’ subjective value judgments and will thus be controversial. However, the choice of whether or not we should reduce emissions should be much less controversial. Reducing emissions from the level that maximizes private economic gains will result in higher net economic gains for society overall.


Math Models and Climate Change: S.T.E.M. Day talk at Quarry Lane School


The effect of population growth on climate change impacts

Global human population is currently at approximately 7.5 billion people and increasing by about 1.1% per year. However, the population growth rate itself is decreasing, so world population is expected to level off somewhere near 10-11 billion people around the year 2100. This leveling-off is encouraging from a sustainability perspective, but an extra 3 billion people is still quite a bit for the Earth system to support.

Since everything humans do requires energy and since our energy system still largely relies on fossil-fuels, the number of people on the planet has a large effect on projections of the impact of global warming. This is why having ‘one fewer child’ has been noted as the single most significant lifestyle decision people can make in terms of reducing their carbon footprint.

This raises a couple questions:

1) How much worse do we expect climate change impacts to be due to this 3 billion-person increase in population?

2) How much faster would we have to ‘decarbonize’ society in order to offset the impact of adding these 3 billion people?

We can get first-order answers to questions like these using the type of simplified equations that go into ‘Integrated Assessment Models’ like the Dynamic Integrated model of Climate and the Economy. In these simplified models of the environment and society, the impact of climate change is quantified by the ‘damage function’ which expresses the damage caused by global warming in terms of a fraction of global gross domestic product (GDP). (The damage function is extremely uncertain but its estimates are becoming more empirical).
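As a concrete illustration, a DICE-style damage function is just a smooth function of global temperature change; the quadratic form and coefficient below are indicative only and are not the exact values used in any particular model version.

```python
def climate_damages(warming_c, coeff=0.0024):
    """Fraction of global GDP lost as a function of warming (deg C above
    pre-industrial), using a DICE-style quadratic form. The coefficient is
    illustrative; published damage-function estimates vary widely."""
    return coeff * warming_c ** 2

for t in (1.0, 2.0, 3.0, 4.0):
    print(f"{t:.0f} C of warming -> {100 * climate_damages(t):.1f}% of global GDP")
```

With this illustrative coefficient, roughly 2.5°C of warming maps to about 1.5% of global GDP, which is the order of magnitude of the impacts discussed below.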

Using this framework, we can compare the climate change impacts in a world where population grows from 7.5 billion to 10.5 billion by 2100 (scenario 1, red line below) to a scenario where global population stays steady (scenario 2, blue dashed line below). In both scenarios, I am considering a middle-of-the-road projection where CO2 emissions continuously drop (panel e) but not fast enough to keep up with the ambitious goals of the Paris Accord.

[Figure: panels a-h showing the Kaya identity terms (a-d), CO2 emissions (e), and the climate change impact as a percent of GDP (h) for the scenarios discussed below]

Comparing the population (panel a) and the climate change impact (panel h), we can see that the additional 3 billion people increases the negative impact of climate change from approximately 1.3% of GDP (blue dashed in panel h) to 1.5% of GDP (red line in panel h) in 2100. This doesn’t seem like a huge difference, meaning that offsetting this difference might not be an insurmountable challenge. So what else could we change in order to offset this additional 0.2% of GDP damages?

We hear about several possible ways that society could go about decarbonizing; from consuming less, to increasing energy efficiency, to transitioning from fossil fuel to renewable energy sources. In fact, along with changes in population, these things represent all the possible avenues by which humanity can reduce carbon dioxide emissions. These features are described quantitatively with the Kaya Identity which breaks down carbon dioxide emissions as:

CO2 emissions = Population × (GDP / Population) × (Energy / GDP) × (CO2 emissions / Energy)

The four terms of the Kaya Identity are plotted in the first four panels above (panels a-d).

This tells us that in order to offset the climate change impact of the 3 billion-person increase in population (a) we need to have either a smaller increase in consumption (b), a larger decrease in energy intensity (c), or a larger decrease in the amount of CO2 we emit per unit of energy (d).

Globally, material consumption – and its flip side, production (b) – has been increasing by about 1.4%/year and is expected to continue to grow at a growing rate over the remainder of the century. Energy efficiency (c) has been increasing (or energy intensity has been decreasing) by about 1-2%/year and, apparently, this is expected to continue until at least 2040. The carbon intensity of energy (d) – transitioning from fossil fuel to renewable energy sources – is where most policy discussions center and thus I will single out that term here.

As the baseline, I have assumed that carbon intensity of energy will decrease by 1.5%/year, which is a middle-of-the-road estimate. So the question becomes, how much more of a decrease in carbon intensity do we need in order to offset the increase in population?

The black dashed line (scenario 3) illustrates how this can be achieved. It shows that, in order to have the same impact (panel h), carbon emissions intensity would need to shift from a growth rate of -1.5%/year to -2.1%/year. So, from a climate perspective, the impact of the additional 3 billion people can be offset by what seems like a relatively modest change in the rate of decrease in carbon emissions intensity. Hopefully, this is not an illusion and this level of decarbonization of the energy system proves to be both technologically and politically feasible.
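The flavor of this comparison can be reproduced directly from the Kaya identity. The sketch below uses illustrative growth rates and a simple linear population path (not the actual inputs behind the figure above) to compare cumulative CO2 emissions, a crude proxy for eventual warming and damages, across the three scenarios; with these particular numbers, the faster decline in carbon intensity more than offsets the emissions added by population growth.

```python
import numpy as np

def cumulative_emissions(population_billions,
                         gdp_per_capita_growth=0.014,     # illustrative, per year
                         energy_intensity_growth=-0.015,  # illustrative, per year
                         carbon_intensity_growth=-0.015,  # illustrative, per year
                         initial_emissions=36.0):         # roughly present-day GtCO2/yr
    """Cumulative CO2 emissions (GtCO2) from a Kaya-identity decomposition:
    annual emissions scale with population and with exponential trends in
    GDP per capita, energy intensity, and carbon intensity of energy."""
    t = np.arange(len(population_billions))
    scale = (population_billions / population_billions[0]) * np.exp(
        (gdp_per_capita_growth + energy_intensity_growth + carbon_intensity_growth) * t)
    return float(np.sum(initial_emissions * scale))

years = np.arange(2017, 2101)
pop_growing = np.linspace(7.5, 10.5, len(years))   # simple linear path to 10.5 billion
pop_fixed = np.full(len(years), 7.5)

s1 = cumulative_emissions(pop_growing)                                  # scenario 1
s2 = cumulative_emissions(pop_fixed)                                    # scenario 2
s3 = cumulative_emissions(pop_growing, carbon_intensity_growth=-0.021)  # scenario 3

print(f"cumulative 2017-2100 CO2 emissions: {s1:.0f}, {s2:.0f}, {s3:.0f} GtCO2")
```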


Can humanity maintain perpetual economic growth?

…to understand modern economic history, you really need to understand just a single word. The word is growth. –from Sapiens (2014)

Humanity is far more materially wealthy today than any time in the past and our collective wealth continues to explode:

[Figure: growth of the world economy over time. From Wikipedia: World Economy.]

This material wealth has made it possible to achieve great increases in human well-being measured, for example, by the increase in lifespan, increase in food availability, reduction in poverty, reduction in infant mortality, reduction in a myriad of diseases, etc.

One question that has occupied the minds of environmentalists for decades, however, is: “Can this economic growth be maintained indefinitely, and what happens when we run up against physical limits to growth?”

Yuval Noah Harari’s book Sapiens has what I consider to be a profound section (Chapter 16: The Capitalist Creed) in which the fundamental aspects of the modern economy are described from first principles. Harari uses a simple example that I find particularly illuminating in that it shows how our current system not only produces growth but depends on constant growth to survive.

[Figure: panels a-d illustrating the contractor, banker, and baker example described below]

Harari describes a situation involving a general contractor, a banker, and an entrepreneurial baker who is dreaming of starting a bakery.

The contractor has $1,000,000 and decides to deposit it in the bank (‘panel a’ above). The banker thus has $1,000,000 that he would like to make more money with by investing it.

The baker thinks she has a great business plan for a bakery but she cannot make her dream a reality without a loan. She goes to the bank and pitches her plan, requesting a loan of $1,000,000 to construct her bakery. The banker is convinced that the bakery will be a good investment and loans the $1,000,000 to the baker (‘panel b’ above).

The baker then hires the contractor to build the bakery for a price of $1,000,000. The contractor follows through, making the bakery a reality (‘panel c’ above).

Here’s where it gets interesting.

The contractor now has an additional $1,000,000 which he decides to deposit in the bank, giving him a total of $2,000,000 in the bank (‘panel d’ above).

Look what just happened: the $1,000,000 simply exchanged hands from person to person, but seemingly out of nowhere, the contractor ends up with $2,000,000. An extra $1,000,000 was just created out of thin air!

Where does the extra $1,000,000 come from? It comes from the future. Specifically, the difference between the total amount of real money ($1,000,000) and the amount of money that the contractor has in his account ($2,000,000) is attributable to the future income of the baker. In other words, the banker believes that as time goes on, the bakery will be successful and the baker will repay the loan with interest. So eventually, if the contractor asks to withdraw his $2,000,000, the money will be available. The whole system is based on borrowing from the future in order to finance the present.

Harari notes that under current US banking law, banks are allowed to loan ten times more money than they actually have on their books. That means that at any given time, over 90% of banks’ assets haven’t actually been created yet – we are all just collectively assuming that they will be created in the future. This system only works if the future is always able to produce more wealth than the present, allowing the cycle to be maintained. In other words, the system requires perpetual growth or it will collapse.
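As a sanity check on that 90% figure, here is a minimal sketch of the textbook fractional-reserve money multiplier, assuming a 10% reserve requirement to match the “ten times” ratio described above; the numbers are purely illustrative, and this is not a model of actual banking regulation.

```python
# Textbook money-multiplier sketch with a 10% reserve requirement
# (illustrative only; not a model of actual banking regulation).

initial_deposit = 1_000_000  # base money deposited at the bank
reserve_ratio = 0.10         # fraction of each deposit the bank must hold back

total_deposits = 0.0
deposit = float(initial_deposit)
for _ in range(200):                    # iterate the lend-and-redeposit cycle
    total_deposits += deposit
    deposit *= (1 - reserve_ratio)      # the lent fraction returns as a new deposit

# Geometric series limit: total_deposits -> initial_deposit / reserve_ratio
print(f"total money in accounts: {total_deposits:,.0f}")      # ~10,000,000
print(f"backed by base money:    {initial_deposit:,.0f} "
      f"({initial_deposit / total_deposits:.0%})")             # ~10%
```

In this toy example, roughly 90% of the account balances exist only as claims on loans that are expected to be repaid in the future, which is the sense in which the system depends on future growth.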

The economy can grow due to the discovery of new resources, the growth of the labor force, improvements in technology or increased productivity due to increased specialization.

Over the past couple hundred years, these things have outpaced their limiting factors, allowing wealth to explode. But it is difficult to see how this can be sustained forever. We can’t grow the labor force indefinitely, material resources are not infinite (at least not on Earth), and there are limits to what specialization can achieve.

Technology is perhaps the most difficult thing to put a limit on and there are many futurists who expect technology’s exponential growth to continue to the point of a technological singularity.

Despite this, it is difficult to see how we will not eventually reach fundamental limits on growth that will undermine the very basis of the system we have created. Geoffrey West has pointed this issue out in his book Scale (touched on in his TED talk and also described in Bettencourt et al., 2007). He notes that so far, society has continuously been bailed out by innovations that have allowed us to continue to grow exponentially (actually super exponentially):

[Figure: successive innovations sustaining super-exponential growth]

So, basically, if the wildest predictions regarding the ‘technological singularity’ do not come to pass, we will eventually run up against limits where growth can no longer be maintained, and the basis of the global economic system we have built will be fundamentally undermined. (This is hardly a new realization, and it has spawned entire fields of thought such as post-growth and steady-state economics.)

It should be noted that environmentalists have been warning about unsustainable growth of human population and resource consumption for a long time. High-profile predictions of collapse due to unchecked growth have been made since at least Malthus’s An Essay on the Principle of Population in 1798. There were also high-profile predictions of imminent disaster in the 1960s and 1970s epitomized by Paul R. Ehrlich’s The Population Bomb in 1968 and the Club of Rome’s The Limits to Growth in 1972.

These writings warned of imminent catastrophes that have not been realized. Technological innovations and the continuous discovery of new resources have allowed the system to maintain itself, resulting in the amazing growth in wealth I highlighted above.

But does the fact that growth has been maintained historically imply that growth can be maintained indefinitely into the future? I see no reason why the latter should follow from the former. The physical limitations of the environment and the fact that material resources are finite would seem to suggest that growth cannot be maintained indefinitely. If that is the case, society should seriously consider how to gradually transition from our current system to a more sustainable one. This certainly seems preferable to having to build a new system from scratch after the old one has collapsed.

Posted in Climate Change | Leave a comment

Change in temperature variability with warming

We have a new paper out in Nature Climate Change on potential changes in natural unforced variability of global mean surface air temperature (GMST) under global warming.

Paper
News and Views piece
Duke press release

Unforced GMST variability is of the same order of magnitude as current externally forced changes in GMST on decadal timescales. Thus, understanding the precise magnitude of, and physical mechanisms responsible for, unforced GMST variability is relevant both for the attribution of past climate changes to human causes and for the prediction of climate change on policy-relevant timescales.

Much research on unforced GMST variability has used modeling experiments run under “preindustrial control” conditions or has used observed/reconstructed GMST variability associated with cooler past climates to draw conclusions for contemporary or future GMST variability. These studies can implicitly assume that the characteristics of GMST variability will remain the same as the climate warms. In our research, we demonstrate in a climate model that this assumption is likely to be flawed. Not only do we show that the magnitude of GMST variability dramatically declines with warming in our experiment, we also show that the physical mechanisms responsible for such variability become fundamentally altered. These results indicate that the ubiquitous “preindustrial control” climate modeling studies may be limited in their relevance for the study of current or future climate variability.

Another principal finding of our study is that global warming may cause local temperature variability to increase over low-to-mid latitude land regions at the same time that global temperature variability dramatically decreases. This represents a cause for concern, as it is precisely these low-to-mid latitude land regions that are characterized by the highest human population density and biodiversity.

Posted in Uncategorized | 5 Comments