Greater future global warming inferred from Earth’s recent energy budget

We have a paper out in Nature titled “Greater future global warming inferred from Earth’s recent energy budget”.

The Carnegie press release can be found here and coverage from the Washington Post can be found here.

A video abstract summarizing the study is below.

The study addresses one of the key questions in climate science: How much global warming should we expect for a given increase in the atmospheric concentration of greenhouse gases?

One strategy for attempting to answer this question is to use mathematical models of the global climate system called global climate models. Basically, you can simulate an increase in greenhouse gas concentrations in a climate model and have it calculate, based on our best physical understanding of the climate system, how much the planet should warm. There are somewhere between 30 and 40 prominent global climate models and they all project different amounts of global warming for a given change in greenhouse gas concentrations. Different models project different amounts of warming primarily because there is not a consensus on how to best model many key aspects of the climate system.

To be more specific, if we were to assume that humans will continue to increase greenhouse gas emissions substantially throughout the 21st century (the RCP8.5 future emissions scenario), climate models tell us that we can expect anywhere from about 3.2°C to 5.9°C (5.8°F to 10.6°F) of global warming above pre-industrial levels by 2100. This means that for identical changes in greenhouse gas concentrations (more technically, identical changes in radiative forcing), climate models simulate a range of global warming that differs by almost a factor of 2.

The primary goal of our study was to narrow this range of model uncertainty and to assess whether the upper or lower end of the range is more likely. We utilize the idea that the models that are going to be the most skillful in their projections of future warming should also be the most skillful in other contexts like simulating the recent past. Thus, if there is a relationship between how well models simulate the recent past and how much warming models simulate in the future, then we should be able to use this relationship, along with observations of the recent past, to narrow the range of future warming projections (this general technique falls under the “emergent constraint” paradigm, see e.g., Hall and Qu [2006] or Klein and Hall [2015]). The principal theme here is that models and observations together give us a more complete picture of reality than models can give us alone.
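To illustrate the emergent-constraint idea with a deliberately oversimplified sketch (synthetic numbers, a single scalar predictor, and an ordinary linear fit, none of which reflect the actual data or statistical model used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensemble of 36 hypothetical climate models.
x_models = rng.normal(100.0, 5.0, size=36)                        # simulated recent-past observable
y_models = 0.03 * x_models + 1.0 + rng.normal(0.0, 0.2, size=36)  # simulated 21st-century warming (K)

# The "emergent relationship": an across-model fit between predictor and predictand.
slope, intercept = np.polyfit(x_models, y_models, 1)

# Combining the relationship with an observation of the predictor constrains the projection.
x_observed = 103.0                                                # hypothetical observed value
y_constrained = slope * x_observed + intercept

print(f"raw multi-model mean warming:      {y_models.mean():.2f} K")
print(f"observationally informed estimate: {y_constrained:.2f} K")
```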

So, what variables are most appropriate to use to evaluate climate models in this context? Global warming is fundamentally a result of a global energy imbalance at the top of the atmosphere so we chose to assess models in their ability to simulate various aspects of the Earth’s top-of-atmosphere energy budget. We used three variables in particular: reflected solar radiation, outgoing infrared radiation, and the net energy balance. Also, we used three attributes of these variables: their average (AKA climatological) values, the average magnitude of their seasonal variability and the average magnitude of their month-to-month variability. These three variables and three attributes combine to make nine features of the climate system that we used to evaluate the climate models (see below for more information on our decision to use these nine features).

We found that there is indeed a relationship between the way that climate models simulate these nine features over the recent past, and how much warming they simulate in the future. Importantly, models that match observations the best over the recent past tend to simulate more 21st-century warming than the average model. This indicates that we should expect greater warming than previously calculated for any given emissions scenario, or it means that we need to reduce greenhouse gas emissions more than previously thought to achieve any given temperature stabilization target.

Using the steepest future emissions scenario as an example (the RCP8.5 emissions scenario), the figure below shows the comparison of the raw-model projections used by the Intergovernmental Panel on Climate Change, to our projections that incorporate information from observations.

 

Figure: Brown and Caldeira (2017), Nature, Figure 2d.

It is also noteworthy that the observationally-informed best-estimate for end-of-21st-century warming under the RCP 4.5 scenario is approximately the same as the raw best estimate for the RCP 6.0 scenario. This indicates that even if society were to decarbonize at a rate consistent with the RCP 4.5 pathway (which equates to ~800 gigatonnes less cumulative CO2 emissions than the RCP 6.0 pathway), we should still expect global temperatures to approximately follow the trajectory previously associated with RCP 6.0.

So why do models with the most skill at simulating the recent past tend to project more future warming? It has long been known that the large spread in model-simulated global warming results mostly from uncertainty in the behavior of feedbacks in the climate system like the cloud feedback.

Clouds, for example, reflect the Sun’s energy back to space, and this has a large cooling effect on the planet. As the Earth warms due to increases in greenhouse gases, some climate models simulate that this cooling effect from clouds will become stronger, canceling out some of the warming from increases in greenhouse gases. Other models, however, simulate that this cooling effect from clouds will become weaker, which would enhance the initial warming due to increases in greenhouse gases.

Our work is consistent with many previous studies showing that the models that warm the most do so mostly because they simulate a reduction in the cooling effect from clouds. Thus, our study indicates that models that simulate the Earth’s recent energy budget with the most fidelity also simulate a larger reduction in the cooling effect from clouds in the future and thus more future warming.

One point worth bringing up is that it is sometimes argued that climate model-projected global warming should be taken less seriously on the grounds that climate models are imperfect in their simulation of the current climate. Our study confirms that there are important model-observation discrepancies and ample room for climate model improvement. However, we show that the models that simulate the current climate with the most skill tend to be the models that project above-average global warming over the remainder of the 21st century. Thus, it makes little sense to dismiss the most severe global warming projections because of model deficiencies. On the contrary, our results suggest that model shortcomings can likely be used to dismiss the least severe projections.

Questions regarding specifics of the study

Below are answers to some specific questions that we anticipate interested readers might have. This discussion is more technical than the text above.

1) How exactly are the constrained projection ranges derived and how do you guard against over-fitting?

In order to assess how well the statistical relationships identified in the study can inform projections of future warming, we employ a technique called cross-validation. In the main text, we show results for ‘hold-one-out’ cross-validation and in the Extended Data, we show results for ‘4-fold’ cross-validation.

Under ‘hold-one-out’ cross-validation, each climate model takes a turn acting as a test model with the remaining models designated as training models. The test model is held out of the procedure and the training models are used to define the statistical relationship between the energy budget features (the predictor variables) and future warming (the predictand). Then, the test model is treated as if it were the observations in the sense that we use the statistical relationship from the training models as well as the test model’s simulated energy budget features to “predict” the amount of future warming that we would expect for the test model. Unlike the true observations, the amount of future warming for the test model is actually known. This means that we can quantify how well the statistical procedure did at predicting the precise amount of future warming for the test model.
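The logic of this hold-one-out procedure can be sketched in a few lines of code (with synthetic stand-in data and a simple linear fit standing in for the study’s actual statistical model):

```python
import numpy as np

rng = np.random.default_rng(1)
n_models = 36
predictor = rng.normal(0.0, 1.0, size=n_models)                        # stand-in energy-budget feature
warming = 4.0 + 0.8 * predictor + rng.normal(0.0, 0.3, size=n_models)  # stand-in future warming (K)

errors = []
for test in range(n_models):
    train = np.delete(np.arange(n_models), test)        # hold the test model out

    # Define the statistical relationship using only the training models.
    slope, intercept = np.polyfit(predictor[train], warming[train], 1)

    # Treat the test model as if it were the observations and "predict" its warming.
    predicted = slope * predictor[test] + intercept

    # Unlike for the real observations, the test model's warming is known,
    # so the prediction error can be quantified.
    errors.append(predicted - warming[test])

print(f"cross-validated RMSE: {np.sqrt(np.mean(np.square(errors))):.2f} K")
```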

We allow every model to act as the test model once so that we can obtain a distribution of errors between the magnitude of statistically-predicted and ‘actual’ future warming. This distribution is used to quantify the constrained projection spread. A visualization of this procedure is shown in the video below:

 

2) Does your constrained spread represent the full range of uncertainty for future warming?

No.

First, it is important to note that most of the uncertainty associated with the magnitude of future warming is attributable to uncertainty in the amount of greenhouse gases humans will actually emit in the future. Our study does not address this uncertainty and instead focuses on the range of warming that we should expect for a given change in radiative forcing.

Secondly, the range of modeled global warming projections for a given change in radiative forcing does not represent the true full uncertainty. This is because there are a finite number of models, they are not comprehensive, and they do not sample the full uncertainty space of various physical processes. For example, a rapid nonlinear melting of the Greenland and Antarctic ice sheets has some plausibility (e.g., Hansen et al. 2016) but this is not represented in any of the models studied here and thus it has an effective probability of zero in both the raw unconstrained and observationally-informed projections. Because of considerations like this, the raw model spread is best thought of as a lower bound on total uncertainty (Caldwell et al., 2016) and thus our observationally-informed spread represents a reduction in this lower bound rather than a reduction in the upper bound.

3) Why did you use these particular energy budget features as your predictor variables?

Overall, we chose predictor variables that were as fundamental and comprehensive as possible while still offering the potential for a straightforward physical connection to the magnitude of future warming. In particular, we did not want to ‘data mine’ in an effort to find any variable with a high across-model correlation between its contemporary value and the magnitude of future warming. Doing so would have very likely resulted in spurious relationships (e.g., Caldwell et al., 2014).

Additionally, we chose to emphasize broad and fundamental predictor variables in order to avoid the ceteris paribus (all else being equal) assumptions that are often invoked when more specific predictor variables are used. For example, it might be the case that models with larger mean surface ice albedo in a given location have larger positive surface ice albedo feedbacks in that location. This would indicate that ceteris paribus, these models should produce more warming. However, it might be the case that there is some across-model compensation from another climate process such that the ceteris paribus assumption is not satisfied and these models do not actually produce more warming than average. For example, maybe models with more mean surface ice albedo in the given location tend to have less mean cloud albedo and less positive cloud albedo feedbacks with warming. Practically speaking, it is the net energy flux that is going to matter for the change in temperature, not the precise partitioning of the net flux into its individual subcomponents. Thus, in an attempt to account for potential compensation across space and across processes, we used globally complete, aggregate measures of the Earth’s energy budget as our predictor variables. In the context of the above example, this would mean using total reflected shortwave radiation as the predictor variable rather than the reflected shortwave radiation from only one subcomponent of the energy budget like surface ice albedo.

To be more specific, we had five primary objectives in mind when we chose the features that serve as predictor variables to inform future warming projections.

Objective 1: The features should have a relatively straight-forward connection to physical processes that will influence the magnitude of projected global warming.

The central premise that underlies our study is that climate models that are going to be the most skillful in their projections of future warming should also be the most skillful in other contexts like simulating the recent past. However, for this premise to be useful, there should be a relatively apparent physical reason why a relationship would exist between how well a model simulates a given variable over the recent past and how much warming that model simulates in the future.

Uncertainty in model-projected global warming originates primarily from differences in how models simulate the Earth’s top-of-atmosphere radiative energy budget and its adjustment to warming. Thus, we specifically chose to use the Earth’s net top-of-atmosphere energy budget and its two most fundamental components (reflected shortwave radiation and outgoing longwave radiation) as predictor variables. This made it much easier to assess why relationships might emerge between the predictor variables and future warming than it would have been if we were open to using any variable in which some positive across-model correlation could be found with future warming.

Objective 2: The features should represent processes as fundamental to the climate system as possible.

We used three attributes of the energy budget variables: the mean climatology, the magnitude of the seasonal cycle, and the magnitude of monthly variability.

We chose these attributes in order to keep the predictors as simple and fundamental as possible. We did not want to ‘data mine’ for more specific features that might have more apparent predictive power, because such predictive power would likely be illusory (e.g., Caldwell et al., 2014). Furthermore, these choices were informed by previous studies which have indicated that seasonal and monthly variability in properties of Earth’s climate system can be useful as predictors of future warming because behavior on these timescales can be related to the behavior of long-term radiative feedbacks. Average (or climatological) predictors were used because the mean state of the climate system can affect the strength of radiative feedbacks, in part by setting how much potential there is for change.
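As a concrete (if simplified) illustration, the three attributes could be computed from a monthly time series roughly as follows; the exact definitions used in the study may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(2)
n_years = 15
month_index = np.arange(n_years * 12)

# Synthetic monthly series of, e.g., reflected shortwave radiation (W m^-2):
# a mean value, a seasonal cycle, and month-to-month noise.
series = (100.0 + 15.0 * np.cos(2.0 * np.pi * month_index / 12.0)
          + rng.normal(0.0, 3.0, size=month_index.size))

# Attribute 1: climatological (long-term average) value.
climatology = series.mean()

# Attribute 2: magnitude of the mean seasonal cycle
# (range of the average annual cycle over the 12 calendar months).
mean_annual_cycle = series.reshape(n_years, 12).mean(axis=0)
seasonal_magnitude = mean_annual_cycle.max() - mean_annual_cycle.min()

# Attribute 3: magnitude of month-to-month variability
# (standard deviation after removing the mean seasonal cycle).
deseasonalized = series - np.tile(mean_annual_cycle, n_years)
monthly_variability = deseasonalized.std()

print(climatology, seasonal_magnitude, monthly_variability)
```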

Objective 3: The features should have global spatial coverage.

The climate system is dynamically linked through horizontal energy transport so modeled processes at a given location inevitably influence modeled processes elsewhere. This means that there may be compensation in a given energy budget field across space. For example, suppose that models with greater mean (climatological) albedo have larger albedo feedbacks. Further, suppose that models with greater mean albedo over location X tend to have less mean albedo over location Y. If we were to restrict our attention to location X, we would be tempted to say that models with more mean albedo at location X should warm more in response to forcing. However, this would only be the case if the ceteris paribus assumption holds.

Since the magnitude of global warming will depend on the global change in albedo, it is important to account for any potential compensation across space. Thus, we required that the features that we used as predictor variables be globally complete fields so that any spatial compensation between processes could be accounted for.

Objective 4: The features should represent the net influence of many processes simultaneously.

In addition to considering compensation within a given process in space, it was also a goal of ours to consider possible compensation amongst processes at a given location. For example, suppose again that models with greater mean (climatological) albedo have larger albedo feedbacks. Further, suppose that models with greater mean cloud albedo tend to have less mean surface snow/ice albedo. If we were to restrict our attention to cloud albedo, we would be tempted to say that models with more mean cloud albedo should warm more in response to forcing. Again, this would only be the case if the ceteris paribus assumption holds.

Since the magnitude of global warming will depend on the net influence of many processes on the energy budget, it is important to account for any potential compensation across processes. Thus, rather than looking at specific subcomponents of the energy budget (e.g., cloud albedo), we use the net energy imbalance and only its two most fundamental components (its shortwave and longwave components) as predictor variables.

Objective 5: The features should be measurable with sufficiently small observational uncertainty.

Our procedure required that the observational uncertainty in the predictor variables was smaller than the across-model spreads. This was essential so that it would be possible to use observations to discriminate between well- and poor-performing models. This objective was met for the top-of-atmosphere energy flux variables that we used from the CERES EBAF satellite product but it would not have been met for, e.g., surface heat fluxes over a large portion of the planet.

4) Why did you choose global temperature response and not a more specific physical metric as your predictand?

Our ultimate goal was to constrain the magnitude of future warming. Others have argued that it is easier to draw physical connections if the predictand is something more specific than global temperature like an aspect of the magnitude of the cloud feedback (e.g., Klein and Hall, 2015). For example, it is more straight-forward to relate the magnitude of the seasonal cycle in cloud albedo at some location to the magnitude of long-term cloud albedo feedback in that location, than it is to relate the magnitude of the seasonal cycle in cloud albedo to the magnitude of global warming. We agree with this. However, models with more-positive cloud albedo feedbacks in a given location will be the models that warm more only if the ceteris paribus assumption holds. It could be the case that models with more-positive cloud albedo feedbacks in a given location tend to have less-positive cloud albedo feedbacks elsewhere or tend to have more-negative feedbacks in other processes.

Thus, it should be recognized that using a specific predictand like the magnitude of the local cloud albedo feedback can make it easier to draw a physical connection between predictor and predictand but this can come at the cost of actually being able to constrain the ultimate variable of interest. Since our goal was to constrain global temperature change, we felt that it was most practical to use global temperature change as our predictand even if this made drawing a physical connection less straightforward.

5) Are your results sensitive to the use of alternative predictors or predictands?

One of the more striking aspects of our study is the qualitative insensitivity of the results to the use of differing predictors and predictands. Our findings of generally reduced spreads and increased mean warming are robust to which of the nine predictor fields are used (or if they are used simultaneously) and robust to which of the ten predictands is targeted (mean warming over the years 2046-2065 and 2081-2100 for RCP 2.6, RCP 4.5, RCP 6.0 and RCP 8.5, as well as equilibrium climate sensitivity, and net feedback strength).

6) Why did you use the statistical technique that you used?

We used Partial Least Squares (PLS) regression to relate simulated features of the Earth’s energy budget over the recent past to the magnitude of model-simulated future warming. PLS regression is applicable to the same class of problems as the more widely used Multiple Linear Regression (MLR). As discussed above, we wanted to relate globally complete energy budget fields (our predictor matrices) to the magnitude of future warming (our predictand vector). Because of the high degree of spatial autocorrelation in the energy budget fields, the columns in the predictor matrix end up being highly collinear, which makes MLR inappropriate for the problem. PLS, however, offers a solution to this issue by creating linear combinations of the columns in the predictor matrix (PLS components) that represent a large portion of the predictor matrix’s variability. The procedure is similar to Principal Component Analysis (PCA), common in climate science, but instead of seeking components that explain the maximum variability in the predictor matrix itself, PLS seeks components that explain the covariability between the predictor matrix and the predictand vector.
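As a bare-bones illustration of PLS regression on this kind of collinear, gridded-predictor problem (using scikit-learn’s PLSRegression and random stand-in fields, not the actual CMIP5 output):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_models, n_gridpoints = 36, 2592            # e.g., a coarse global grid, flattened

# Stand-in predictor matrix: one flattened energy-budget field per model.
# A shared spatial pattern makes the columns highly collinear, as in the real fields.
pattern = rng.normal(0.0, 1.0, size=n_gridpoints)
amplitude = rng.normal(0.0, 1.0, size=(n_models, 1))
X = amplitude * pattern + rng.normal(0.0, 0.5, size=(n_models, n_gridpoints))

# Stand-in predictand: simulated 21st-century global warming for each model.
y = 4.0 + 0.8 * amplitude.ravel() + rng.normal(0.0, 0.2, size=n_models)

# PLS builds a small number of components of X that maximize covariance with y,
# sidestepping the collinearity that would cripple ordinary multiple linear regression.
pls = PLSRegression(n_components=2)
pls.fit(X, y)

# "Predict" the warming implied by an observed field (here just another synthetic draw).
x_obs = (1.2 * pattern + rng.normal(0.0, 0.5, size=n_gridpoints)).reshape(1, -1)
print(pls.predict(x_obs))
```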

7) How do you know that the statistical procedure itself isn’t producing the results that you are seeing?

In conjunction with cross-validation, we perform three additional experiments designed to expose any systematic biases in our methodology. These three experiments involve supplying the statistical procedure with data that should not produce any constraint on the magnitude of future global warming. In one experiment, we substitute the described energy budget features with global surface air temperature (SAT) annual anomalies for each model. Since annual SAT anomaly fields are dominated by chaotic unforced variability, the across-model relationship of these patterns for any given year is unlikely to be related to the magnitude of future warming.

In a second experiment, we substitute the original global warming predictand vector with versions of the vector that have had their values randomly reordered or scrambled. Thus, these scrambled predictand vectors have the same statistical moments as the original vector but any real across-model relationship between predictors and predictands should be eliminated on average.

Finally, in a third experiment, we use both the SAT anomaly fields and the scrambled predictand vectors as the predictors and predictands respectively.

In contrast to the main results between the energy budget predictor fields and the magnitude of future global warming, the three experiments described above all demonstrate no skill in constraining future warming. This indicates that the findings reported in our study are a result of real underlying relationships between the predictors and predictands and are not an artifact of the statistical procedure itself.
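The scrambled-predictand experiment, for example, amounts to a check along these lines (again with synthetic stand-in data and a simple cross-validated error metric):

```python
import numpy as np

def cv_rmse(x, y):
    """Hold-one-out cross-validated RMSE of a linear x -> y prediction."""
    errors = []
    for test in range(x.size):
        train = np.delete(np.arange(x.size), test)
        slope, intercept = np.polyfit(x[train], y[train], 1)
        errors.append(slope * x[test] + intercept - y[test])
    return np.sqrt(np.mean(np.square(errors)))

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=36)                  # stand-in predictor
y = 4.0 + 0.8 * x + rng.normal(0.0, 0.3, size=36)  # stand-in predictand

# With the real predictand the cross-validated error is small; with a randomly
# reordered predictand any genuine across-model relationship is destroyed and
# the apparent skill should vanish (up to sampling noise).
print("real predictand:     ", cv_rmse(x, y))
print("scrambled predictand:", cv_rmse(x, rng.permutation(y)))
```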

8) Why are the values in Table 1 of the paper slightly different from those implied in Figure 1?

Our results generally show warmer central estimates and smaller ranges in the projections of future warming. However, there are multiple ways to compare our results with raw/previous model results. One way would be to compare our results with what is reported in the last IPCC report (Chapter 12 in Assessment Report 5). This is probably the most useful from the perspective of a casual observer and it is the comparison shown in our Table 1. One issue with this comparison is that it is not a perfect apples-to-apples comparison because we used a slightly different suite of climate models than those used in the IPCC report (see our Supplementary Table 1 and IPCC AR5 Chapter 12). Since many casual observers will read the abstract and look at Table 1, we wanted the numerical values in these two places to match. So, the numerical values in the abstract (the ones reported in degrees Celsius) correspond to our results compared to what was reported previously in IPCC AR5.

It is also useful to make the apples-to-apples comparison where our observationally-informed results are compared to raw model results using the exact same models. This is what is done using the “spread ratio” and “prediction ratio” discussed in the paper’s text and shown in Figure 1. These dimensionless values (the ones reported in percent changes) also appear in our abstract. This was done so that the spread ratio and prediction ratio numbers in the abstract would be consistent with those seen in Fig. 1.

So to expand/clarify the numbers reported in the abstract, there are two sets of comparisons that are relevant:

Under the comparison where our results are compared directly with those from IPCC AR5, the observationally informed warming projection for the end of the twenty-first century for the RCP8.5 scenario is about 12 percent warmer (+~0.5 degrees Celsius) with a reduction of about 43 percent in the two standard deviation spread (-~1.2 degrees Celsius) relative to the raw model projections.

Under the comparison where the exact same models are used, the observationally informed warming projection for the end of the twenty-first century for the RCP8.5 scenario is about 15 percent warmer (+~0.6 degrees Celsius) with a reduction of about a third in the two standard deviation spread (-~0.8 degrees Celsius) relative to the raw model projections.


Reducing greenhouse gas emissions helps the economy

Potential solutions to climate change are often framed as a tradeoff between reducing our impact on the environment and harming the economy. More specifically, it is thought that we can reduce our climate-change-related impact by reducing emissions of greenhouse gases but that this will inevitably harm the economy by making energy more expensive. Under this framing, it is natural for people to strongly disagree about climate-policy prescriptions since individuals will inevitably diverge in the relative value they place on environmental vs. economic concerns.

However, the issue of whether or not it is in society’s collective best interest to reduce greenhouse gas emissions is not as complicated and subjective as the above framing makes it seem. In fact, as long as climate change costs the economy anything (and the cost increases steadily with emissions), it is in our collective best interest economically to reduce emissions. In other words, you don’t need to care about any non-monetizable environmental impact in order to be in favor of reducing greenhouse gas emissions. All you have to be in favor of is maximizing global economic production.

Richard Tol has a nice description of this in Chapter 8 of his Climate Economics textbook. The figure below is adapted from his book. It shows the relationship between global economic gain and the level of emissions of greenhouse gases.

The blue line indicates that the higher the level of greenhouse gas emissions, the more total economic benefit accrues to individuals. However, there are diminishing returns to this economic gain, since at some point it becomes unnecessary and thus economically inefficient to emit additional greenhouse gases. In other words, emitting for the sake of emitting will not help the economy. This means that there is a level of emissions that is economically optimal from the standpoint of individuals (represented by the blue dot).

However, there are economic costs (social losses) to greenhouse gas emissions since climate change imposes negative impacts via changes in things like crop yields, mortality, worker productivity, electricity demand and damage from coastal flooding (e.g., Hsiang et al., 2017). On net, the climate change impact on the economy is very likely to be negative, especially beyond 1°C of global warming. These climate-impact-related costs on the economy are illustrated in the figure with the orange line.

Figure: Reducing greenhouse gas emissions helps the economy (adapted from Tol, Climate Economics).

Thus, the true relationship between global collective economic gains and global emissions of greenhouse gases is represented by the grey line, which is the sum of the total private gains and the total social losses. The grey line indicates that emitting at the level that maximizes private economic gains (where the blue dashed line intersects the grey line) does not maximize collective net economic gains. Because there are climate-change-related costs, it is economically optimal to reduce emissions below the point that maximizes only private gains.

When thinking about the cost-benefit analysis of climate change policy, it is tempting to think that it would only be wise to reduce greenhouse gas emissions if the climate-related economic costs of the emissions are greater than the economic benefits. But this is not the case. In the above situation, the benefits of emissions are always greater than the costs, no matter the emissions level (this is presently the case in the real world as well; Tol, 2017). All that is required for emissions reductions to be wise is that there comes a point on the emissions curve where additional emissions increase the costs more than they increase the benefit. This point is represented by the grey dot in the figure.
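A toy numerical version of this argument (with made-up functional forms, not the curves from Tol’s textbook) makes the two optima explicit:

```python
import numpy as np

emissions = np.linspace(0.0, 100.0, 10001)        # emissions level, arbitrary units

# Illustrative curves: private gains rise with diminishing returns,
# while climate-related social losses grow ever more steeply.
private_gain = 8.0 * emissions - 0.05 * emissions**2
social_loss = 0.02 * emissions**2
net_gain = private_gain - social_loss

private_optimum = emissions[np.argmax(private_gain)]  # the "blue dot": ignores climate damages
social_optimum = emissions[np.argmax(net_gain)]       # the "grey dot": maximizes collective gain

print(f"emissions maximizing private gains:    {private_optimum:.1f}")
print(f"emissions maximizing collective gains: {social_optimum:.1f}")
# In this example total benefits exceed total costs at every positive emissions level,
# yet it is still optimal to reduce emissions below the private optimum.
print("benefits exceed costs everywhere:", bool(np.all(net_gain[1:] > 0.0)))
```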

This type of calculation is made more sophisticated in so-called Integrated Assessment Models that bring in the elements of time and explicitly consider the cost of reducing emissions through, say, a carbon tax. These additional elements, however, do not change the main story illustrated above: It is optimal from a collective economics standpoint to reduce greenhouse gas emissions from the point that maximizes only private gain.

It is relevant to point out that Integrated Assessment Models do not tend to show that it is best for the economy to reduce emissions as quickly and stringently as is proposed by the Paris Climate Accord (e.g., Fig. 6 in Nordhaus and Sztorc, 2013). So, our best estimates at this point indicate that being in favor of the Paris Accord implies that one is implicitly in favor of trading off economic gain for reducing environmental impact (or that one is very risk-averse to low-probability, high-impact outcomes; e.g., Cai et al., 2015).

This highlights that the recommended magnitude of emissions reductions will depend on peoples’ subjective value judgments and will thus be controversial. However, the choice of whether or not we should reduce emissions should be much less controversial. Reducing emissions from the level that maximizes private economic gains will result in higher net economic gains for society overall.


Math Models and Climate Change: S.T.E.M. Day talk at Quarry Lane School


The effect of population growth on climate change impacts

Global human population is currently approximately 7.5 billion people and increasing by about 1.1% per year. However, the growth rate itself is decreasing, so world population is expected to level off somewhere near 10-11 billion people around the year 2100. This leveling-off is encouraging from a sustainability perspective, but an extra 3 billion people is still quite a bit for the Earth system to support.

Since everything humans do requires energy and since our energy system still largely relies on fossil-fuels, the number of people on the planet has a large effect on projections of the impact of global warming. This is why having ‘one fewer child’ has been noted as the single most significant lifestyle decision people can make in terms of reducing their carbon footprint.

This raises a couple questions:

1) How much worse do we expect climate change impacts to be due to this 3 billion-person increase in population?

2) How much faster would we have to ‘decarbonize’ society in order to offset the impact of adding these 3 billion people?

We can get first-order answers to questions like these using the type of simplified equations that go into ‘Integrated Assessment Models’ like the Dynamic Integrated model of Climate and the Economy. In these simplified models of the environment and society, the impact of climate change is quantified by the ‘damage function’ which expresses the damage caused by global warming in terms of a fraction of global gross domestic product (GDP). (The damage function is extremely uncertain but its estimates are becoming more empirical).
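For orientation, damage functions in this class of models are typically stylized as a smooth, convex function of global mean warming. A commonly used quadratic form (an illustrative shape, not necessarily the exact calibration behind the figure below) is:

$$ D(T) \;\approx\; \alpha\, T^{2} $$

where D is the climate-related loss expressed as a fraction of global GDP, T is global warming above pre-industrial levels, and α is a small calibrated coefficient, so that damages grow faster than linearly as warming increases.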

Using this framework, we can compare the climate change impacts in a world where population grows from 7.5 billion to 10.5 billion by 2100 (scenario 1, red line below) to a scenario where global population stays steady (scenario 2, blue dashed line below). In these situations, I am considering a middle-of-the-road projection where CO2 emissions continuously drop (panel e) but not fast enough to keep up with the ambitious goals of the Paris Accord.

Figure: Kaya identity factors and climate change impacts for the three scenarios (panels a-h).

Comparing the population (panel a) and the climate change impact (panel h), we can see that the additional 3 billion people increases the negative impact of climate change from approximately 1.3% of GDP (blue dashed in panel h) to 1.5% of GDP (red line in panel h) in 2100. This doesn’t seem like a huge difference, meaning that offsetting this difference might not be an insurmountable challenge. So what else could we change in order to offset this additional 0.2% of GDP damages?

We hear about several possible ways that society could go about decarbonizing; from consuming less, to increasing energy efficiency, to transitioning from fossil fuel to renewable energy sources. In fact, along with changes in population, these things represent all the possible avenues by which humanity can reduce carbon dioxide emissions. These features are described quantitatively with the Kaya Identity which breaks down carbon dioxide emissions as:

CO2 emissions = Population × (GDP / Population) × (Energy / GDP) × (CO2 / Energy)

The four terms of the Kaya Identity are plotted in the first four panels above (panels a-d).

This tells us that in order to offset the climate change impact of the 3 billion-person increase in population (a) we need to have either a smaller increase in consumption (b), a larger decrease in energy intensity (c), or a larger decrease in the amount of CO2 we emit per unit of energy (d).

Globally, material consumption – and its flip side, production (b) – has been increasing by about 1.4%/year and is expected to continue growing over the remainder of the century. Energy efficiency (c) has been increasing (or energy intensity has been decreasing) by about 1-2%/year and this is apparently expected to continue until at least 2040. The carbon intensity of energy (d) – transitioning from fossil fuel to renewable energy sources – is where most policy discussions center, and thus I will single out that term here.

As the baseline, I have assumed that carbon intensity of energy will decrease by 1.5%/year, which is a middle-of-the-road estimate. So the question becomes, how much more of a decrease in carbon intensity do we need in order to offset the increase in population?

The black dashed line (scenario 3) illustrates how this can be achieved. It shows that, in order to have the same impact (panel h), carbon emissions intensity would need to shift from a growth rate of -1.5%/year to -2.1%/year. So, from a climate perspective, the impact of the additional 3 billion people can be offset by what seems like a relatively modest change in the rate of decrease in carbon emissions intensity. Hopefully, this is not an illusion and this level of decarbonization of the energy system proves to be both technologically and politically feasible.
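The flavor of this calculation can be reproduced with a back-of-envelope script that uses the Kaya identity with constant growth rates and cumulative emissions as a crude proxy for climate impact. The rates below are assumptions chosen for illustration, so the result will not exactly match the DICE-based numbers in the figure:

```python
import numpy as np

def cumulative_emissions(pop_rate, consumption_rate, energy_intensity_rate,
                         carbon_intensity_rate, years=83, initial_emissions=36.0):
    """Cumulative CO2 emissions (Gt) implied by the Kaya identity when each factor
    grows or declines at a constant annual rate over `years` years (~2017-2100)."""
    t = np.arange(years)
    annual_factor = ((1.0 + pop_rate) * (1.0 + consumption_rate) *
                     (1.0 + energy_intensity_rate) * (1.0 + carbon_intensity_rate)) ** t
    return float(np.sum(initial_emissions * annual_factor))

# Scenario 1: population grows ~0.4%/yr on average (roughly 7.5 -> 10.5 billion by 2100),
# per-capita consumption +1.4%/yr, energy intensity -1.5%/yr, carbon intensity -1.5%/yr.
growing_population = cumulative_emissions(0.004, 0.014, -0.015, -0.015)

# Scenario 2: identical except population is held constant.
constant_population = cumulative_emissions(0.000, 0.014, -0.015, -0.015)

# Scenario 3: how fast must carbon intensity fall to offset the population growth?
candidate_rates = np.linspace(-0.030, -0.015, 301)
gaps = [abs(cumulative_emissions(0.004, 0.014, -0.015, r) - constant_population)
        for r in candidate_rates]
required_rate = candidate_rates[int(np.argmin(gaps))]

print(f"required carbon-intensity trend: {required_rate * 100:.1f} %/yr")
```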


Can humanity maintain perpetual economic growth?

…to understand modern economic history, you really need to understand just a single word. The word is growth. –from Sapiens (2014)

Humanity is far more materially wealthy today than at any time in the past and our collective wealth continues to explode:

Figure from Wikipedia: World Economy.

This material wealth has made it possible to achieve great increases in human well-being measured, for example, by the increase in lifespan, increase in food availability, reduction in poverty, reduction in infant mortality, reduction in a myriad of diseases, etc.

One question that has occupied the minds of environmentalists for decades, however, is: “Can this economic growth be maintained indefinitely, and what happens when we run up against physical limits to growth?”

Yuval Noah Harari’s book Sapiens has what I consider to be a profound section (Chapter 16: The Capitalists Creed) in which the fundamental aspects of the modern economy are described from first principles. Harari uses a simple example that I think is particularly illuminating in that it shows how our current system not only produces growth but depends on constant growth to survive.

Figure: the contractor, banker, and baker example (panels a-d).

Harari describes a situation involving a general contractor, a banker, and an entrepreneurial baker who is dreaming of starting a bakery.

The contractor has $1,000,000 and decides to deposit it in the bank (‘panel a’ above). The banker thus has $1,000,000 that he would like to make more money with by investing it.

The baker thinks she has a great business plan for a bakery but she cannot make her dream a reality without a loan. She goes to the bank and pitches her plan, requesting a loan of $1,000,000 to construct her bakery. The banker is convinced that the bakery will be a good investment and loans the $1,000,000 to the baker (‘panel b’ above).

The baker then hires the contractor to build the bakery for a price of $1,000,000. The contractor follows through, making the bakery a reality (‘panel c’ above).

Here’s where it gets interesting.

The contractor now has an additional $1,000,000 which he decides to deposit in the bank, giving him a total of $2,000,000 in the bank (‘panel d’ above).

Look what just happened: the $1,000,000 simply exchanged hands from person to person, but seemingly out of nowhere, the contractor ends up with $2,000,000. An extra $1,000,000 was just created out of thin air!

Where does the extra $1,000,000 come from? It comes from the future. Specifically, the difference between the total amount of real money ($1,000,000) and the amount of money that the contractor has in his account ($2,000,000) is attributable to the future income of the baker. In other words, the banker believes that as time goes on, the bakery will be successful and the baker will repay the loan with interest. So eventually, if the contractor asks to withdraw his $2,000,000, the money will be available. The whole system is based on borrowing from the future in order to finance the present.

Harari notes that under current US banking law, banks are allowed to loan ten times more money than they actually have on their books. That means that at any given time, over 90% of banks’ assets haven’t actually been created yet – we are all just collectively assuming that they will be created in the future. This system only works if the future is always able to produce more wealth than the present, allowing the cycle to be maintained. In other words, the system requires perpetual growth or it will collapse.

The economy can grow due to the discovery of new resources, the growth of the labor force, improvements in technology or increased productivity due to increased specialization.

Over the past couple hundred years, these things have outpaced their limiting factors, allowing wealth to explode. But it is difficult to see how this can be sustained forever. We can’t grow the labor force indefinitely, there are not infinite material resources (at least not on earth) and there are limits to what specialization can achieve.

Technology is perhaps the most difficult thing to put a limit on and there are many futurists who expect technology’s exponential growth to continue to the point of a technological singularity.

Despite this, it is difficult to see how we will not eventually reach fundamental limits on growth that will undermine the very basis of the system we have created. Geoffrey West has pointed this issue out in his book Scale (touched on in his TED talk and also described in Bettencourt et al., 2007). He notes that so far, society has continuously been bailed out by innovations that have allowed us to continue to grow exponentially (actually super exponentially):

Figure: super-exponential growth of socioeconomic quantities, sustained by successive innovations (from Scale).

So basically, if the wildest predictions regarding the ‘technological singularity’ do not come to pass, we will eventually run up against limits where growth can no longer be maintained and the foundation of the global economic system will be fundamentally undermined. (This is hardly a new realization and it has spawned entire fields of thought such as Post-growth and Steady-state economics)

It should be noted that environmentalists have been warning about unsustainable growth of human population and resource consumption for a long time. High-profile predictions of collapse due to unchecked growth have been made since at least Malthus’s An Essay on the Principle of Population in 1798. There were also high-profile predictions of imminent disaster in the 1960s and 1970s epitomized by Paul R. Ehrlich’s The Population Bomb in 1968 and the Club of Rome’s The Limits to Growth in 1972.

These writings warned of imminent catastrophes that have not been realized. Technological innovations and the continuous discovery of new resources have allowed the system to maintain itself, resulting in the amazing growth in wealth I highlighted above.

But does the fact that growth has been maintained historically imply that growth can be maintained indefinitely into the future? I see no reason why the latter should follow from the former. Physical limitations of the environment and the fact that material resources are finite would seem to suggest that growth cannot be maintained indefinitely. If that is the case, society should seriously consider how to gradually transition from our current system to a more sustainable one. This certainly seems preferable to having to build a new system from scratch after the old one has collapsed.


Change in temperature variability with warming

We have a new paper out in Nature Climate Change on potential changes in natural unforced variability of global mean surface air temperature (GMST) under global warming.

Paper
News and Views piece
Duke press release

Unforced GMST variability is of the same order of magnitude as current externally forced changes in GMST on decadal timescales. Thus, understanding the precise magnitude and physical mechanisms responsible for unforced GMST variability is relevant both for the attribution of past climate changes to human causes and for the prediction of climate change on policy-relevant timescales.

Much research on unforced GMST variability has used modeling experiments run under “preindustrial control” conditions or has used observed/reconstructed GMST variability associated with cooler past climates to draw conclusions for contemporary or future GMST variability. These studies can implicitly assume that the characteristics of GMST variability will remain the same as the climate warms. In our research, we demonstrate in a climate model that this assumption is likely to be flawed. Not only do we show that the magnitude of GMST variability dramatically declines with warming in our experiment, we also show that the physical mechanisms responsible for such variability become fundamentally altered. These results indicate that the ubiquitous “preindustrial control” climate modeling studies may be limited in their relevance for the study of current or future climate variability.

Another principal finding of our study is that global warming may cause local temperature variability to increase over low-to-mid latitude land regions at the same time that global temperature variability dramatically decreases. This represents a cause for concern, as it is precisely these low-to-mid latitude land regions that are characterized by the highest human population density and biodiversity.


Credibility through honesty about uncertainty

An argument that I often encounter is that uncertainties in climate science shouldn’t be publicly emphasized since that will make it harder to inspire action on proposed climate policies. It is thought that publicly highlighting a lack of knowledge/understanding on some aspect of the science provides ammunition to anthropogenic climate change skeptics and thus should be avoided.

I agree that highlighting scientific uncertainty in isolation is misleading because it fails to convey that scientists are very certain about a lot of fundamentals. However, I believe that being honest about uncertainty, and yes even highlighting it, is precisely what builds the credibility that is necessary for the public to trust science in the first place.

As an example, which string of statements (A or B) inspires more trust/credibility?

A:
A.1 The Earth has a greenhouse effect that keeps it warmer than it would be otherwise
A.2 Humans are burning fossil fuels that increase greenhouse gas concentrations
A.3 These increasing greenhouse gas concentrations are warming the planet
A.4 This warming is likely to increase stress on crops at low latitudes
A.5 Assuming current agricultural practices, this could have a substantial detrimental impact on crop prices and/or food availability in the future but crop and economic models have many known problems and are inherently uncertain

or

B:
B.1 The Earth has a greenhouse effect that keeps it warmer than it would be otherwise
B.2 Humans are burning fossil fuels that increase greenhouse gas concentrations
B.3 These increasing greenhouse gas concentrations are warming the planet dramatically
B.4 This warming will cause global crop failures and food shortages in as little as a decade or two
B.5 These food shortages will create millions of climate refugees that will destabilize the global social/political order

Clearly, A sounds more credible and it is also much more scientifically justifiable. However, I often see climate-action activists preferring to use narratives much more along the lines of B. The argument goes that people need salient, specific, alarming examples in order to get them to pay attention to the issue.

That may be the case, but I would argue that any extra attention that is garnered by using B is more than offset by the risk of losing credibility.

What happens when a dramatic prediction like B.5 does not come true? In the mind of the public, the credibility underpinning the entire chain (B.1, B.2, B.3, and B.4) is undercut. If on the other hand, uncertainty is emphasized where it is appropriate (A), the entire chain is not as vulnerable to being dismissed. In other words, concerns about crops may not materialize as envisioned (A.4 and A.5) but that wouldn’t undermine A.1, A.2 and A.3.

Anthropogenic climate change skeptics distrust scientific conclusions about climate change at least partially because they contend that uncertainties are being underemphasized. Thus, true ‘ammunition’ for anthropogenic climate change skeptics comes from deemphasizing, rather than emphasizing uncertainty.


The fact illusion: Objective truth is elusive in (climate) science

Summary

Science is the best system humans have ever created to address questions about how the world works and no other paradigm is better at moving us towards objective truth. However, contrary to a popular notion, science can rarely be thought of as an authoritative body that simply swoops in and declares various statements as fact or fiction, true or false. Instead, science is a loosely-defined activity, conducted not by a central authority but by a myriad of competing organizations and individuals all over the world. Thus, our collective confidence in various scientific conclusions inevitably has to result from the subjective weighing of evidence rather than deference to a supreme authority.

The central reason why science has a hard time giving us the “facts” that we desire is that the world is immensely complicated and the available data is generally insufficient to allow us to rule out all alternative explanations. Another reason is simply that scientists are people and people can be swayed by internal emotions and/or external social forces that can lead them towards misplaced conclusions.

When it comes to predicting the future of the Earth’s climate and society we know the least about the things that we care the most about. A survey of predictions of the future, even (or especially) by experts, reveals that prediction is perhaps the most difficult task that humans routinely engage in.

All sides of any debate should acknowledge the fundamental limitations of our knowledge and embrace humility. If we were able to do this, it would defuse some of the tensions and move us from impasse to productive discussion that could lead us closer to the truth.

We want simple, we want “facts”

It may essentially be the case that science can tell us that it is a “fact” that water is made of hydrogen and oxygen. There are many other claims that can, for all intents and purposes, be considered facts as well. However, in contemporary discussions on contentious issues, we generally imagine that science can provide us with facts that are just as authoritative as ‘water is made of hydrogen and oxygen’. This is not the case.

The notion that science would be able to provide us with facts and definitive prescriptions on societally-relevant issues is a very attractive one. It is attractive because humans tend to be ‘complex-world-phobic’ and ‘simple-world-philic’. We are all attracted to simple models of the world that are built on unquestionable facts.

As an example of this simple-world-philic aspect of human nature, consider the relationship between people’s perception of the severity of human-caused climate change and their perception of economic benefits of green energy policies. In principle, these should be independent issues and a person’s opinion on one should not necessarily predict their opinion on the other.

Many people argue that human-caused climate change will have a catastrophic negative impact on the earth and thus human well-being and many people also argue that green energy policies (e.g., taxing carbon emissions) will have a catastrophic impact on the economy and thus human well-being. However, these arguments are hardly ever heard coming out of the same person’s mouth (quadrant D below). Similarly, many people argue that the concern over climate change is overblown and many people argue that green energy policies will be a boon to the economy by creating clean-tech jobs. But again, these are hardly ever the same people. Why is this? I think it is just a manifestation of people wanting the world to be simple. We don’t want conflicting information, nuance or shades of gray. We want nice neat conclusions, in other words, we want “facts”.

Figure: quadrants of opinion on the severity of climate change impacts versus the economic effects of green energy policies.

Is science the Vatican or The Wild West?


Since people desire facts and since most important contemporary issues rely to some degree on scientific conclusions it was perhaps inevitable that a notion of “Science” (proper noun with a capital S) would emerge where Science is thought of as the official arbiter of facts. Many people imagine that Science is an authoritative body that can swoop in, perform its magic, and definitively deem some controversial statement to be fact or fiction, true or false. I think of this view of Science as being somewhat Vatican-like in that it conceptualizes science as a hierarchical, centralized authority that should not be questioned.

This Vatican notion of Science not only comes from our innate simple-world-philia but is also taught to us in our education system. People tend to be taught science in school as if science is simply a catalog of conclusions that have been deemed to be true rather than a way of looking at the world and asking questions.

When I was in high school, I held this Vatican-like view of science. The more I learned about science the more confident I was in science’s ability to produce facts (I was moving from A to B in the diagram below).

Once I reached graduate school, however, I started doing science and I realized how messy the process is and just how complicated the real world is. I realized that the idea that Science is magic and can swoop in and declare facts was extremely naïve. This realization has caused me to move from point B to point C in the diagram below.

 

Figure: confidence in science’s ability to produce definitive facts as a function of experience with science (points A, B, and C).

Going through graduate school also revealed to me that the Vatican notion of Science is a very poor model. It turns out that science is not governed by high priests who can authoritatively separate fact from fiction. In reality, the scientific process is decentralized with groups and organizations around the world competing with each other to come up with the best and most complete descriptions of reality. There are not strict rules that constitute how scientific questions should be framed and there are fundamental disagreements between groups and individuals on how to go about asking and answering questions. In this way, science is much more like the lawless American Wild West than it is like the Vatican: decentralized authority, no definitive rules, everyone free to make their own argument.

At first glance, it may seem like a weakness that science is more like a free-for-all than a definitive authority. However, this aspect of scientific inquiry is actually one of its greatest strengths. The world is simply too complicated, and human beings are too cognitively flawed, for some central organization to play the role of arbiter of truth. It is much easier to arrive at the truth through a free market of ideas where everyone is able to put forward their description of the world. Eventually, the descriptions that survive the most attacks from others are the ones that we have the most confidence in.

So, it may be the case that science has an over-rated ability to produce unquestionable facts, but this hardly means that science can’t tell us anything. Many ideas have been shown to be robust to so many attacks that they can be considered to be true beyond any reasonable doubt. For climate science these include the fundamentals that the greenhouse effect exists, humans are increasing greenhouse gases which are warming the planet substantially, and there are substantial negative impacts associated with this warming.

I find that the Vatican notion of Science is commonly held on both the political left as well as the political right in the United States and that this model frames how the politics of climate change get discussed. The primary difference between the right and the left is not on how they conceptualize science but how much legitimacy they give Science. The far left tends to articulate that imminent catastrophe from human-caused climate change is a Scientific fact. Given the perceived legitimacy of Science on the left, it is thought to be either insane or evil to question this (panel A below).

The political far-right, however, does not grant that Science has legitimately earned its authority. Rather, they tend to think of Science as a corrupt organization, built and populated by their ideological opponents (panel B below). In this view, it is quite noble to push back against fraudulent, ideologically-driven Science and doing so makes one comparable to Galileo.

The conversation regarding climate change could be defused quite a bit if people realized how flawed the Vatican notion of Science is. The left would have to discontinue the strategy of using “facts” as a bludgeon to end debate and the right would have to admit that conspiracies and collusion are simply not possible in such a decentralized process (panel C below).

Figure: how the political left and right view the legitimacy of Science (panels A, B, and C).

“Facts” about the future

Many aspects of climate science revolve around projections of what human and natural systems might be like in a century. It would be ideal if we could agree on “facts” about what will happen in the future, but this is just wishful thinking. Much of the problem comes from the fact that the things we care most about suffer from compounding uncertainty.

Also, projections of the future are most prominently promulgated by the relevant experts and, interestingly, uncertainty may be underappreciated by experts precisely because they are experts. People often assume that there is a monotonic relationship between knowledge on a given subject and ability to predict outcomes related to that subject (i.e., people imagine going from A to B to C in the diagram below as they gain expertise/knowledge on a given subject).

Figure: ability to predict outcomes as a function of knowledge/expertise (points A, B, C, and C').

However, a lot of subjects suffer from fundamental uncertainty that you simply cannot get around by increasing your expertise/knowledge. This causes the ability to predict to ‘saturate’ (point B to C’ above). Therefore, an expert may imagine that they are at point C when they are really at point C’ and thus be especially overconfident in their ability to predict.

A salient example of this phenomenon is sports analysts, like the American college basketball analysts featured prominently every March on CBS and ESPN. These analysts almost certainly have much more expertise/knowledge of American college basketball than the general public. After all, they generally grew up playing countless hours of basketball and being exposed to many playing/coaching styles. Most of them played and/or coached college basketball themselves, so they have intimate knowledge of what goes on behind the scenes. Many of them have personal connections with current players or coaches. Finally, it is their full-time job to watch games and discuss various teams’ strengths and weaknesses.

You might think that all of this knowledge/expertise would translate into a supreme ability to predict the Final Four of the NCAA basketball tournament. However, this experiment is conducted every March, and every April we find that the analysts’ predictions scarcely do any better than those of the average person who casually submits a bracket to their office pool.

It turns out that for college basketball a little knowledge goes a long way, and all the additional knowledge that the TV analysts have only moves them from B to C’ rather than from B to C. This is because after a little bit of knowledge you run up against fundamental uncertainty that expertise cannot help you defeat. We see this overconfidence of experts over and over again in a variety of fields (see Nate Silver’s The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t and Philip E. Tetlock’s Expert Political Judgment: How Good Is It? How Can We Know?).

It may be objected that the above analogies are irrelevant to climate science since the climate system is a physical system and thus does not involve the vagaries of human behavior the way a basketball tournament or a political election does. For one thing, the effects of policy prescriptions for the mitigation of climate change do involve predicting human behavior. Regardless, even if we restrict our analysis to predictions of the physical world, it is simply not clear how much trust we can put in these predictions. This is because we have never made, for example, a 75-year-out regional drought forecast before, and thus we have never had the opportunity to be humbled by poor predictions.

All this is to say that predictions of the future have to be taken as only loosely constrained. Of course, uncertainty cuts in both directions: our predictions could turn out to be either more pessimistic or more optimistic than is justified. This uncertainty also applies to any forecast of economic calamity that would supposedly come about due to a proposed green-energy policy.

Ideology and “fact” finding

Science being more like The Wild West than The Vatican makes it much more difficult to corrupt, since there is no central authority from which decrees are issued. However, the collective sociopolitical attitudes of the scientists themselves can influence scientific thinking on a subject and can be another obstacle to finding the truth.

Many climate scientists are open about their environmentalist views. Most of these scientists would claim that their emotion/advocacy fundamentally stems from the science itself (panel A below). Of course, the exact opposite claim is made by opponents of climate action who assert that most mainstream climate scientists are simply liberal activists and that their scientific results spring from their ideology, not the other way around (panel B below).

fig7

The problem with this contrarian view (panel B) is that it ignores two of the most fundamental driving forces in science:

  • The desire to ultimately be proven correct by history. Nobody wants their legacy to be that they were part of an epic scientific blunder.
  • The desire to advance one’s career by demonstrating the errors of other scientists.

Wild West science works as a free-market endeavor in the sense that everybody is essentially free to question everyone else’s work. The best way to make a name for yourself in science is to show that some established conclusion or paradigm is incorrect. Therefore, there is actually a huge incentive to challenge any idea that makes it into the mainstream. This is why you see high-profile research that purports to find little relationship between human caused climate change and various other phenomena:

fig8

Articles with titles containing phrases such as “little change”, “no increase”, or “unlikely to increase” would be impossible under the right wing’s notion of Vatican Science and would be unlikely to appear if scientists felt that their primary purpose was to advance a political agenda.

Having said that, scientists are humans and humans are social beings who are influenced by the zeitgeist of their proximate culture. Unfortunately, as political polarization has increased in the United States, many of us are becoming more and more hermetically sealed into our ideological bubbles where our ideas are not challenged and we only hear from other people who agree with us. I do worry that this phenomenon has the power to influence the collective research output and communication of climate science.

Climate scientists tend to be overwhelmingly on the left end of the political spectrum and tend to be ideologically attracted (or at least unopposed) to policies associated with climate change mitigation. For example, I have seen the following cartoon used by climate scientists to defend their advocacy for climate action policies:

fig9

The way the bullet points are phrased, it is difficult to disagree with them, but in reality there are legitimate concerns about the potential deleterious effects of various proposed climate policies. This is primarily because everything we materially value is made possible through affordable energy, and many climate-action policies have the effect of increasing the price of energy. Much is made of the various hockey-stick-like charts that show negative impacts of climate change, but an honest discussion requires grappling with the positive hockey sticks that affordable energy from fossil fuels has produced (the remarkable increase in lifespan, the increase in food availability, and the reductions in poverty, infant mortality, and a myriad of diseases, etc.).

The primary problem with the above cartoon is that climate-action policies might not lead to a “better world” by themselves (i.e., regardless of the consequences of human-caused global warming). Thus, advocates of this cartoon seem unaware that part of their sympathy for climate-action policies could be due to their ideological alignment with progressive policies in general. People on the political left tend to have more biophilic tendencies than those on the political right. They tend to be more distrustful of corporations and of capitalism as a means of distributing resources. Finally, they tend to have more faith in the ability (and right) of central governments to regulate the private sector and to redistribute resources. None of these values has anything to do with the physical science of climate change, but they make it more likely that people on the political left will support climate-action policies that protect the environment via government regulation.

The fact that climate scientists tend to be overwhelmingly on the left end of the political spectrum then leads to a social situation in which it is very easy to agree with the political-ideological norms of the left and more taboo to question the severity of human-caused climate change. It means that scientists will feel some pressure (at least subconsciously) that their work should support the “good side”.

The attack on climate science from explicitly political organizations and individuals makes the situation even worse. In a purely Wild West science, scientists would feel no reservations about attacking the mainstream scientific view within a community (panel A below). Constant attack from the inside is what makes the mainstream view robust and gives us confidence that it is indeed correct. However, attacks on climate science from prominent politicians and outspoken political commentators have the effect of reorienting climate scientists such that they feel it is their duty to defend (rather than attack) the mainstream view (panel B below).

This is an unfortunate situation because scientists are smart enough to construct persuasive arguments in defense of most things. Thus, orienting scientists in defense of the mainstream view gives the outward appearance of enhanced legitimacy, but in reality it makes us less confident that the conclusions are correct (because the conclusions are being subjected to fewer scientific attacks).

fig10

 

Thus, in Wild West science, there is a tug-of-war between wanting to conform to social norms/ideological convictions and the desire to advance your career by challenging established conclusions (figure below). I believe that the desire to be proven correct and to advance one’s career by showing others to be wrong is ultimately stronger than the desire to conform to the “good” side, and this is ultimately why we can have confidence in the findings of science in general and climate science in particular.

fig11

To conclude, there is no Vatican Science that can swoop in and declaratively put controversial statements into binary categories of true or false. This is especially the case for predictions about the future. Science has its flaws (as all human endeavors do) but its decentralized nature and its incentive structure make it very difficult to corrupt. Ultimately, we must all exercise our own best judgment when weighing evidence and trying to better understand the world. If we could appreciate these things, it might make our public conversations about controversial issues a little bit less toxic and a little bit more productive.


Climate Science for People Really in a Hurry 2 – Economics

This is a video summary of the findings in Burke et al., 2015 regarding the relationship between temperature and Gross Domestic Product (GDP), and of how this relationship affects our projections of how global warming will impact the economy.

Video Series:
Climate Science for People Really in a Hurry 1 – Basics


Modeled vs. observed global temperature: with and without ‘makeup’

[Figure: modeled and observed global temperature, with and without ‘makeup’ (panels a and b)]

Global average surface air temperature is one of the most well-recognized metrics of contemporary climate change – hence the term ‘global warming’. One reason for this is that many impacts of climate change are expected to be proportional to the amount of global average warming that occurs over the next several decades to centuries. This is why, for example, the Paris Accord explicitly states climate change mitigation goals in terms of global average temperature.

Projections of global temperature are often based on the output from physical global climate model simulations and thus there is great interest in the agreement (or lack thereof) between modeled and historically observed global temperature.

Official reports (like the IPCC assessment reports) tend to present the comparison of modeled and observed global temperature in a format like that shown in ‘panel a’ above. This plot shows the model mean and the model spread (+/- 2 standard deviations) of global average temperature since 1861 (black) compared to observations (yellow). Various possible future scenarios are also shown (red, magenta, blue, cyan), which differ due to different assumptions about how much greenhouse gas humanity might emit.

In ‘panel a’ there appears to be quite a bit of agreement between modeled and observed global temperature from 1861 to the present and thus this seems to provide compelling visual support for climate models’ ability to simulate/project global average temperature in the future.

However, I think it is important to point out that part of this visual support comes from some nontrivial ‘makeup’ being applied to the comparison. Firstly, the temperature time series are all expressed as anomalies relative to a 1986-2005 baseline period (and then re-zeroed to be relative to preindustrial temperatures). This has the visual effect of forcing the models to essentially agree with each other, and with observations, over the 1986-2005 period. Secondly, the spread around the model mean is calculated after the anomalies are taken, which has the visual effect of minimizing the range of modeled temperatures. Overall, this results in an impressively small model spread around observations over the historical record and a relatively constrained spread for each of the individual future projections.

The raw model output, without this ‘makeup’ applied, is shown in ‘panel b’ above. In ‘panel b’, the y-axis is the absolute value of simulated and observed global average temperature in Kelvin. It is still the case that the observations lie more-or-less in the middle of the model simulations, but it can now be seen that the range of simulated values for absolute global average temperature is fairly large (~2.5°C). In fact, this range is approximately as large as the amount of warming that we might expect over the remainder of the 21st century.
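
To make the effect of this ‘makeup’ concrete, here is a minimal sketch of both ways of presenting an ensemble. The numbers are synthetic stand-ins (the number of models, the absolute offsets, and the warming trend are illustrative, not the actual CMIP output behind the figure); the point is simply that expressing each model as an anomaly relative to its own 1986-2005 mean collapses the large spread in absolute temperature:

import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1861, 2006)        # historical period shown in 'panel a'
n_models = 30

# Each synthetic "model" shares a common warming signal but has its own absolute
# offset (a stand-in for the ~2.5 K spread in absolute global temperature).
signal = 0.006 * (years - years[0])                  # illustrative warming trend (K)
offsets = rng.normal(287.0, 0.8, n_models)           # illustrative per-model baselines (K)
noise = rng.normal(0.0, 0.1, (n_models, years.size))
temps = offsets[:, None] + signal[None, :] + noise   # absolute temperatures, 'panel b' style

# Spread of the raw (absolute) output, as in 'panel b'
abs_spread = temps.max(axis=0) - temps.min(axis=0)

# 'Panel a' style: express each model as an anomaly relative to its own 1986-2005 mean
baseline = (years >= 1986) & (years <= 2005)
anoms = temps - temps[:, baseline].mean(axis=1, keepdims=True)
anom_spread = anoms.max(axis=0) - anoms.min(axis=0)

print(f"mean ensemble spread, absolute temperatures: {abs_spread.mean():.2f} K")
print(f"mean ensemble spread, anomalies vs 1986-2005: {anom_spread.mean():.2f} K")
# The anomaly spread is far smaller even though the underlying simulations are unchanged,
# which is the visual effect of the re-baselining described above.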

Does this matter? From a visual perspective, ‘panel b’ seems to inspire less confidence in our projections of future warming than ‘panel a’ does. However, the relevant question is: do model biases in the absolute value of temperature have a strong relationship with potential model biases in the projection of temperature change?

It seems that the magnitude of model biases in absolute global average temperature does have some relationship with the magnitude of modeled future warming. However, these biases do not matter so much that they would seriously undermine the model projections over the next century or so (see the discussion around Fig. 9.42a in Chapter 9 of Working Group I’s contribution to the 5th IPCC Report, and the discussion around Fig. 2 and Appendix B of Hawkins and Sutton, 2016). Therefore, I think it is reasonable to compare modeled and observed temperature change the way it is done in ‘panel a’, as long as we don’t completely forget about ‘panel b’.

 
