Does the IPCC say we have until 2030 to avoid catastrophic global warming?

In late 2018 the Intergovernmental Panel on Climate Change (IPCC) released a report on the impacts associated with global warming of 1.5°C (2.7°F) above preindustrial levels (as of 2019 we are at about 1.0°C above preindustrial levels) as well as the technical feasibility of limiting global warming to such a level. The media coverage of the report immediately produced a meme that persists to this day. The meme is some variation of the following:

The IPCC concluded that we have until 2030 (or 12 years) to avoid catastrophic global warming

Below is a sampling of headlines from coverage that propagated this meme.

However, these headlines are essentially purveying a myth. I think it is necessary to push back against this meme for two main reasons:

1) It is false.

2) I believe that spreading this messaging will ultimately undermine the credibility of the IPCC and climate science more generally.

Taking these two points in turn:

1) The IPCC did not conclude that society has until 2030 to avoid catastrophic global warming.

First of all, the word “catastrophic” does not appear in the IPCC report. This is because the report was not tasked with defining a level of global warming which might be considered to be catastrophic (or any other alarming adjective). Rather, the report was tasked with evaluating the impacts of global warming of 1.5°C (2.7°F) above preindustrial levels, and comparing these to the impacts associated with 2.0°C (3.6°F) above preindustrial levels as well as evaluating the changes to global energy systems that would be necessary in order to limit global warming to 1.5°C.

In the report, the UN has taken the strategy of defining temperature targets and then evaluating the impacts at these targets rather than asking what temperature level might be considered to be catastrophic. This is presumably because the definition of a catastrophe will inevitably vary from country to country and person to person, and there is not robust evidence that there is some kind of universal temperature threshold where a wide range of impacts suddenly become greatly magnified. Instead, impacts seem to be on a continuum where they simply get worse with more warming.

So what did the IPCC conclude regarding the impacts of global warming of 1.5°C? The full IPCC report constituted an exhaustive literature review but the main conclusions were boiled down in the relatively concise summary for policymakers. There were six high-level impact-related conclusions:

So to summarize the summary, the IPCC’s literature review found that impacts of global warming at 2.0°C are worse than at 1.5°C.

The differences in tone between the conclusions of the actual report and the media headlines highlighted above are rather remarkable. But can some of these impacts be considered to be catastrophic even if the IPCC doesn’t use alarming language? Again, this would depend entirely on the definition of the word catastrophic.

If one defines catastrophic as a substantial decline in the extent of arctic sea ice, then global warming was already catastrophic a couple of decades ago. If global warming intensified a wildfire to the extent that it engulfed your home (whereas it would not have without global warming), then global warming has already been catastrophic for you.

However, I do not believe that changes in arctic sea ice extent and marginal changes in damages from forest fires (or droughts, floods etc.) are what most people envision when they think of the word catastrophic in this context. I believe that the imagery evoked in most people's minds is much more at the scale of a global apocalyptic event. This idea is exemplified in Michael Barbaro’s question about the IPCC report that he asked on The New York Times’ The Daily:

“If we overshoot, if we blow past 1.5°C and 2°C degree warming, is it possible at that point that we’ve lost so much infrastructure, so much of the personnel and the resources required to fix this that it can’t be done anymore? Will there be enough of the world left to implement this in a way that could be effective?”

-Michael Barbaro, New York Times, The Daily, 10/19/2018

It is also articulated in a tweet from prominent climate science communicator Eric Holthaus:

If catastrophe is defined as global-scale devastation to human society then I do not see how it could be possible to read the IPCC report and interpret it as predicting catastrophe at 1.5°C or 2°C of warming. It simply makes no projections approaching such a level of alarm.

2) Undermining credibility.

Some will object to my pointing out that the IPCC has not predicted a global-scale societal catastrophe by 2030. They will inevitably suggest that whether or not the meme is strictly true, it is useful for motivating action on climate policy and that it is therefore counterproductive to push back against it. I could not disagree more with this line of thinking.

The point of a document like the IPCC report should be to inform the public and policy makers in a dispassionate and objective way, not to make a case in order to inspire action. The fundamental reason for trusting science in general (and the IPCC in particular) is the notion that the enterprise will be objectively evaluating our best understanding of reality, not arguing for a predetermined outcome. I believe that the IPCC report has adhered to the best scientific standards but the meme of a predicted catastrophe makes it seem as though it has veered into full advocacy mode – making it appear untrustworthy.

An on-the-record prediction that may come back to haunt us

Apart from the inaccurate characterization that the IPCC has projected a catastrophe at 1.5°C, the other potentially harmful aspect of the media headlines above is that they put a timetable on the catastrophe that is very much in the near-term (2030). The year 2030 comes from the idea that we could first cross the 1.5°C threshold (at the annual mean level) in 2030, as is articulated in the report:

Now, if we immediately implement the global climate policies necessary to avoid 1.5°C of warming, then the prediction of a catastrophe will never be put to the test. However, as the IPCC report makes clear, achieving the cuts in emissions necessary to limit global warming to 1.5°C represents a truly massive effort:

Given that this effort would likely be massively expensive and represents a large technical challenge, it is unlikely to occur. This means that we are likely to pass 1.5°C of warming sometime in the 2030s, 2040s or 2050s. At this point – assuming that nothing resembling what most people would consider to be a global societal catastrophe has occurred – the catastrophe meme associated with the 2018 IPCC report will be dredged up and used as ammunition against the credibility of climate science and the IPCC. I fear that it will be used to undermine any further scientific evaluation of impacts from global warming.

In my experience, the primary reason that people skeptical of climate science come to their skepticism is that they believe climate scientists are acting as advocates rather than dispassionate evaluators of evidence. They believe climate scientists are acting as lawyers, making the case for climate action, rather than judges objectively weighing facts. The meme of a global catastrophe by 2030 puts a prediction on the record that is likely to be proven false, which would reinforce this notion of ‘climate scientists as untrustworthy activists’ and harm the credibility of climate science thereafter.

Posted in Uncategorized | 14 Comments

California fires and Global Warming’s influence on lack of moisture

This autumn has been very dry in California, and this dryness has undoubtedly increased the likelihood of the deadly wildfires that the state is seeing.

When assessing the influence of global warming (from human burning of fossil fuels) on these fires, it is relevant to look at climate model projections of extremely dry autumn conditions in California. Below is an animation that uses climate models to calculate the odds that any given November in California will be extremely dry.

Here, extremely dry is defined as a California statewide November that is characterized by soil moisture content three standard deviations below the mean, where the mean and standard deviation are defined over the period 1860-1900.

We can see that these extremely dry Novembers in California go from being exceptionally rare early in the period (by definition), to being more likely now (~1% chance), and much more likely by the end of the century (~7% chance).

In terms of an odds ratio, this would indicate that “extremely dry” conditions are approximately 7 times more likely now than they were at the end of the 19th century and that these “extremely dry” conditions would be approximately 50 times more likely at the end of the century under an RCP8.5 scenario.

 

*chance is calculated by looking at the frequency of California Novembers below the 3 standard deviation threshold across all CMIP5 ensemble members (70) and using a moving window of 40 years.
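For concreteness, here is a minimal Python sketch of that frequency calculation. The soil-moisture array is synthetic stand-in data with a made-up drying trend, not actual CMIP5 output, and the variable names are mine; only the structure of the calculation (3-standard-deviation threshold relative to 1860-1900, 40-year moving window pooled across 70 ensemble members) follows the description above.

```python
import numpy as np

# Synthetic stand-in for CMIP5 California statewide November soil moisture,
# shape (ensemble members, years 1860-2100). Replace with real model output.
rng = np.random.default_rng(0)
years = np.arange(1860, 2101)
n_members = 70
drying_trend = -0.006 * (years - 1860)  # illustrative forced drying, not a real estimate
soil_moisture = drying_trend + rng.normal(0.0, 1.0, size=(n_members, years.size))

# "Extremely dry" threshold: 3 standard deviations below the 1860-1900 mean
baseline = soil_moisture[:, (years >= 1860) & (years <= 1900)]
threshold = baseline.mean() - 3.0 * baseline.std()

# Chance of an extremely dry November within a 40-year moving window,
# pooling all ensemble members together
chance = {}
for yr in range(1880, 2081):
    in_window = (years >= yr - 20) & (years < yr + 20)
    chance[yr] = (soil_moisture[:, in_window] < threshold).mean()

# Odds ratio of "now" relative to the end of the 19th century
odds_ratio = chance[2018] / max(chance[1880], 1e-6)
print(f"chance now: {chance[2018]:.2%}; odds ratio vs. late 1800s: {odds_ratio:.1f}x")
```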

Posted in Uncategorized | 1 Comment

Revisiting a Claim of Reduced Climate Sensitivity Uncertainty

Nature has published a Brief Communications Arising between us (Patrick Brown, Martin Stolpe, and Ken Caldeira) and Peter Cox, Femke Nijsse, Mark Williamson and Chris Huntingford regarding their paper published earlier this year, “Emergent constraint on equilibrium climate sensitivity from global temperature variability” (Cox et al. 2018).


Summary

  • Cox et al. (2018) used historical temperature variability to argue for a large reduction in the uncertainty range of climate sensitivity (the amount of global warming that we should expect from a doubling of atmospheric carbon dioxide) and a lowering of the central estimate of climate sensitivity.
  • We show that the use of alternative methods, which we argue are better justified theoretically, suggests that historical temperature variability provides, at best, only a small reduction in climate sensitivity uncertainty and that it does not robustly lower or raise the central estimate of climate sensitivity.

 


Background

The Cox et al. (2018) paper is about reducing uncertainty in the amount of warming that we should expect the earth to experience for a given change in greenhouse gasses. Their abstract gives a nice background and summary of their findings:

Equilibrium climate sensitivity (ECS) remains one of the most important unknowns in climate change science. ECS is defined as the global mean warming that would occur if the atmospheric carbon dioxide (CO2) concentration were instantly doubled and the climate were then brought to equilibrium with that new level of CO2. Despite its rather idealized definition, ECS has continuing relevance for international climate change agreements, which are often framed in terms of stabilization of global warming relative to the pre-industrial climate. However, the ‘likely’ range of ECS as stated by the Intergovernmental Panel on Climate Change (IPCC) has remained at 1.5–4.5 degrees Celsius for more than 25 years. The possibility of a value of ECS towards the upper end of this range reduces the feasibility of avoiding 2 degrees Celsius of global warming, as required by the Paris Agreement. Here we present a new emergent constraint on ECS that yields a central estimate of 2.8 degrees Celsius with 66 per cent confidence limits (equivalent to the IPCC ‘likely’ range) of 2.2–3.4 degrees Celsius.

Thus, the Cox et al. (2018) study found a (slight) reduction in the central estimate of climate sensitivity (2.8ºC relative to the oft-quoted central estimate of 3.0ºC) and a large reduction in the uncertainty for climate sensitivity, as they state in their press release on the paper:

While the standard ‘likely’ range of climate sensitivity has remained at 1.5-4.5ºC for the last 25 years the new study, published in leading scientific journal Nature, has reduced this range by around 60%.

Combining these two results drastically reduces the likelihood of high values of climate sensitivity. This finding was highlighted by much of the news coverage of the paper. For example, here’s the beginning of The Guardian’s story on the paper:

Earth’s surface will almost certainly not warm up four or five degrees Celsius by 2100, according to a study which, if correct, voids worst-case UN climate change predictions.

A revised calculation of how greenhouse gases drive up the planet’s temperature reduces the range of possible end-of-century outcomes by more than half, researchers said in the report, published in the journal Nature.

“Our study all but rules out very low and very high climate sensitivities,” said lead author Peter Cox, a professor at the University of Exeter.

 


Our Comment

I was very interested in the results of Cox et al. (2018) for a couple of reasons.

First, just a few weeks prior to the release of Cox et al. (2018) we had published a paper (coincidentally, also in Nature) which used a similar methodology but produced a different result (our study found evidence for climate sensitivity being on the higher end of the canonical range).

Second, the Cox et al. (2018) study is based on an area of research that I had some experience in: the relationship between short-term temperature variability and long-term climate sensitivity. The general idea that these two things should be related has been around for a while (for example, it’s covered in some depth in Gerard Roe’s 2009 review on climate sensitivity). But in 2015 Kevin Bowman suggested to me that the Fluctuation-Dissipation Theorem might be useful for narrowing uncertainty in climate sensitivity with short-term temperature variability. It just so happens that this is the same theoretical foundation that underlies the Cox et al. (2018) results. Following Bowman’s suggestion, I spent several months looking for a useful relationship but I was unable to find one.

Thus, when Cox et al. (2018) was published, I was naturally curious about the specifics of how they arrived at their conclusions both because their results diverged from that of our related study and because they used a particular theoretical underpinning that I had previously found to be ineffectual.

I worked with Martin Stolpe and Ken Caldeira to investigate the Cox et al. (2018) methodology in some detail and to conduct a number of sensitivity tests of their results. We felt that our experiments pointed to some issues with aspects of the study’s methodology, and that led us to submit the aforementioned comment to Nature.

In our comment, we raise two primary concerns.

First, we point out that most of the reported 60% reduction in climate sensitivity uncertainty originates not from the constraint itself but from the choice of the baseline that the revised uncertainty range is compared to. Specifically, the large reduction in uncertainty depends on their choice to compare their constrained uncertainty to the broad IPCC ‘likely’ range of 1.5ºC-4.5ºC rather than to the ‘likely’ range of the raw climate models used to inform the analysis. This choice would be justifiable if the climate models sampled the entire uncertainty range for climate sensitivity but this is not the case. The model ensemble happens to start with an uncertainty range that is about 45% smaller than the IPCC-suggested ‘true’ uncertainty range (which incorporates additional information from e.g., paleoclimate studies). Since the model ensemble embodies a smaller uncertainty range than the IPCC range, one could simply take the raw models, calculate the likely range of climate sensitivity using those models, and claim that this calculation alone “reduces” climate sensitivity uncertainty by about 45%. We contend that such a calculation would not tell us anything meaningful about true climate sensitivity. Instead, it would simply tell us that the current suite of climate models doesn’t adequately represent the full range of climate sensitivity uncertainty.

Thus, even if the other methodological choices of Cox et al. (2018) are accepted as is, close to three-quarters of the reported 60% reduction in climate sensitivity uncertainty is attributable to starting from a situation in which the model ensemble samples only a fraction of the full uncertainty range in climate sensitivity.
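The arithmetic behind that “close to three-quarters” is simple enough to write out explicitly (the 45% and 60% figures come from the discussion above; everything else follows from them):

```python
# Range widths expressed as fractions of the IPCC 'likely' range (1.5-4.5 C)
ipcc_width = 1.00
raw_model_width = 0.55 * ipcc_width      # model ensemble range ~45% narrower than IPCC
constrained_width = 0.40 * ipcc_width    # Cox et al.'s ~60% reduction relative to IPCC

# Reduction attributable to the constraint itself, relative to the raw models
reduction_vs_models = 1.0 - constrained_width / raw_model_width   # ~27%

# Share of the headline 60% reduction that comes from the choice of baseline
baseline_share = 0.45 / 0.60                                      # 75%, i.e., ~3/4
print(f"{reduction_vs_models:.0%} reduction vs. raw models; "
      f"{baseline_share:.0%} of the headline reduction comes from the baseline choice")
```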

The second issue that we raise has to do with the theoretical underpinnings of the Cox et al. (2018) constraint. Specifically, the emergent constraint presented by Cox et al. (2018) is based on the Fluctuation-Dissipation Theorem, which “relates the mean response to impulsive external forcing of a dynamical system to its natural unforced variability” (Leith, 1975).

In this context, climate sensitivity represents the mean response to external forcing, and the measure of variability should be applied to unforced (or internally generated) temperature variability. Cox et al. (2018) state that their constraint is founded on the premise that persistent non-random forcing has been removed:

If trends arising from net radiative forcing and ocean heat uptake can be successfully removed, the net radiative forcing term Q can be approximated by white noise. Under these circumstances, equation (1) … has standard solutions … for the lag-one-year autocorrelation of the temperature.

They suggest that linear detrending with a 55-year moving window may be optimal for the separation of forced trends from variability:

Figure 4a shows the best estimate and 66% confidence limits on ECS as a function of the width of the de-trending window. Our best estimate is relatively insensitive to the chosen window width, but the 66% confidence limits show a greater sensitivity, with the minimum in uncertainty at a window width of about 55 yr (as used in the analysis above). As Extended Data Fig. 3 shows, at this optimum window width the best-fit gradient of the emergent relationship between ECS and Ψ (= 12.1) is also very close to our theory-predicted value of 2 Q2×CO2/σQ (= 12.2). This might be expected if this window length optimally separates forced trend from variability.

Linearly detrending within a moving window is an unconventional way to separate forced from unforced variability and we argue in our comment that it is inadequate for this purpose. (In their reply to our comment Cox et al. agree with this but they contend that mixing forced and unforced variability does not present the problem that we claim it does.)
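To make the methodological point concrete, the sketch below applies both approaches to a synthetic annual global temperature series and computes the Ψ variability metric of Cox et al. (2018) in each case. The 55-year moving-window linear detrending follows their description; the alternative shown (subtracting an independent estimate of the forced response, here the known synthetic forced signal, in practice something like a multi-model mean) is one of the more conventional options we have in mind. The data and numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1880, 2017)

# Synthetic annual GMST anomalies: a smooth forced warming signal plus red-noise variability
forced = 0.8 * (years - years[0]) / (years[-1] - years[0])
noise = np.zeros(years.size)
for t in range(1, years.size):
    noise[t] = 0.5 * noise[t - 1] + rng.normal(0.0, 0.09)
gmst = forced + noise

def psi(t_anom):
    """Variability metric of Cox et al. (2018): psi = sigma_T / sqrt(-ln(alpha_1))."""
    sigma = t_anom.std()
    alpha1 = np.corrcoef(t_anom[:-1], t_anom[1:])[0, 1]
    return sigma / np.sqrt(-np.log(alpha1))

def window_detrend(series, width=55):
    """Residual at the center of a linear fit within a moving window (Cox et al. approach)."""
    half = width // 2
    resid = []
    for i in range(half, series.size - half):
        seg = series[i - half:i + half + 1]
        x = np.arange(seg.size)
        fit = np.polyval(np.polyfit(x, seg, 1), x)
        resid.append(seg[half] - fit[half])
    return np.array(resid)

resid_window = window_detrend(gmst)     # (a) 55-yr moving-window linear detrending
resid_forced_removed = gmst - forced    # (b) removing an estimate of the forced response

print("psi, 55-yr window detrending:  %.3f" % psi(resid_window))
print("psi, forced response removed:  %.3f" % psi(resid_forced_removed))
```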

Using more conventional methods to remove forced variability, we find that the Cox et al. (2018) constraint produces central estimates of climate sensitivity that lack a consistent sign shift relative to their starting value (i.e., it is not clear if the constraint shifts the best estimate of climate sensitivity in the positive or negative direction).

We also find that the more complete removal of forced variability produces constrained confidence intervals on climate sensitivity that range from being no smaller than the raw model confidence intervals used to inform the analysis (Fig. 1d and 1e) to being about 11% smaller than the raw model range (Fig. 1f). This is compared to the 60% reduction in the size of the confidence interval reported in Cox et al., (2018).


Figure 1 | Comparison of central estimate and ‘likely’ range (>66%) of Equilibrium climate sensitivity over a variety of methodologies and for four observational datasets. Average changes (across the four observational datasets) in the central estimates of climate sensitivity are reported within the dashed-line range, average changes in uncertainty ranges (confidence intervals) are reported at the bottom of the figure, and r2 values of the relationship are reported at the top of the figure. Results corresponding to observations from GISTEMP, HadCRUT4, NOAA and Berkeley Earth are shown in black, red, blue and green respectively. Changes in uncertainty are reported relative to the raw model range (±0.95 standard deviations across the climate sensitivity range of CMIP5 models) used to inform the analysis (b) rather than relative to the broader IPCC range used as the baseline in Cox et al. (2018) (a).

 

Overall, we argue that historical temperature variability provides, at best, a weak constraint on climate sensitivity and that it is not clear if it suggests a higher or lower central estimate of climate sensitivity relative to the canonical 3ºC value.

For more details please see the original Cox et al. (2018) paper, our full comment and the reply to our comment by Cox et al.

Posted in Climate Change | 2 Comments

Signal, Noise and Global Warming’s Influence on Weather

Human-caused climate change from increasing greenhouse gasses is expected to influence many weather phenomena including extreme events. However, there is not yet a detectable long-term change in many of these extreme events, as was recently emphasized by Roger Pielke Jr. in The Rightful Place of Science: Disasters and Climate Change.

This means that we have a situation where there is no detectable long-term change in, for example, heavy rainfall from tropical cyclones, and yet we have studies that conclude that human-caused climate change made Hurricane Harvey’s rainfall 15% heavier than it would have been otherwise. This is not actually a contradiction, and the video below shows why.

Posted in Climate Change | 6 Comments

The leverage of the current moment on the long-term trajectory of the climate

Below is a talk I gave at the “Bay Area Regional Climate Emergency Town Hall” in Berkeley, CA on August 24th, 2018 titled “The leverage of the current moment on the long-term trajectory of the climate”.

 

 

Posted in Uncategorized | 1 Comment

Contemporary Global Warming placed in geological context

Below is a rough comparison of contemporary Global Warming and estimates of past temperature change. This is a visualization in the vein of this plot on Wikipedia. Uncertainties increase substantially as estimates go back further in time. Time resolution also decreases further back in time, so much of the high-frequency climate variability seen more recently would presumably also exist in the more distant past but is not detectable. Sources of data are below.

Hansen, J.E., and M. Sato (2012) Paleoclimate implications for human-made climate change. In Climate Change: Inferences from Paleoclimate and Regional Aspects. A. Berger, F. Mesinger, and D. Šijački, Eds. Springer, 21-48, doi:10.1007/978-3-7091-0973-1_2.

Hansen, J., R. Ruedy, M. Sato, and K. Lo (2010) Global surface temperature change, Rev. Geophys., 48, RG4004, doi:10.1029/2010RG000345.

Mann, M. E, Z. Zhang, M. K. Hughes, R. S. Bradley, S. K. Miller, S. Rutherford, Fenbiao Ni (2008) Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia, PNAS, 105 (36) 13252-13257; doi: 10.1073/pnas.0805721105.

Marcott, S. A., J. D. Shakun, P. U. Clark, A. C. Mix (2013) A Reconstruction of Regional and Global Temperature for the Past 11,300 Years, Science, 339, 6124, 1198-1201, doi:10.1126/science.1228026.

Lisiecki, L. E., and M. E. Raymo (2005), A Pliocene‐Pleistocene stack of 57 globally distributed benthic δ18O records, Paleoceanography, 20, PA1003, doi:10.1029/2004PA001071.

Shakun, J. D., P. U. Clark, F. He, S. A. Marcott, A. C. Mix, Z. Liu, B. Otto-Bliesner, A. Schmittner & E. Bard (2012) Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation, Nature 484, 49–54, doi:10.1038/nature10915.

Winkelmann, R., A. Levermann A. Ridgwell, K. Caldeira (2015) Combustion of available fossil fuel resources sufficient to eliminate the Antarctic Ice Sheet. Science Advances, 2015: 1, 8, e1500589 doi:10.1126/sciadv.1500589.

Zachos, J., M. Pagani, L. Sloan, E. Thomas, and K. Billups, 2001: Trends, rhythms, and aberrations in global climate 65 Ma to present. Science, 292, 686-693. doi:10.1126/science.1059412

Posted in Climate Change | 7 Comments

Fundamental economics and the costs and benefits of addressing climate change

This post is a high-level summary of the central ideas underlying the evaluation of the economic costs and benefits of addressing climate change. One of the primary goals here is to explore, at a fundamental level, what is actually meant by the terms “costs of climate change” and “costs of addressing climate change”. Another goal is to gently push back against overly-simplified narratives that frame climate change policy as either a lose-lose scenario (i.e., it harms the economy with no benefit to society) or an extreme win-win scenario (i.e., it stimulates the economy by creating green jobs while saving civilization from collapse). In other words, I want to be nuanced and consider the economic trade-offs that appear to be at play. Some upshots that are expounded upon in this post are:

  • There are economic costs to climate change that increase with the magnitude of climate change.
  • There are economic costs to addressing climate change (reducing greenhouse gas emissions).
  • Both of the above costs tend to be measured in monetary values but they fundamentally represent forgone production of real goods and services.
  • The forgone production comes about because goods and services are somehow being produced less efficiently (more input per unit output).
  • The costs of climate change can be obvious (like the cost to rebuild infrastructure after a storm made worse by climate change) but even adaptation represents a cost of climate change because it implies the diversion of resources away from uses that would have produced alternative goods and services.
  • Some pathways of addressing climate change, like improving energy efficiency, represent negative costs or direct economic benefits.
  • Most pathways of addressing climate change do represent true costs, mostly because they require the diversion of resources away from the production of alternative goods and services.
  • Under idealized circumstances, without externalities, free markets tend to settle on a price and quantity that maximizes economic wellbeing.
  • Burning fossil fuels is not one of these idealized circumstances – greenhouse gas emissions represent a market failure (negative externality) because the costs of climate change are distributed across society (socialized) while the benefits of obtaining the energy are privatized.
  • Market failures are not addressed by free market forces (by definition) and generally require government solutions.
  • The optimal economic pathway for reducing greenhouse gas emissions can be calculated. Though quite uncertain, these calculations tend to show something broadly similar to the reductions committed to by most of the nations on Earth (according to their Intended Nationally Determined Contributions) under the Paris Agreement.

Introduction

Everything that humans materially value requires energy to produce, and over the past several centuries civilization has found that the burning of fossil fuels (coal, petroleum, and natural gas) has been one of the most effective means of obtaining this energy. Harnessing energy via fossil fuel combustion releases greenhouse gasses as a byproduct which go on to alter global biogeochemistry and climate.

Humanity has not yet come close to exhausting our natural reservoirs of fossil fuels. The combustion of all available fossil fuels would likely be sufficient to raise global temperatures by more than 18°F above pre-industrial levels (Winkelmann et al., 2015), which is a magnitude and rate of change only matched by catastrophic events like the End-Cretaceous impact that caused the extinction of the dinosaurs as well as ~75% of all species on the planet. Among a myriad of other consequences (e.g., Field et al. 2014), this level of sustained global warming would probably entail an eventual 200 foot rise in sea level (about the height of a 20-story building) (Winkelmann et al., 2015) which would reshape much of the world’s coastlines and require a relocation of a large fraction of world population and infrastructure.

These negative consequences indicate that it would be optimal to reduce the emissions of greenhouse gasses over time. However, reducing greenhouse gas emissions comes at its own costs, especially if it is attempted too quickly. In the most extreme case of halting all greenhouse gas emissions in a matter of days or weeks, global economic manufacturing and trade would need to be virtually shut down. The required restriction on the production and transportation of food alone would likely cause a global famine. Given the presumed undesirability of these two extreme cases (eventually combusting all available fossil fuels vs. halting all fossil fuel combustion instantaneously), humanity may be inclined to follow an intermediate path (e.g., Nordhaus, 1977).

Digression on fundamental market economics

We are all familiar with the idea that prices quantify how much something costs. So if we see that an electric car costs X thousands of dollars more than a gasoline-burning car, we have a sense that reducing greenhouse gas emissions by switching to an electric car has a net cost associated with it. Also, if we see that a weather disaster, made worse by climate change, costs X billions of dollars to recover from (e.g., Holmes, 2017), we have a sense that climate change can cause economic losses. Using dollar amounts to quantify costs is convenient but these price tags themselves can’t really explain the fundamentals behind the costs. In order to understand that, it is useful to think about the economy from first principles.

To start, let’s imagine a civilization with only four people who are isolated from each other. Due to their isolation, there is no trade between them and thus each of the four people is responsible for producing all of the things that they need to survive. So each person makes their own clothes, finds and decontaminates their own water, prepares their own food, and creates their own shelter. You can imagine that these tasks would take up almost all of the time and energy of each individual, leaving little time for leisure or for producing anything that could be considered a luxury. Thus, by modern standards, each of these four people would be living in extreme poverty.

 

[Figure 1]

Now let’s imagine that the isolating barriers between these four people are lifted and they are allowed to trade with each other. This allows individuals to focus on producing the things that they are best at producing (or, more precisely, the things they can produce at the lowest opportunity cost) and trading for their other needs and desires. In this example, one person can focus on making clothes, one person can focus on producing drinking water, one person can focus on producing food and one person can focus on building shelter. Since each person does what they do best, and since they can do these things better when they devote all of their time and effort to a given task, more of each product is produced.

[Figure 2]

In global macroeconomics, the total value of goods and services is measured in Gross World Product. In the case above, specialization has allowed the gross product of this hypothetical world to increase by 50%.

Specialization and trade thus make everybody better off than they would be if they were each responsible for only their own needs and desires. In practice, trade itself is made much more efficient by the use of money (rather than using a barter system) because this avoids the necessity of a double coincidence of wants.

[Figure 3]

It is important to note that money simply represents the medium of exchange and the store of value in an economy. The prices of each good are relative to other goods and relative to the total amount of money in the economy. The wealth of the world is not measured by how much money there is. If we triple the amount of money in this world, nobody becomes better off (i.e., there are no new goods produced and real Gross World Product does not increase). All that happens is that the prices of everything go up by a factor of 3 (you get monetary inflation).

[Figure 4]

When economists cite numbers for Gross World Product (or Gross Domestic Product) they usually correct for any monetary inflation. Thus, Gross World Product is expressed in some standardized monetary value but it refers to actual goods and services. You can increase real Gross World Product by increasing the real amount of goods and services in the economy (e.g., through increased efficiency of production through increased specialization and trade) but you can’t increase real Gross World Product by simply increasing the amount of money in the economy.
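As a trivially small illustration of that correction (with made-up numbers), real Gross World Product is just the nominal figure deflated by a price index:

```python
# Made-up numbers: nominal gross product and a price index for two years
nominal_gwp = {"year 1": 100.0, "year 2": 330.0}  # money supply tripled, plus some real growth
price_index = {"year 1": 1.0, "year 2": 3.0}      # prices tripled (monetary inflation)

real_gwp = {yr: nominal_gwp[yr] / price_index[yr] for yr in nominal_gwp}
print(real_gwp)  # {'year 1': 100.0, 'year 2': 110.0}: real output grew 10%, not 230%
```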

It is also interesting to note that in our hypothetical world with specialization and trade, everyone still has basic subsistence needs and these needs are still being met by people working most of the day. Each person, however, is insulated from the production of some of the items that satisfy their own basic needs because they are trading for them rather than directly producing them. When someone uses the expression that they need to go work in an office all day in order to “put food on the table” they are not simply making an observation about their proximate motivations to go to work. Instead, they are making a fundamental observation about how specialization and trade literally underpins one’s ability to survive. Unfortunately, we do not live in a paradise Garden of Eden where all our needs and desires are magically satisfied without the exertion of human effort. In the idealized situation described here, individuals can either produce for their own needs and desires or they can produce something else that they can trade in exchange for products that satisfy their needs and desires1. This is essentially what all working people are doing in modern market economies.

Now let’s imagine that one of the people in our hypothetical world, the clothes maker, invents a clothes-making robot. This has two main effects: It allows for more clothes to be produced per unit time and it liberates some of the time of the clothes maker. The clothes maker can then use their extra time for more leisure or they can choose to use some of their extra time to produce something other than clothes. Let’s say they use their extra time to invent a mode of transportation which they themselves use and can also trade to others.

[Figure 5]

Now everyone in the society has more clothes and everyone also has a mode of transportation. The Gross World Product has grown. Specifically, the Gross World Product has grown because automation (the robot) took a job (making clothes) which liberated the time and energy of the former clothes maker and allowed them to put effort into another project2. This is one of the primary processes responsible for the explosion in global economic production of the past few centuries:

[Figure 6]

This digression, using an ultra-simplified hypothetical world, serves to make the point that when we are talking about economic well-being, we are not simply talking about price tags and wages. Instead, economic well-being has more to do with the efficiency by which societies are able to convert various raw inputs into goods and services (Gross World Product). These inputs are often called the factors of production and are sometimes divided into the categories of land (natural resources), labor (hours worked by people) and physical capital (technology used to make products). These inputs come together to produce output like, e.g., shelter.

[Figure 7]

Now let’s imagine that the labor necessary for producing shelter is reduced because of some improvement in technology (e.g., an improvement in technology allows some of the tasks to be automated). This means that it now takes less input (labor has been reduced) to produce the same output. Since it takes less input, the cost of the output has decreased. In this case, it makes the buyers of shelter (everyone) better off since they can now get shelter at a lower price (i.e., for trading less of the goods and services that they produce). This also has the effect of liberating labor to do other things like produce, say, new communication technology3.

[Figure 8]

 

The costs of climate change and the costs of addressing climate change 

The upshot of the above digression is that from a zoomed-out, macroeconomic perspective, a change that allows for production to become more efficient (in terms of inputs per unit output) causes more Gross World Product to be produced and thus is considered to be a net economic benefit. On the other hand, if some change makes production less efficient (in terms of input per unit output) then there is a net cost to Gross World Product and the economy. There are costs associated with climate change (sometimes called damages) and there are costs associated with addressing climate change. Let’s look at how these come about.

The costs of climate change

It is perhaps not surprising that the direct destruction of infrastructure from natural disasters constitutes economic losses. For example, climate change is expected to increase the intensity of the most intense hurricanes. Obviously, the destruction of lives and livelihoods that result from more intense hurricanes is a bad thing but some people might be tempted to imagine that there is a silver lining to this destruction. Won’t this destruction have some stimulating effect on the economy? Won’t this destruction create jobs in construction companies and in manufacturing? Won’t this all stimulate economic growth? It may be the case that a particular underemployed individual or community might benefit from these new construction projects but from a zoomed-out macroeconomic perspective, these projects represent a cost on the economy. This is because, when these destroyed homes are rebuilt, they are rebuilt at an opportunity cost: inputs that would have been used for producing something else are used instead to rebuild houses that already previously existed. Thus, rebuilding these homes does not stimulate the overall economy by, e.g., producing jobs. If this were the case, we could stimulate the economy by simply bulldozing homes every day and having people rebuild them.

 

[Figure 9]

Climate change can also impose economic costs through pathways that are disguised as “adaptation”. Fundamentally, these costs come about because human society, as well as our natural resources, is adapted to the current climate. So even when changes are not obviously negative in direction, adapting to them still carries a cost.

To take a seemingly benign example, imagine that as the world warms, ski resorts must be moved from lower elevations to higher elevations and from lower latitudes to higher latitudes in order to follow the snow. Again, this change may seem as though it is economically stimulating. After all, construction of these new ski resorts will appear to create jobs. But again, from a zoomed-out macroeconomic perspective, this adaptation represents a net cost to society. The reason is that construction of the new ski resorts requires inputs and the net result is not new output but rather a replacement ski resort that only gets society back to where it started (in this case, one ski resort). Furthermore, the inputs required to create the new ski resort are used at an opportunity cost: something else that would have been created with them (like, say, modes of transportation) is forgone. The overall result is that there is a net loss of Gross World Product due to the adaptation.

[Figure 10]

Some of the real-world pathways by which climate change is expected to harm the economy include changes in agricultural yields, sea level rise, damage from extreme events, energy demand, labor efficiency, human health and even human conflict. These phenomena are diverse but in all these cases, the true economic cost comes about because less output (goods and services) ends up being produced per unit input.

[Figure 11]

From Hsiang et al. (2017)

The costs of addressing climate change

In order to address climate change, we need to find ways of harnessing energy without emitting greenhouse gasses. In some circumstances, this can be done at negative costs or with economic benefits. For example, increases in energy efficiency of some services, such as lighting, constitutes less input per unit output. Thus increasing energy efficiency (and holding everything else constant) would provide an economic benefit and reduce climate change at the same time.

[Figure 12]

[Figure 13]

Unfortunately, most of the actions associated with addressing climate change do come at an economic cost. These are typically called abatement costs or mitigation costs. Fossil fuels represent the accumulation of solar energy over millions of years that we can release simply by digging them up and burning them. Non-fossil fuel (i.e., “alternative” or “renewable”) sources of energy tend to be more expensive partly because the energy source is more diffuse or more variable and thus produces less output (Joules of usable energy) per unit input.

[Figure 14]

Renewable energy is coming down in cost steadily as technology advances and it is already cheaper than traditional sources of energy in many specific circumstances. However, there is still a long way to go before renewable energy is at cost-parity with fossil fuel combustion in terms of being able to provide all the energy necessary to power society. This is reflected in the current price ratios of renewable energy relative to fossil fuel energy shown in red:

[Figure 15]

From Shayegh et al. (2017)

We can readily imagine a technically-feasible future energy infrastructure that eliminates greenhouse gas emissions:

[Figure 16]

From Davis et al. (2018)

However, achieving the above transformation comes at a cost. This is because, over the past two centuries, the global energy-producing infrastructure, along with the associated human institutions and behaviors, has matured in coordination with technologies that burn fossil fuels and emit greenhouse gasses as a byproduct. As of this writing, ~80% of the 18 terawatts necessary to power global civilization still originates from these fossil-fuel-burning systems. Thus, reorganizing this infrastructure will require a great deal of resources to be reallocated from alternative uses in order to get roughly the same amount of energy that could have been produced from burning fossil fuels. This comes at an opportunity cost of forgone production in other areas and thus constitutes a cost on global economic production. The net costs on the global economy have been estimated to be between ~2% and ~10% of consumption (which is Gross World Product minus investment) by 2100 for the proposals that would limit warming to the greatest degree:

[Figure 17]

From Edenhofer et al, (2014)

Addressing climate change through government regulation

The aforementioned costs associated with climate change itself (damages) are what justify governmental policies designed to limit climate change. This is probably best understood in the framework of total surplus from welfare economics. In this framework, the total economic well-being of society can be measured in terms of consumer surplus plus producer surplus. Consumer surplus is the difference between what a person is willing to pay for a good or service and what they actually pay. So if I am willing to pay $700 for a refrigerator and I buy one for $500, then I have a consumer surplus of $200 and feel as though I have gotten a good deal. Producer surplus is the difference between what it costs to produce a good or service and what that good or service is sold for. So if it costs $300 to produce the aforementioned refrigerator, then the producer surplus in our transaction was also $200 and the total surplus was $200+$200=$400. This is represented graphically with marginal supply and demand curves like those below.
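Using the refrigerator numbers above as a sketch:

```python
willingness_to_pay = 700  # the most the buyer would pay
price = 500               # what is actually paid
production_cost = 300     # what it cost to make

consumer_surplus = willingness_to_pay - price         # 200
producer_surplus = price - production_cost            # 200
total_surplus = consumer_surplus + producer_surplus   # 400
print(consumer_surplus, producer_surplus, total_surplus)
```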

[Figure 18]

The demand curve illustrates that as the price goes up, there will be less demand and the supply curve illustrates that as the price goes up there will be more supply. Trade will occur as long as the producer receives producer surplus and the consumer receives consumer surplus. In a competitive free market with perfect information, the price will equilibrate at the point that maximizes total surplus (consumer plus producer surplus) and thus maximizes the total economic well-being of society. This was proven under certain assumptions in the fundamental theorem of welfare economics.

A problem occurs, however, when the costs of the transaction are not fully borne by the producer. This is called a negative externality. Pollution (like human-caused CO2 emissions) is the quintessential negative externality. In a situation with a negative externality, a free market will naturally produce a price for a product that is too low and thus too much of the product will be produced. The external cost to society from CO2 emissions is quantified by the economic damages associated with climate change (and is formally quantified in policy circles with the Social Cost of Carbon). Under this situation, the free market has failed to produce the economically efficient outcome and thus it is justifiable (if the goal is to maximize total surplus) for the government to step in and raise the price of production (through e.g., a tax) to a price that would maximize total surplus4. This is called a Pigovian tax and is one of the main policy pathways by which climate change can be addressed5.
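Here is a minimal numerical sketch of this logic, with made-up linear demand and supply curves and a made-up external (climate damage) cost per unit:

```python
import numpy as np

quantity = np.arange(1, 101)              # units of the polluting good
demand = 100.0 - 0.8 * quantity           # consumers' marginal willingness to pay
private_supply = 10.0 + 0.4 * quantity    # producers' private marginal cost
external_cost = 15.0                      # climate damage per unit (the externality)
social_supply = private_supply + external_cost

# Free-market equilibrium: trade occurs while marginal benefit covers private cost
q_market = quantity[demand >= private_supply].max()

# Socially optimal quantity: the external cost is included, e.g., via a Pigovian
# tax of `external_cost` per unit that shifts the effective supply curve upward
q_optimal = quantity[demand >= social_supply].max()

print(f"market quantity: {q_market}, socially optimal quantity: {q_optimal}")
# With the externality unpriced, the market overproduces relative to the optimum.
```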

[Figure 19]

So there is an economic cost to climate change, there is a cost to addressing climate change and there is a mechanism (a government tax) that can correct the market failure and divert resources away from activities that emit greenhouse gasses. Can these three ingredients be taken into consideration simultaneously in order to evaluate the best pathway for society going forward? Yes, they can. This task has typically been undertaken with Integrated Assessment Models (IAMs). The three highest-profile of these models (and the three models used by the U.S. government to estimate the global Social Cost of Carbon) are FUND, PAGE and DICE. These models weigh the benefits of avoided economic damages from climate change against the costs of mitigating greenhouse gas emissions and calculate the optimal carbon tax and global greenhouse gas emissions reduction pathway such that the net present value of global social welfare is maximized (see “Opt” pathway below).
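To give a flavor of what these models do, here is a drastically simplified, single-period toy (not an IAM, and none of the functional forms or numbers come from FUND, PAGE or DICE): choose the level of emissions abatement that minimizes the sum of abatement costs and climate damages.

```python
import numpy as np

mu = np.linspace(0.0, 1.0, 1001)   # fraction of baseline emissions abated

# Made-up cost curves, expressed as a percent of gross world product
abatement_cost = 6.0 * mu ** 2.8   # cheap at first, increasingly expensive near 100%
warming = 4.0 * (1.0 - mu)         # illustrative mapping from abatement to eventual warming (deg C)
climate_damage = 0.6 * warming ** 2

total_cost = abatement_cost + climate_damage
best = np.argmin(total_cost)
print(f"welfare-maximizing abatement fraction: {mu[best]:.2f} "
      f"(total cost {total_cost[best]:.1f}% of gross world product)")
```

Real IAMs do this dynamically over centuries, with discounting, capital accumulation and explicit carbon-cycle and climate components, but the underlying trade-off is the one sketched here.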

[Figure 20]

From Nordhaus (2013)

The “Opt” pathway above results in global temperatures stabilizing at roughly 3 degrees Celsius of warming, which is comparable to what would probably result from countries’ currently agreed-upon Intended Nationally Determined Contributions under the Paris Agreement6.

Conclusion

Fundamentally, economics is all about real goods and services and the efficiency by which they are produced. Economic costs are incurred when something changes such that it becomes less efficient to produce goods and services (less output per unit input). Both climate change itself (the problem) and addressing climate change (the solution) present economic costs because they require resources to be reallocated away from alternative uses in such a way that total production is less efficient. One strategy of dealing with climate change is to correct for market failures by increasing the cost of greenhouse gas emissions to their true total cost. The time dimension (and cost of alternative energy) can be taken into consideration by using Integrated Assessment Models. These models tend to calculate that the optimal economic pathway entails a reduction in greenhouse gas emissions through the remainder of the 21st century in a pathway somewhat similar to what countries have agreed upon under the Paris Agreement.

Footnotes

  1. Of course, most societies have decided that it is undesirable to allow people to completely fail in a competitive market and thus starve to death. This is particularly the case when some misfortune has made it so that a given person is not able to trade a valuable good or service. Thus, social safety nets have been implemented which essentially mandate that some portion of society subsidize another portion of society without compensation.
  2. Notice that the person who invented the clothes-making-robot is now producing more than some of the other people in the society. The other people are indicating that they value the clothes and transportation more than some of the money that they have and thus they are sending money to the clothes maker in voluntary win-win transactions. Thus, the clothes-maker will accumulate money in proportion to the real wealth that they have created for society. In other words, they are not becoming rich at the expense of other members of the society but instead, their monetary wealth is an indication that they have created real wealth that is valued by the rest of society. Thus, in an idealized market-based economy like the one being described here, monetary wealth will end up being distributed to individuals in proportion to the real physical wealth that they produce. Monetary wealth will not necessarily be distributed in proportion to how much/hard people work or how “moral”/“good” their work might be, as judged by a 3rd party.
  3. Although simplified, this is analogous to what has been going on in the real world. For example, in 1820, approximately 72% of the American workforce were farmers. Advances in technology have allowed much more food to be produced with less labor and thus US GDP has exploded over the same time period that those farm jobs were eliminated.
  4. Incidentally, the justification for government funding (subsidizing) science is the reverse of this situation. Specifically, it is the recognition that scientific discoveries represent a positive externality. Under a positive (consumer) externality, the total demand (private+external) is larger than the private demand alone and thus the free market produces too few scientific discoveries at too low a price.
  5. To the extent that fossil fuels are subsidized rather than taxed, this further distorts the market away from its optimal price and quantity. Removing these subsidies would also constitute negative mitigation costs.
  6. Of course, the Paris Agreement’s goal was not to optimize for Gross World Product. The most stringent targets can be thought of as being more optimal in terms of Natural Capital or “non-market” goods that have less tangible value than commodities that are regularly bought and sold in markets.
Posted in Climate Change | 1 Comment

Combining Physical and Statistical Models in Order to Narrow Uncertainty in Projected Global Warming

Below is a presentation I gave on our recent research published in Nature titled “Greater future global warming inferred from Earth’s recent energy budget”. This was for the Stanford University Department of Electrical Engineering and Computer Systems Colloquium (EE380). Thus, it is intended for a very technically-savvy but non-climate scientist audience.

I also gave the same talk for a San Francisco Bay Association for Computing Machinery (ACM) meetup which was held at PayPal headquarters:

Posted in Climate Change | 1 Comment

AGU Talk on potential changes in temperature variability with warming

Below is my talk from the 2017 AGU fall meeting. This talk is on a paper we published in Nature Climate Change about potential changes in natural unforced variability of global mean surface air temperature (GMST) under global warming.

Background

Unforced GMST variability is of the same order of magnitude as current externally forced changes in GMST on decadal timescales. Thus, understanding the precise magnitude and physical mechanisms responsible for unforced GMST variability is relevant both to the attribution of past climate changes to human causes and to the prediction of climate change on policy-relevant timescales.

Much research on unforced GMST variability has used modeling experiments run under “preindustrial control” conditions or has used observed/reconstructed GMST variability associated with cooler past climates to draw conclusions for contemporary or future GMST variability. These studies can implicitly assume that the characteristics of GMST variability will remain the same as the climate warms. In our research, we demonstrate in a climate model that this assumption is likely to be flawed. Not only do we show that the magnitude of GMST variability dramatically declines with warming in our experiment, we also show that the physical mechanisms responsible for such variability become fundamentally altered. These results indicate that the ubiquitous “preindustrial control” climate modeling studies may be limited in their relevance for the study of current or future climate variability.

Talk

Posted in Climate Change | Leave a comment

Greater future global warming (still) inferred from Earth’s recent energy budget

We recently published a paper in Nature in which we leveraged observations of the Earth’s radiative energy budget to statistically constrain 21st-century climate model projections of global warming. We found that observations of the Earth’s energy budget allow us to infer generally greater central estimates of future global warming and smaller spreads about those central estimates than the raw model simulations indicate. More background on the paper can be obtained from our blog post on the research.

Last week, Nic Lewis published a critique of our work on several blogs titled A closer look shows global warming will not be greater than we thought. We welcome scientifically-grounded critiques of our work since this is the fundamental way in which science advances. In this spirit, we would like to thank Nic Lewis for his appraisal. However, we find Lewis’ central criticisms to lack merit. As we elaborate below, his arguments do not undermine the findings of the study.

Brief background

Under the ‘emergent constraint’ paradigm, statistical relationships between model-simulated features of the current climate system (predictor variables), along with observations of those features, are used to constrain a predictand. In our work, the predictand is the magnitude of future global warming simulated by climate models.

We chose predictor variables that were as fundamental and comprehensive as possible while still offering the potential for a straightforward physical connection to the magnitude of future warming. In particular, we chose the full global spatial distribution of fundamental components of Earth’s top-of-atmosphere energy budget—its outgoing (that is, reflected) shortwave radiation (OSR), outgoing longwave radiation (OLR) and net downward energy imbalance (N). We investigated three currently observable attributes of these variables—mean climatology, the magnitude of the seasonal cycle, and the magnitude of monthly variability. We chose these attributes because previous studies have indicated that behavior of the Earth’s radiative energy budget on each of these timescales can be used to infer information on fast feedbacks in the climate system. The combination of these three attributes and the three variables (OSR, OLR and N) results in a total of nine global “predictor fields”. See FAQ #3 of our previous blog post for more information on our choice of predictor variables.

We used Partial Least Squares Regression (PLSR) to relate our predictor fields to predictands of future global warming. In PLSR we can use each of the nine predictor fields individually, or we can use all nine predictor fields simultaneously (collectively). We quantified our main results with “Prediction Ratio” and “Spread Ratio” metrics. The Prediction Ratio is the ratio of our observationally-informed central estimate of warming to the previous raw model average and the Spread Ratio is the ratio of the magnitude of our constrained spread to the magnitude of the raw model spread. Prediction Ratios greater than 1 suggest greater future warming and Spread Ratios below 1 suggest a reduction in spread about the central estimate.
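For readers who want to see how the two metrics are computed, here is a schematic Python sketch. The predictor fields, simulated warming values and “observations” are synthetic stand-ins (the real analysis uses the spatial energy-budget fields described above), and scikit-learn’s PLSRegression is used here simply as a convenient PLSR implementation rather than as our actual analysis code:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n_models, n_features = 36, 500   # e.g., 36 climate models, flattened predictor maps

# Synthetic stand-ins: predictor fields X, simulated end-of-century warming y,
# and "observations" of the predictor fields
signal = rng.normal(size=n_features)
y = rng.normal(4.0, 0.6, size=n_models)                        # raw model warming (deg C)
X = np.outer(y - y.mean(), signal) + rng.normal(size=(n_models, n_features))
x_obs = X.mean(axis=0) + 0.5 * signal                          # hypothetical observations

# Observationally-informed central estimate from PLSR (7 components, as in the paper)
pls = PLSRegression(n_components=7).fit(X, y)
constrained_central = np.ravel(pls.predict(x_obs.reshape(1, -1)))[0]

# Leave-one-model-out cross-validation to estimate the constrained spread
errors = []
for i in range(n_models):
    keep = np.arange(n_models) != i
    loo = PLSRegression(n_components=7).fit(X[keep], y[keep])
    errors.append(np.ravel(loo.predict(X[i].reshape(1, -1)))[0] - y[i])

prediction_ratio = constrained_central / y.mean()   # >1 implies greater central estimate
spread_ratio = np.std(errors) / y.std()             # <1 implies a narrower constrained spread
print(f"Prediction Ratio: {prediction_ratio:.2f}, Spread Ratio: {spread_ratio:.2f}")
```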

Lewis’ criticism

Lewis’ post expresses general skepticism of climate models and the ‘emergent constraint’ paradigm. There is much to say about both of these topics but we won’t go into them here. Instead, we will focus on Lewis’ criticism that applies specifically to our study.

We showed results associated with each of our nine predictor fields individually but we chose to emphasize the results associated with the influence of all of the predictor fields simultaneously. Lewis suggests that rather than focusing on the simultaneous predictor field, we should have focused on the results associated with the single predictor field that showed the most skill: The magnitude of the seasonal cycle in OLR. Lewis goes further to suggest that it would be useful to adjust our spatial domain in an attempt to search for an even stronger statistical relationship. Thus, Lewis is arguing that we actually undersold the strength of the constraints that we reported, not that we oversold their strength.

This is an unusual criticism for this type of analysis. Typically, criticisms in this vein would run in the opposite direction. Specifically, studies are often criticized for highlighting the single statistical relationship that appears to be the strongest while ignoring or downplaying weaker relationships that could have been discussed. Studies are correctly criticized for this tactic because the more relationships that are screened, the more likely it is that a researcher will be able to find a strong statistical association by chance, even if there is no true underlying relationship. Thus, we do not agree that it would have been more appropriate for us to highlight the results associated with the predictor field with the strongest statistical relationship (smallest Spread Ratio), rather than the results associated with the simultaneous predictor field. However, even if we were to follow this suggestion, it would not change our general conclusions regarding the magnitude of future warming.
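The screening concern is easy to demonstrate with entirely synthetic numbers. In the toy example below (not data from our study), 200 predictors that have no true relationship to the predictand are screened, and the “best” of them nonetheless shows a seemingly impressive correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_candidate_predictors = 36, 200

predictand = rng.normal(size=n_models)                              # toy "future warming"
predictors = rng.normal(size=(n_candidate_predictors, n_models))    # predictors unrelated to it

corrs = np.array([np.corrcoef(p, predictand)[0, 1] for p in predictors])
print(np.abs(corrs).max())   # the best of 200 null predictors typically has |r| of roughly 0.4-0.5
```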

We can use our full results, summarized in the table below (all utilizing 7 PLSR components), to look at how different choices regarding the selection of predictor fields would affect our conclusions.

[Table: Prediction Ratios and Spread Ratios for each individual predictor field and for the simultaneous predictor field, for each RCP predictand (7 PLSR components); boxed values mark the lowest Spread Ratio for each RCP.]

Lewis’ post makes much of the fact that highlighting the results associated with the ‘magnitude of the seasonal cycle in OLR’ predictor field, rather than the simultaneous predictor field, would reduce our central estimate of future warming in RCP8.5 from +14% to +6%. This is true, but it is only one very specific example. Asking more general questions gives a better sense of the big picture:

1) What is the mean Prediction Ratio across the end-of-century RCP predictands, if we use the OLR seasonal cycle predictor field exclusively? It is 1.15, implying a 15% increase in the central estimate of warming.

2) What is the mean Prediction Ratio across the end-of-century RCP predictands, if we always use the individual predictor field that had the lowest Spread Ratio for that particular RCP (boxed values)? It is 1.13, implying a 13% increase in the central estimate of warming.

3) What is the mean Prediction Ratio across the end-of-century RCP predictands, if we just average together the results from all the individual predictor fields? It is 1.16, implying a 16% increase in the central estimate of warming.

4) What is the mean Prediction Ratio across the end-of-century RCP predictands, if we always use the simultaneous predictor field? It is 1.15, implying a 15% increase in the central estimate of warming.

One point that is worth making here is that we do not use cross-validation in the multi-model average case (the denominator of the Spread Ratio). Each model’s own value is included in the multi-model average, which gives the multi-model average an inherent advantage over the cross-validated PLSR estimate (a small numerical illustration of this point appears below). We made this choice to be extra conservative but it means that PLSR is able to provide meaningful Prediction Ratios even when the Spread Ratio is near or slightly above 1. We have shown that when we supply the PLSR procedure with random data, Spread Ratios tend to be in the range of 1.1 to 1.3 (see FAQ #7 of our previous blog post, and Extended Data Fig. 4c of the paper). Nevertheless, it may be useful to ask the following question:

5) What is the mean Prediction Ratio across the end-of-century RCP predictands, if we average together the results from only those individual predictor fields with spread ratios below 1? It is 1.15, implying a 15% increase in the central estimate of warming.

So, all five of these general methods produce about a 15% increase in the central estimate of future warming.
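Returning to the point above about the Spread Ratio denominator, here is a small numerical illustration of why including each model’s own value in the multi-model average (rather than cross-validating the average as well) is the conservative choice. The numbers are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
warming = rng.normal(loc=4.0, scale=0.5, size=36)    # hypothetical raw model warming values (arbitrary units)

# Spread about the plain multi-model mean (each model's own value included)
plain_rmse = np.sqrt(np.mean((warming - warming.mean()) ** 2))

# Alternative: cross-validate the multi-model mean as well (leave-one-out means)
loo_means = np.array([np.delete(warming, i).mean() for i in range(warming.size)])
loo_rmse = np.sqrt(np.mean((warming - loo_means) ** 2))

print(plain_rmse, loo_rmse)   # loo_rmse exceeds plain_rmse by a factor of n/(n-1)
```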

Lewis also suggests that our results may be sensitive to the choice of standardization technique. We standardized the predictors at the level of the predictor field because we wanted to retain information on across-model differences in the spatial structure of the magnitude of the predictor variables. However, we can rerun the results with everything standardized at the grid level and ask the same questions as above.
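The distinction between the two standardization choices can be sketched as follows (the centering details here are illustrative guesses; the essential difference is a single scale factor per predictor field versus a separate scale factor per grid cell):

```python
import numpy as np

rng = np.random.default_rng(2)
field = rng.normal(size=(36, 37, 72))   # one hypothetical predictor field: (models, lat, lon)

# Field-level standardization: one scale factor for the whole field, so across-model
# differences in the spatial structure of the magnitudes are retained.
field_level = (field - field.mean(axis=0)) / field.std()

# Grid-level standardization: each grid cell scaled separately, so every location
# contributes with equal variance regardless of its typical magnitude.
grid_level = (field - field.mean(axis=0)) / field.std(axis=0)
```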

[Table: as above, but with predictors standardized at the grid level rather than at the level of the predictor field.]

1b) What is the mean Prediction Ratio across the end-of-century RCPs if we use the OLR seasonal cycle predictor field exclusively? It is 1.15, implying a 15% increase in the central estimate of warming.

2b) What is the mean Prediction Ratio across the end-of-century RCPs if we always use the single predictor field that had the lowest Spread Ratio (boxed values)? It is 1.12, implying a 12% increase in the central estimate of warming.

3b) What is the mean Prediction Ratio across the end-of-century RCPs if we just average together the results from all the predictor fields? It is 1.14, implying a 14% increase in the central estimate of warming.

4b) What is the mean Prediction Ratio across the end-of-century RCPs if we always use the simultaneous predictor field? It is 1.14, implying a 14% increase in the central estimate of warming.

5b) What is the mean Prediction Ratio across the end-of-century RCP predictands if we average together the results from only those individual predictor fields with Spread Ratios below 1? It is 1.14, implying a 14% increase in the central estimate of warming.

Conclusion

There are several reasonable ways to summarize our results and they all imply greater future global warming in line with the values we highlighted in the paper. The only way to argue otherwise is to search out specific examples that run counter to the general results.

Appendix: Example using synthetic data

Despite the fact that our results are robust to various methodological choices, it is useful to expand upon why we used the simultaneous predictor instead of the particular predictor that happened to produce the lowest Spread Ratio on any given predictand. The general idea can be illustrated with an example using synthetic data in which the precise nature of the predictor-predictand relationships is defined ahead of time. For this purpose, I have created synthetic data with the same dimensions as the data discussed in our study and in Lewis’ blog post:

1) A synthetic predictand vector of 36 “future warming” values corresponding to imaginary output from 36 climate models. In this case, the “future warming” values are just 36 random numbers pulled from a Gaussian distribution.

2) A synthetic set of nine predictor fields (37 latitudes by 72 longitudes) associated with each of the 36 models. Each model’s nine synthetic predictor fields start with that model’s predictand value entered at every grid location. Thus, at this preliminary stage, every location in every predictor field is a perfect predictor of future warming. That is, the across-model correlation between the predictor and the “future warming” predictand is 1 and the regression slope is also 1.

The next step in creating the synthetic predictor fields is to add noise in order to obscure the predictor-predictand relationship somewhat. The first level of noise is a spatially correlated field of weighting factors for each of the nine predictor maps. These weighting-factor maps randomly enhance or damp the local magnitude of the map’s values (weighting factors can be positive or negative). After these weighting factors have been applied, every location in every predictor field still has a perfect across-model correlation (or perfect negative correlation) between the predictor and the predictand, but the regression slopes vary across space according to the magnitude of the weighting factors. The second level of noise consists of spatially correlated fields of random numbers that are specific to each of the 9 × 36 = 324 predictor maps. At this point, everything is standardized to unit variance.
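For readers who want to experiment with something similar, below is a minimal re-creation of this synthetic-data setup. The smoothing scale and noise amplitude are arbitrary choices, not the exact values used to produce the figures here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
n_models, n_lat, n_lon, n_fields = 36, 37, 72, 9

def correlated_field(sigma=6.0):
    """A spatially correlated random field, made by smoothing white noise."""
    return gaussian_filter(rng.normal(size=(n_lat, n_lon)), sigma=sigma)

predictand = rng.normal(size=n_models)            # synthetic "future warming" values

predictors = np.empty((n_fields, n_models, n_lat, n_lon))
for f in range(n_fields):
    weights = correlated_field()                  # first noise level: one weighting map per predictor field
    for m in range(n_models):
        noise = correlated_field()                # second noise level: one map per predictor map
        predictors[f, m] = predictand[m] * weights + noise

# Standardize each predictor field to unit variance
predictors /= predictors.std(axis=(1, 2, 3), keepdims=True)
```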

The synthetic data’s predictor-predictand relationship can be summarized in the plot below, which shows the local across-model correlation coefficient (between predictor and predictand) for each of the nine predictor fields. These plots are similar to what you would see with the real model data used in our study. Specifically, in both cases, there are swaths of relatively high correlations and anti-correlations with plenty of low-correlation area in between. All nine predictor fields were produced the same way and the only differences arise from the two layers of random noise that were added. Thus, we know that any apparent differences between the predictor fields arose by random chance.

[Figure: maps of the local across-model correlation between predictor and predictand for each of the nine synthetic predictor fields.]

Next, we can feed this synthetic data into the same PLSR procedure that we used in our study to see what it produces. The Spread Ratios are shown in the bar graphs below. Spread Ratios are shown for each of the nine predictor fields individually as well as for the case where all nine predictor fields are used simultaneously. The top plot shows results without the use of cross-validation while the bottom plot shows results with the use of cross-validation.

[Figure: bar graphs of Spread Ratios for each of the nine synthetic predictor fields and for all nine used simultaneously; top panel without cross-validation, bottom panel with cross-validation.]

In the case without cross-validation, there is no guard against over-fitting. Thus, PLSR is able to utilize the many degrees of freedom in the predictor fields to create coefficients that fit predictors to the predictand exceptionally well. This is why the Spread Ratios are so small in the top bar plot. The mean Spread Ratio for the nine predictor fields in the top bar plot is 0.042, implying that the PLSR procedure was able to reduce the spread of the predictand by about 96%. Notably, using all the predictor fields simultaneously results in a three-orders-of-magnitude smaller Spread Ratio than using any of the predictor fields individually. This indicates that when there is no guard against over-fitting, much stronger relationships can be achieved by providing the PLSR procedure with more information.

However, PLSR is more than capable of over-fitting predictors to predictands and thus these small Spread Ratios are not to be taken seriously. In our work, we guard against over-fitting by using cross-validation (see FAQ #1 of our blog post). The Spread Ratios for the synthetic data using cross-validation are shown in the lower bar graph in the figure above. It is apparent that cross-validation makes a big difference. With cross-validation, the mean Spread Ratio across the nine individual predictor fields is 0.8, meaning that the average predictor field could help reduce the spread in the predictand by about 20%. Notably, a lower Spread Ratio of 0.54 is achieved when all nine predictor maps are used collectively (a 46% reduction in spread). Since there is much redundancy across the nine predictor fields, the simultaneous predictor field doesn’t increase skill very drastically, but it is still better than the average of the individual predictor fields (this is a very consistent result when the entire exercise is re-run many times).
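The cross-validated Spread Ratios in the lower panel can be reproduced in spirit with a leave-one-out loop around an off-the-shelf PLSR implementation. The sketch below uses scikit-learn and the synthetic arrays from the earlier sketch; it is illustrative, not the exact code behind the figure:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def cross_validated_spread_ratio(field, predictand, n_components=7):
    """Leave-one-out PLSR prediction RMSE divided by the raw spread about the multi-model mean."""
    X = field.reshape(field.shape[0], -1)          # flatten (models, lat, lon) to (models, lat*lon)
    y = predictand
    cv_pred = np.empty_like(y)
    for train, test in LeaveOneOut().split(X):
        pls = PLSRegression(n_components=n_components)
        pls.fit(X[train], y[train])
        cv_pred[test] = pls.predict(X[test]).ravel()
    return np.sqrt(np.mean((cv_pred - y) ** 2)) / np.std(y)

# e.g. cross_validated_spread_ratio(predictors[1], predictand) for synthetic predictor field 2;
# concatenating all nine flattened fields along the grid axis gives the simultaneous-predictor case.
```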

Importantly, we can even see that one particular predictor field (predictor field 2) achieved a lower Spread Ratio than the simultaneous predictor field. This brings us to the central question: Is predictor field 2 particularly special or inherently more useful as a predictor than the simultaneous predictor field? We created these nine synthetic predictor fields specifically so that they all contained roughly the same amount of information, and any differences that arose came about simply by random chance. There is an element of luck at play because the number of models (36) is small. Thus, cross-validation can produce appreciable Spread Ratio variability from predictor to predictor simply by chance. Combining the predictors reduces the Spread Ratio, but only marginally due to large redundancies in the predictors.

We apply this same logic to the results from our paper. As shown above, the simultaneous predictor field for the RCP 8.5 scenario has a Spread Ratio of 0.67. Similar to the synthetic data case, eight of the nine individual predictor fields yielded Spread Ratios above this value while a single predictor field (the OLR seasonal cycle) yielded a smaller one. Lewis’ post argues that we should focus entirely on the OLR seasonal cycle because of this. However, just as in the synthetic data case, our interpretation is that the OLR seasonal cycle predictor may simply have gotten lucky, and we should not take its superior skill too seriously.
