Fundamental economics and the costs and benefits of addressing climate change

This post is a high-level summary of the central ideas underlying the evaluation of the economic costs and benefits of addressing climate change. One of the primary goals here is to explore, at a fundamental level, what is actually meant by the terms “costs of climate change” and “costs of addressing climate change”. Another goal is to gently push back against overly simplified narratives that frame climate change policy as either a lose-lose scenario (i.e., it harms the economy with no benefit to society) or an extreme win-win scenario (i.e., it stimulates the economy by creating green jobs while saving civilization from collapse). In other words, I want to be nuanced and consider the economic trade-offs that appear to be at play. Some upshots that are expounded upon in this post are:

  • There are economic costs to climate change that increase with the magnitude of climate change.
  • There are economic costs to addressing climate change (reducing greenhouse gas emissions).
  • Both of the above costs tend to be measured in monetary values but they fundamentally represent forgone production of real goods and services.
  • The forgone production comes about because goods and services are somehow being produced less efficiently (more input per unit output).
  • The costs of climate change can be obvious (like the cost to rebuild infrastructure after a storm made worse by climate change) but even adaptation represents a cost of climate change because it implies the diversion of resources away from uses that would have produced alternative goods and services.
  • Some pathways of addressing climate change, like improving energy efficiency, represent negative costs or direct economic benefits.
  • Most pathways of addressing climate change do represent true costs, mostly because they require the diversion of resources away from the production of alternative goods and services.
  • Under idealized circumstances, without externalities, free markets tend to settle on a price and quantity that maximizes economic wellbeing.
  • Burning fossil fuels is not one of these idealized circumstances – greenhouse gas emissions represent a market failure (negative externality) because the costs of climate change are distributed across society (socialized) while the benefits of obtaining the energy are privatized.
  • Market failures are not addressed by free market forces (by definition) and generally require government solutions.
  • The optimal economic pathway for reducing greenhouse gas emissions can be calculated. Though quite uncertain, these calculations tend to show something broadly similar to the reductions committed to by most of the nations on Earth (according to their Intended Nationally Determined Contributions) under the Paris Agreement.

Introduction

Everything that humans materially value requires energy to produce, and over the past several centuries civilization has found that burning fossil fuels (coal, petroleum, and natural gas) is one of the most effective means of obtaining this energy. Harnessing energy via fossil fuel combustion releases greenhouse gasses as a byproduct, which go on to alter global biogeochemistry and climate.

Humanity has not yet come close to exhausting our natural reservoirs of fossil fuels. The combustion of all available fossil fuels would likely be sufficient to raise global temperatures by more than 18°F above pre-industrial levels (Winkelmann et al., 2015), a magnitude and rate of change only matched by catastrophic events like the End-Cretaceous impact that caused the extinction of the dinosaurs as well as ~75% of all species on the planet. Among a myriad of other consequences (e.g., Field et al., 2014), this level of sustained global warming would probably entail an eventual 200-foot rise in sea level (about the height of a 20-story building) (Winkelmann et al., 2015), which would reshape much of the world’s coastlines and require the relocation of a large fraction of the world’s population and infrastructure.

These negative consequences indicate that it would be optimal to reduce the emissions of greenhouse gasses over time. However, reducing greenhouse gas emissions comes at its own cost, especially if the reduction is attempted too quickly. In the most extreme case of halting all greenhouse gas emissions in a matter of days or weeks, global manufacturing and trade would need to be virtually shut down. The required restriction on the production and transportation of food alone would likely cause a global famine. Given the presumed undesirability of these two extreme cases (eventually combusting all available fossil fuels vs. halting all fossil fuel combustion instantaneously), humanity may be inclined to follow an intermediate path (e.g., Nordhaus, 1977).

Digression on fundamental market economics

We are all familiar with the idea that prices quantify how much something costs. So if we see that an electric car costs X thousands of dollars more than a gasoline-burning car, we have a sense that reducing greenhouse gas emissions by switching to an electric car has a net cost associated with it. Also, if we see that a weather disaster, made worse by climate change, costs X billions of dollars to recover from (e.g., Holmes, 2017), we have a sense that climate change can cause economic losses. Using dollar amounts to quantify costs is convenient but these price tags themselves can’t really explain the fundamentals behind the costs. In order to understand that, it is useful to think about the economy from first principles.

To start, let’s imagine a civilization with only four people who are isolated from each other. Due to their isolation, there is no trade between them and thus each of the four people is responsible for producing all of the things that they need to survive. So each person makes their own clothes, finds and decontaminates their own water, prepares their own food, and creates their own shelter. You can imagine that these tasks would take up almost all the time and energy of each of the individuals and there would be little time for leisure or to produce anything that would be considered to be a luxury. Thus, by modern standards, each of these four people would be living in extreme poverty.

 

ptbrown_clim_econ_fig_1

Now let’s imagine that the isolating barriers between these four people are lifted and they are allowed to trade with each other. This allows individuals to focus on producing the things that they are best at producing (or best at producing at the lowest opportunity cost) and trading for their other needs and desires. In this example, one person can focus on making clothes, one person can focus on producing drinking water, one person can focus on producing food and one person can focus on building shelter. Since each person does what they do best and since they can do these things better when they devote all of their time and effort to a given task, more of each product is produced.

ptbrown_clim_econ_fig_2

In global macroeconomics, the total value of goods and services is measured in Gross World Product. In the case above, specialization has allowed the gross product of this hypothetical world to increase by 50%.

Specialization and trade thus make everybody better off than they would be if they were each responsible for only their own needs and desires. In practice, trade itself is made much more efficient by the use of money (rather than using a barter system) because this avoids the necessity of a double coincidence of wants.

ptbrown_clim_econ_fig_3

It is important to note that money simply represents the medium of exchange and the store of value in an economy. The prices of each good are relative to other goods and relative to the total amount of money in the economy. The wealth of the world is not measured by how much money there is. If we triple the amount of money in this world, nobody becomes better off (i.e., there are no new goods produced and real Gross World Product does not increase). All that happens is that the prices of everything go up by a factor of 3 (you get monetary inflation).

ptbrown_clim_econ_fig_4

When economists cite numbers for Gross World Product (or Gross Domestic Product) they usually correct for any monetary inflation. Thus, Gross World Product is expressed in some standardized monetary value but it refers to actual goods and services. You can increase real Gross World Product by increasing the real amount of goods and services in the economy (e.g., through increased efficiency of production through increased specialization and trade) but you can’t increase real Gross World Product by simply increasing the amount of money in the economy.
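The distinction between nominal and real Gross World Product can be sketched in a few lines; all the quantities and prices below are made up purely for illustration:

```python
# Toy illustration (hypothetical numbers): tripling the money supply
# triples nominal prices but leaves real output unchanged.

goods = {"clothes": 4, "water": 4, "food": 4, "shelter": 4}   # units produced
prices = {"clothes": 1, "water": 1, "food": 1, "shelter": 1}  # price per unit

def gross_product(quantities, unit_prices):
    """Nominal gross product: sum of price * quantity over all goods."""
    return sum(quantities[g] * unit_prices[g] for g in quantities)

nominal_before = gross_product(goods, prices)            # 16

# Triple the money supply -> all prices rise threefold (inflation).
inflated = {g: 3 * p for g, p in prices.items()}
nominal_after = gross_product(goods, inflated)           # 48

# Deflate by the price index to recover real gross product.
price_index = 3.0
real_after = nominal_after / price_index                 # 16 -- unchanged
```

The nominal figure jumps by a factor of 3, but after deflating by the price index the real product is exactly what it was before: no new goods or services exist.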

It is also interesting to note that in our hypothetical world with specialization and trade, everyone still has basic subsistence needs and these needs are still being met by people working most of the day. Each person, however, is insulated from the production of some of the items that satisfy their own basic needs because they are trading for them rather than directly producing them. When someone uses the expression that they need to go work in an office all day in order to “put food on the table” they are not simply making an observation about their proximate motivations to go to work. Instead, they are making a fundamental observation about how specialization and trade literally underpins one’s ability to survive. Unfortunately, we do not live in a paradise Garden of Eden where all our needs and desires are magically satisfied without the exertion of human effort. In the idealized situation described here, individuals can either produce for their own needs and desires or they can produce something else that they can trade in exchange for products that satisfy their needs and desires1. This is essentially what all working people are doing in modern market economies.

Now let’s imagine that one of the people in our hypothetical world, the clothes maker, invents a clothes-making robot. This has two main effects: It allows for more clothes to be produced per unit time and it liberates some of the time of the clothes maker. The clothes maker can then use their extra time for more leisure or they can choose to use some of their extra time to produce something other than clothes. Let’s say they use their extra time to invent a mode of transportation which they themselves use and can also trade to others.

ptbrown_clim_econ_fig_5

Now everyone in the society has more clothes and everyone also has a mode of transportation. The Gross World Product has grown. Specifically, the Gross World Product has grown because automation (the robot) took a job (making clothes) which liberated the time and energy of the former clothes maker and allowed them to put effort into another project2. This is one of the primary processes responsible for the explosion in global economic production of the past few centuries:

ptbrown_clim_econ_fig_6

This digression, using an ultra-simplified hypothetical world, serves to make the point that when we are talking about economic well-being, we are not simply talking about price tags and wages. Instead, economic well-being has more to do with the efficiency by which societies are able to convert various raw inputs into goods and services (Gross World Product). These inputs are often called the factors of production and are sometimes divided into the categories of land (natural resources), labor (hours worked by people) and physical capital (technology used to make products). These inputs come together to produce output like, e.g., shelter.
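One standard way economists formalize this input-to-output relationship is with a production function. The sketch below uses a Cobb-Douglas form with invented parameter values; nothing in this post depends on this particular functional form:

```python
# Cobb-Douglas production function: Y = A * Land^a * Labor^b * Capital^c.
# The exponents and inputs below are hypothetical.

def output(tfp, land, labor, capital, a=0.3, b=0.4, c=0.3):
    """Output produced from land, labor, and capital inputs, scaled by
    total factor productivity (tfp), which captures technology."""
    return tfp * (land ** a) * (labor ** b) * (capital ** c)

baseline = output(tfp=1.0, land=100, labor=100, capital=100)   # ~100.0

# A technology improvement (higher total factor productivity) yields
# more output from the exact same land, labor, and capital inputs.
improved = output(tfp=1.2, land=100, labor=100, capital=100)   # ~120.0
```

In this framing, the clothes-making robot of the earlier example is just an increase in total factor productivity: the same inputs now yield more output.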

ptbrown_clim_econ_fig_7

Now let’s imagine that the labor necessary for producing shelter gets reduced because of some improvements in technology (e.g., an improvement in technology allows some of the tasks to get automated). This means that it now takes less input (labor has been reduced) to produce the same output. Since it takes less input, the cost of the output has decreased. In this case, it makes the buyers of shelter (everyone) better off since they can now get shelter at a lower price (i.e., for trading less of the goods and services that they produce). This also has the effect of liberating labor to do other things like producing, say, new communication technology3.

ptbrown_clim_econ_fig_8

 

The costs of climate change and the costs of addressing climate change 

The upshot of the above digression is that from a zoomed-out, macroeconomic perspective, a change that allows for production to become more efficient (in terms of inputs per unit output) causes more Gross World Product to be produced and thus is considered to be a net economic benefit. On the other hand, if some change makes production less efficient (in terms of input per unit output) then there is a net cost to Gross World Product and the economy. There are costs associated with climate change (sometimes called damages) and there are costs associated with addressing climate change. Let’s look at how these come about.

The costs of climate change

It is perhaps not surprising that the direct destruction of infrastructure from natural disasters constitutes economic losses. For example, climate change is expected to increase the intensity of the most intense hurricanes. Obviously, the destruction of lives and livelihoods that result from more intense hurricanes is a bad thing but some people might be tempted to imagine that there is a silver lining to this destruction. Won’t this destruction have some stimulating effect on the economy? Won’t this destruction create jobs in construction companies and in manufacturing? Won’t this all stimulate economic growth? It may be the case that a particular underemployed individual or community might benefit from these new construction projects but from a zoomed-out macroeconomic perspective, these projects represent a cost on the economy. This is because, when these destroyed homes are rebuilt, they are rebuilt at an opportunity cost: inputs that would have been used for producing something else are used instead to rebuild houses that already previously existed. Thus, rebuilding these homes does not stimulate the overall economy by, e.g., producing jobs. If this were the case, we could stimulate the economy by simply bulldozing homes every day and having people rebuild them.

 

ptbrown_clim_econ_fig_9

Climate change can also impose economic costs through pathways that are disguised as “adaptation”. Fundamentally, these costs come about because human society, as well as our natural resources, is adapted to the current climate. So even when changes are not obviously negative in direction, adapting to them still imposes a cost.

To take a seemingly benign example, imagine that as the world warms, ski resorts must be moved from lower elevations to higher elevations and from lower latitudes to higher latitudes in order to follow the snow. Again, this change may seem as though it is economically stimulating. After all, construction of these new ski resorts will appear to create jobs. But again, from a zoomed-out macroeconomic perspective, this adaptation represents a net cost to society. The reason is that construction of the new ski resorts requires inputs, and the net result is not new output but rather a replacement ski resort that only gets society back to where it started (in this case, one ski resort). Furthermore, the inputs required to create the new ski resort are used at an opportunity cost: something else that would have been created with them (like, say, modes of transportation) is forgone. The overall result is that there is a net loss of Gross World Product due to the adaptation.

ptbrown_clim_econ_fig_10

Some of the real-world pathways by which climate change is expected to harm the economy are through changes in agricultural yields, sea level rise, damage from extreme events, energy demand, labor efficiency, human health and even human conflict. These phenomena are diverse but in all these cases, the true economic cost comes about because less output (goods and services) ends up being produced per unit input.

ptbrown_clim_econ_fig_11

From Hsiang et al. (2017)

The costs of addressing climate change

In order to address climate change, we need to find ways of harnessing energy without emitting greenhouse gasses. In some circumstances, this can be done at negative costs or with economic benefits. For example, increases in the energy efficiency of some services, such as lighting, constitute less input per unit output. Thus, increasing energy efficiency (holding everything else constant) would provide an economic benefit and reduce climate change at the same time.
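A back-of-the-envelope sketch of this negative-cost idea, using entirely hypothetical lighting numbers (wattages, prices, and usage are all assumptions for illustration):

```python
# Sketch (hypothetical numbers) of why an efficiency upgrade can be a
# "negative cost": the same output (light) is produced with less input
# (electricity), so the upgrade pays for itself.

hours_per_year = 1000             # assumed usage
electricity_price = 0.12          # $ per kWh (assumed)

old_bulb_watts = 60               # incandescent bulb (assumed)
new_bulb_watts = 9                # LED with comparable light output (assumed)
upgrade_cost = 3.00               # $ price of the new bulb (assumed)

def annual_cost(watts):
    """Annual electricity cost for one bulb at the assumed usage."""
    kwh = watts * hours_per_year / 1000
    return kwh * electricity_price

savings_per_year = annual_cost(old_bulb_watts) - annual_cost(new_bulb_watts)
payback_years = upgrade_cost / savings_per_year   # well under one year here
```

With these made-up numbers the bulb pays for itself in a few months, after which the efficiency gain is pure economic benefit: the same service for less input.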

ptbrown_clim_econ_fig_12

ptbrown_clim_econ_fig_13

Unfortunately, most of the actions associated with addressing climate change do come at an economic cost. These are typically called abatement costs or mitigation costs. Fossil fuels represent the accumulation of solar energy over millions of years that we can release simply by digging them up and burning them. Non-fossil fuel (i.e., “alternative” or “renewable”) sources of energy tend to be more expensive partly because the energy source is more diffuse or more variable and thus produces less output (Joules of useable energy) per unit input.

ptbrown_clim_econ_fig_14

Renewable energy is coming down in cost steadily as technology advances and it is already cheaper than traditional sources of energy in many specific circumstances. However, there is still a long way to go before renewable energy is at cost-parity with fossil fuel combustion in terms of being able to provide all the energy necessary to power society. This is reflected in the current price ratios of renewable energy relative to fossil fuel energy shown in red:

ptbrown_clim_econ_fig_15

From Shayegh et al. (2017)

We can readily imagine a technically-feasible future energy infrastructure that eliminates greenhouse gas emissions:

ptbrown_clim_econ_fig_16

From Davis et al. (2018)

However, achieving the above transformation comes at a cost. This is because, over the past two centuries, the global energy-producing infrastructure, along with the associated human institutions and behaviors, has matured in coordination with technologies that burn fossil fuels and emit greenhouse gasses as a byproduct. As of this writing, ~80% of the 18 terawatts necessary to power global civilization still originates from these fossil-fuel burning systems. Thus, reorganizing this infrastructure will require a great deal of resources to be reallocated from alternative uses in order to get roughly the same amount of energy that could have been produced from burning fossil fuels. This comes at an opportunity cost of forgone production in other areas and thus constitutes a cost on global economic production. The net costs on the global economy have been estimated to be between ~2% and ~10% of consumption (which is Gross World Product minus investment) by 2100 for the proposals that would limit warming to the greatest degree:

ptbrown_clim_econ_fig_17

From Edenhofer et al. (2014)

Addressing climate change through government regulation

The aforementioned costs associated with climate change itself (damages) are what justify governmental policies designed to limit climate change. This is probably best understood in the framework of total surplus from welfare economics. In this framework, the total economic well-being of society can be measured in terms of consumer surplus plus producer surplus. Consumer surplus is the difference between what a person is willing to pay for a good or service and what they actually pay. So if I am willing to pay $700 for a refrigerator and I buy one for $500, then I have a consumer surplus of $200 and feel as though I have gotten a good deal. Producer surplus is the difference between what it costs to produce a good or service and what that good or service is sold for. So if it costs $300 to produce the aforementioned refrigerator, then the producer surplus in our transaction was also $200 and the total surplus was $200+$200=$400. This is represented graphically with marginal supply and demand curves like those below.
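The surplus arithmetic can be checked directly with illustrative numbers (here a $700 willingness to pay, a $500 price, and a $300 production cost):

```python
# Surplus is the gap between willingness and actual price on each
# side of a trade. Numbers are illustrative.

willingness_to_pay = 700   # the most the buyer would pay
price = 500                # what the refrigerator actually sells for
production_cost = 300      # what it costs the seller to produce

consumer_surplus = willingness_to_pay - price        # 200
producer_surplus = price - production_cost           # 200
total_surplus = consumer_surplus + producer_surplus  # 400
```

Note that total surplus does not depend on the price itself, only on the gap between willingness to pay and production cost; the price just divides the surplus between the two parties.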

ptbrown_clim_econ_fig_18

The demand curve illustrates that as the price goes up, there will be less demand and the supply curve illustrates that as the price goes up there will be more supply. Trade will occur as long as the producer receives producer surplus and the consumer receives consumer surplus. In a competitive free market with perfect information, the price will equilibrate at the point that maximizes total surplus (consumer plus producer surplus) and thus maximizes the total economic well-being of society. This was proven under certain assumptions in the first fundamental theorem of welfare economics.

A problem occurs, however, when the costs of the transaction are not fully borne by the producer. This is called a negative externality. Pollution (like human-caused CO2 emissions) is the quintessential negative externality. In a situation with a negative externality, a free market will naturally produce a price for a product that is too low and thus too much of the product will be produced. The external cost to society from CO2 emissions is quantified by the economic damages associated with climate change (and is formally quantified in policy circles with the Social Cost of Carbon).  Under this situation, the free market has failed to produce the economically efficient outcome and thus it is justifiable (if the goal is to maximize total surplus) for the government to step in and raise the price of production (through e.g., a tax) to a price that would maximize total surplus4. This is called a Pigovian tax and is one of the main policy pathways by which climate change can be addressed5.

ptbrown_clim_econ_fig_19
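A minimal numerical sketch of this logic, using hypothetical linear supply and demand curves: the free market overproduces, and a Pigovian tax equal to the external cost restores the surplus-maximizing quantity.

```python
# Hypothetical linear curves illustrating a negative externality.

a, b = 100, 1.0    # demand:  price = a - b * Q
c, d = 10, 0.5     # private supply (marginal cost): price = c + d * Q
e = 15             # external cost imposed on society per unit produced

# Free-market equilibrium ignores the external cost: a - bQ = c + dQ.
q_market = (a - c) / (b + d)            # 60.0

# The socially optimal quantity prices in the full (private + external)
# marginal cost: a - bQ = (c + e) + dQ.
q_optimal = (a - c - e) / (b + d)       # 50.0

# A Pigovian tax equal to the external cost per unit reproduces the
# optimum, because producers now face marginal cost (c + tax) + dQ.
tax = e
q_taxed = (a - c - tax) / (b + d)       # 50.0, matching the optimum
```

With these made-up curves the untaxed market produces 60 units where only 50 are socially optimal; setting the tax equal to the per-unit external cost closes the gap exactly.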

So there is an economic cost to climate change, there is a cost to addressing climate change and there is a mechanism (a government tax) that can correct the market failure and divert resources away from activities that emit greenhouse gasses. Can these three ingredients be taken into consideration simultaneously in order to evaluate the best pathway for society going forward? Yes, they can. This task has typically been undertaken with Integrated Assessment Models (IAMs). The three highest-profile of these models (and the three models used by the U.S. government to estimate the global Social Cost of Carbon) are FUND, PAGE and DICE. These models weigh the benefits of avoided economic damages from climate change against the costs of mitigating greenhouse gas emissions and calculate the optimal carbon tax and global greenhouse gas emissions reduction pathway such that the net present value of global social welfare is maximized (see “Opt” pathway below).

ptbrown_clim_econ_fig_20

From Nordhaus (2013)

The “Opt” pathway above results in global temperatures stabilizing at roughly 3 degrees Celsius above pre-industrial levels, which is comparable to what would probably result from countries’ currently agreed-upon Intended Nationally Determined Contributions under the Paris Agreement6.
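To give a feel for the structure of these calculations, here is a drastically simplified, single-period caricature of the optimization an IAM performs. Every functional form and coefficient below is invented for illustration and bears no relation to the actual FUND, PAGE, or DICE models:

```python
# Toy trade-off: pick the abatement level that minimizes the sum of
# climate damages and abatement costs. Coefficients are invented.

def total_cost(abatement, damage_scale=4.0, abate_scale=2.0):
    """Damages fall as emissions are cut; abatement costs rise steeply."""
    damages = damage_scale * (1 - abatement) ** 2    # cost of residual warming
    abatement_cost = abate_scale * abatement ** 2.8  # cost of cutting emissions
    return damages + abatement_cost

# Brute-force search over abatement fractions between 0 (do nothing)
# and 1 (eliminate all emissions immediately).
grid = [i / 1000 for i in range(1001)]
best = min(grid, key=total_cost)

# The minimum lies strictly between the two extremes: some abatement is
# cheaper than none, but total abatement costs more than it avoids.
```

Even in this cartoon version, the qualitative result mirrors the post’s framing of the two extreme cases: neither zero abatement nor instantaneous full abatement minimizes total cost.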

Conclusion

Fundamentally, economics is all about real goods and services and the efficiency by which they are produced. Economic costs are incurred when something changes such that it becomes less efficient to produce goods and services (less output per unit input). Both climate change itself (the problem) and addressing climate change (the solution) present economic costs because they require resources to be reallocated away from alternative uses in such a way that total production is less efficient. One strategy of dealing with climate change is to correct for market failures by increasing the cost of greenhouse gas emissions to their true total cost. The time dimension (and cost of alternative energy) can be taken into consideration by using Integrated Assessment Models. These models tend to calculate that the optimal economic pathway entails a reduction in greenhouse gas emissions through the remainder of the 21st century in a pathway somewhat similar to what countries have agreed upon under the Paris Agreement.

Footnotes

  1. Of course, most societies have decided that it is undesirable to allow people to completely fail in a competitive market and thus starve to death. This is particularly the case when some misfortune has made it so that a given person is not able to trade a valuable good or service. Thus, social safety nets have been implemented which essentially mandate that some portion of society subsidize another portion of society without compensation.
  2. Notice that the person who invented the clothes-making-robot is now producing more than some of the other people in the society. The other people are indicating that they value the clothes and transportation more than some of the money that they have and thus they are sending money to the clothes maker in voluntary win-win transactions. Thus, the clothes-maker will accumulate money in proportion to the real wealth that they have created for society. In other words, they are not becoming rich at the expense of other members of the society but instead, their monetary wealth is an indication that they have created real wealth that is valued by the rest of society. Thus, in an idealized market-based economy like the one being described here, monetary wealth will end up being distributed to individuals in proportion to the real physical wealth that they produce. Monetary wealth will not necessarily be distributed in proportion to how much/hard people work or how “moral”/“good” their work might be, as judged by a 3rd party.
  3. Although simplified, this is analogous to what has been going on in the real world. For example, in 1820, approximately 72% of the American workforce were farmers. Advances in technology have allowed much more food to be produced with less labor and thus US GDP has exploded over the same time period that those farm jobs were eliminated.
  4. Incidentally, the justification for government funding (subsidizing) science is the reverse of this situation. Specifically, it is the recognition that scientific discoveries represent a positive externality. Under a positive (consumer) externality, the total demand (private+external) is larger than the private demand alone and thus the free market produces too few scientific discoveries at too low a price.
  5. To the extent that fossil fuels are subsidized rather than taxed, this further distorts the market away from its optimal price and quantity. Removing these subsidies would also constitute negative mitigation costs.
  6. Of course, the Paris Agreement’s goal was not to optimize for Gross World Product. The most stringent targets can be thought of as being more optimal in terms of Natural Capital or “non-market” goods that have less tangible value than commodities that are regularly bought and sold in markets.


Why is concern about global warming so politically polarized?

As a climate scientist, I often hear it bemoaned that the public discussion of human-caused global warming is so politically polarized (Pew Research, 2019). The argument goes that global warming is simply a matter of pure science and thus there should be no divisions of opinion along political lines. Since it tends to be the political Right that opposes policies designed to address global warming, the reason for the political division is often placed solely on the ideological stubbornness of the Right.

This is a common theme in research on political divides regarding scientific questions. These divides are often studied from the perspective of researchers on the Left who, rather self-servingly, frame the research question as something like “Our side came to its conclusions from pure reason, so what exactly makes the people who disagree with us so biased and ideologically motivated?” I would put works like The Republican Brain: The Science of Why They Deny Science — and Reality in this category.

Works like The Republican Brain correctly point out that those most dismissive of global warming tend to be on the Right, but they incorrectly assume that the Left’s position is therefore informed by dispassionate logic. If the Left were motivated by pure reason, then the Left would not be just as likely as the Right to deny science on the safety of vaccines and genetically modified foods. Additionally, as the book’s author Chris Mooney has argued elsewhere, the Left is more eager than the Right to deny mainstream science when it doesn’t support a blank-slate view of human nature. This suggests that fidelity to science and logic is not what motivates the Left’s concern about global warming.

Rather than thinking of the political divide on global warming as being the result of logic vs. dogma, a much better explanation is that people tend to accept conclusions, be they scientific or otherwise, that support themes, ideologies, and narratives that are a preexisting component of their worldview (e.g., Washburn and Skitka, 2017). It just so happens that the themes, ideologies, and narratives associated with human-caused global warming and its proposed solutions align well with archetypal worldviews of the Left and create great tension with archetypal worldviews of the Right.

The definitional distinction between the political Right and the political Left originates from the French Revolution and is most fundamentally about the desirability and perceived validity of social hierarchies. Definitionally, those on the Right see hierarchies as natural, meritocratic and justified while those on the Left see hierarchies more as a product of luck and exploitation. A secondary distinction, at least contemporarily in the West, is that those on the Right tend to emphasize individualism at the expense of collectivism and those on the Left prefer the reverse.

There are several aspects of the contemporary human-caused global warming narrative that align well with an anti-hierarchy, collectivist worldview. This makes the issue gratifying to the sensibilities of the Left and offensive to the sensibilities of the Right.

The most fundamental of these themes is the degree to which humanity itself can be placed at the top of the hierarchy of life on the planet. Those on the Right would be more likely to articulate that it is justified to privilege the interests of humanity over the interests of other species or the “interests” of the planet as a whole (to the degree that there is such a thing). On the other hand, those on the Left would be more likely to emphasize across-species egalitarianism and advocate for reduced impact on the environment, even if it is against the interest of humans.

Within humanity, there are also at least two levels for which narratives about hierarchies influence thinking on global warming. One is the issue of developed vs. developing countries. The blame for global warming falls disproportionately on developed countries (in terms of historical greenhouse gas emissions) and thus proposed solutions often call on developed countries to bear the brunt of the cost of reducing emissions going forward. (Additionally, it is argued that developed countries have the luxury of being able to afford the associated increases in the cost of energy.) Overall, the solutions proposed for global warming imply that wealthy countries owe a debt to the rest of humanity that should come due sooner rather than later.

Those on the Right are more likely to see the wealth of developed countries as being rightfully earned through their own industriousness while those on the Left are more likely to view the disproportionate wealth of different countries as being fundamentally unjust and likely originating from exploitation. Thus, the story that wealthy countries are to blame for the global warming problem and that the solution is to penalize wealthy countries and subsidize poor countries is one that aligns well with preexisting narratives on the Left but not those on the Right. An accentuating factor is the tendency of the Right to be more in favor of national autonomy and thus opposed to global governance and especially international redistribution.

The third level at which hierarchy narratives couple with political divides on global warming relates to the wealth of corporations and individuals. On the Right, the story of oil and gas companies (as well as electric utilities that utilize fossil fuels) is one of innovation and wealth creation: The smartest and most deserving people and organizations found the most efficient ways to transform idle fossil fuel resources into the power that runs society and greatly enhances human wellbeing. Under such a narrative, it is fundamentally unjust to point a finger of blame at those entities (both corporations and individuals) that have done so much for human progress. The counter-narrative from the Left is that greedy corporations and individuals exploited natural resources for their own gain at the expense of the planet and the general public. Under this narrative, policies that blame and punish those in the fossil fuel industry are seen as bringing about a cosmic justice that is necessary for them to atone for their sins.

The other major overlapping theme that defines the divide between the Left and the Right on global warming is the degree to which collectivism is emphasized over individualism. Global warming is fundamentally a tragedy-of-the-commons problem in which rational agents, each acting in their own interest, produce an outcome that is worse for everyone in the long term. These types of 'collective-action problems' almost necessarily call for top-down government intervention, and thus they are inevitably associated with collectivism at the expense of individualism. Global warming's long-term nature also calls for an embrace of collectivism across generations. Again, this natural alignment of the global warming problem with collectivist themes makes the issue much more palatable for the Left than for the Right.

In addition to these fundamental ideological issues, there are a number of more circumstantial factors that I believe have contributed to polarization regarding global warming.

One is that, in the U.S. at least, Al Gore was the primary figure who brought global warming into the national consciousness. If one wanted the issue to be "non-political", one could hardly have conceived of a worse flagbearer for the movement than a former vice president and presidential nominee.

Also, there is the longstanding claim by those on the Right that the global warming issue is just a Trojan Horse intended as an excuse to bring about all the desired policies of the Left. Books like This Changes Everything: Capitalism vs. The Climate and plans like the Green New Deal do little to dispel this narrative. For example, the Green New Deal Resolution contained the following proposals:

“Providing all people of the United States with— (i) high-quality health care; (ii) affordable, safe, and adequate housing; (iii) economic security; and (iv) access to clean water, clean air, healthy and affordable food, and nature.”

“Guaranteeing a job with a family-sustaining wage, adequate family and medical leave, paid vacations, and retirement security to all people of the United States.”

“Providing resources, training, and high-quality education, including higher education, to all people of the United States, with a focus on frontline and vulnerable communities, so those communities may be full and equal participants in the Green New Deal mobilization”.

These objectives clearly align with the goals of the Left, but it is much less clear how directly they relate to global warming.

So it should really not be particularly mysterious that opinions on global warming tend to divide along political lines. It is not because one side embraces pure reason while the other remains obstinately wedded to political dogmatism. It is simply that the problem and its proposed solutions align more comfortably with the dogma of one side than the other. That does not mean, however, that the Left is as out-of-step with the science of global warming as the Right. It really is the case that the Right is more likely to deny the most well-established aspects of the science. But if skeptical conservatives are to be convinced, the Left must learn to reframe the issue in a way that is more palatable to the conservative worldview.


Daily, Seasonal, Annual and Decadal Temperature Variability on a Single Graph

New York Daily Seasonal Annual and Decadal Temperature

The graph above is a record of temperature from 1950-2017 for New York City.

What is unique about this graph is that it shows daily, seasonal, annual and decadal temperature variability on a single Y-axis, revealing how their magnitudes compare.

The daily temperature cycle is represented by the three colored lines in each panel, where red, black and blue represent the daily maximum, daily average and daily minimum for each season and year. For example, the red dots in the far left panel represent the average of all the daily maximum temperatures for the spring of each year.

We can see that in New York, the daily minimum temperature tends to be around 13 degrees C (23 degrees F) lower than the daily maximum temperature.

The annual temperature cycle is illustrated by the variation across the four panels with each panel representing one of the four canonical seasons.

We can see that in New York, the summer tends to be about 22 degrees C (40 degrees F) warmer than the winter.

Interannual temperature variability is illustrated by the year-to-year wiggles in each line.

We can see that in New York, there can be year-to-year swings in temperature (for a given season) of several degrees C. For example, the summer of 1999 had a daily average temperature of 24 C (75 F) and the summer of 2000 had a daily average temperature of 21 C (70 F). It is also notable that year-to-year variability in winter temperature is substantially larger than year-to-year variability in summer temperature.

Decadal temperature changes are represented by the linear trend lines. We can see long term warming which is primarily driven by increases in greenhouse gasses (i.e., this is the local manifestation of global warming). The long term warming is generally more prominent in the daily minimum temperature compared to the daily maximum temperature and more prominent in the winter compared to the summer. In other words, global warming is shrinking both the daily and seasonal temperature cycles.

In terms of absolute magnitude, the seasonal cycle is the dominant mode of variability, followed by the daily cycle, year-to-year variability and finally, long term warming.
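As a rough illustration of how these relative magnitudes can be estimated from a daily temperature series, here is a minimal sketch in Python using synthetic data. The numbers below are invented to loosely mimic the New York values quoted above; this is not the Berkeley Earth data itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily-mean temperatures for 1950-2017: a seasonal cycle,
# day-to-day noise, and a small warming trend (all numbers are invented
# to loosely mimic the New York values quoted above).
years = np.arange(1950, 2018)
days = np.arange(365)
seasonal = -11 * np.cos(2 * np.pi * days / 365)          # ~22 C summer-winter range
series = (seasonal[None, :]
          + rng.normal(0, 2.0, (years.size, days.size))  # daily weather noise
          + 0.02 * (years - years[0])[:, None])          # warming of ~0.2 C/decade

# Seasonal cycle: range of the mean calendar-day climatology
climatology = series.mean(axis=0)
seasonal_range = climatology.max() - climatology.min()

# Interannual variability: std of annual means after removing the linear trend
annual_means = series.mean(axis=1)
trend = np.polyfit(years, annual_means, 1)               # [slope, intercept]
interannual_sd = np.std(annual_means - np.polyval(trend, years))

# Decadal change: warming per decade from the linear fit
warming_per_decade = trend[0] * 10

print(f"seasonal range: {seasonal_range:.1f} C")
print(f"interannual sd: {interannual_sd:.2f} C")
print(f"trend: {warming_per_decade:.2f} C/decade")
```

The same kind of calculation, done per season and separately for daily maxima and minima, underlies the panels in the graph above.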

Thus, while Global Warming is very pronounced on global spatial scales and centennial and greater timescales, we can see that, thus far, it has had a modest influence on the temperature in New York relative to the typical variability at the daily, seasonal and annual timescales.

Data here is from Berkeley Earth.


El Niño’s influence on the upcoming season’s global land temperatures

The El Niño-Southern Oscillation (ENSO) is the preeminent mode of global climate variability on timescales of months to several years. El Niño events cause temporary elevations in global average temperatures, and against the background of global warming from increasing greenhouse gas concentrations, they are often associated with new global temperature records. El Niños produce warmer-than-typical global average temperatures because they involve a large release of heat from the equatorial Pacific to the atmosphere, which is then distributed globally. This release of heat also imprints on the structure of the atmosphere and shifts the tendencies of typical atmospheric circulations. In certain locations, advection from climatologically colder regions (e.g., flow from the north in the Northern Hemisphere) becomes more prominent than normal during El Niño events, which can cause a local tendency for temperatures to cool during El Niños despite elevated temperatures globally. The large-scale atmospheric circulation is also influenced by the state of ENSO differently depending on the time of year.

This all means that if you want to translate the state of ENSO into a seasonal forecast (e.g., a forecast for 3-month average temperatures) at a particular location, you have to be careful to examine both the specific relationship between ENSO and climate variability at the location you are interested in as well as how that relationship depends on the time of the year. This is the purpose of the Simple ENSO Regression Forecast (SERF).

The SERF is based on an ensemble of dynamical and statistical model forecasts that predict the future state of ENSO, combined with the historical relationships between the state of ENSO and concurrent local surface air temperature departures from average (as a function of location and time of the year).

At ClimateAi, we are developing considerably more sophisticated machine learning techniques for application to seasonal forecasting that are able to achieve enhanced skill over this simple method. Nevertheless, this simple method is transparent and serves as a useful benchmark for more sophisticated methods to be compared to.
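The regression step described above can be sketched in a few lines. This is a toy illustration with synthetic data and invented ENSO sensitivities, not the actual SERF code or ClimateAi's models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the historical record (all data here is synthetic):
# 40 years of JJA-mean Nino3.4 index values and concurrent temperature
# anomalies at a handful of grid points, some positively and some
# negatively correlated with ENSO.
n_years, n_points = 40, 5
nino34 = rng.normal(0, 1, n_years)                   # historical ENSO index
true_slopes = np.array([0.5, -0.3, 0.0, 0.8, -0.6])  # per-location ENSO sensitivity
temps = nino34[:, None] * true_slopes + rng.normal(0, 0.3, (n_years, n_points))

# Fit an ordinary least-squares regression at each location:
# temperature anomaly ~ a + b * Nino3.4
X = np.column_stack([np.ones(n_years), nino34])
coefs, *_ = np.linalg.lstsq(X, temps, rcond=None)    # shape (2, n_points)

# Forecast step: plug in the ensemble-predicted index for the coming season
predicted_nino34 = 0.7                               # weak El Nino-like state
forecast_anomaly = coefs[0] + coefs[1] * predicted_nino34

print(forecast_anomaly.round(2))
```

In the real forecast this regression is fit separately for each grid point and each 3-month season, which is what captures the location- and season-dependence discussed above.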

Below is the Simple ENSO Regression Forecast (SERF) for the 2019 Northern Hemisphere summer and Southern Hemisphere winter (June-July-August 2019). A weak El Niño-like state is expected to persist throughout the upcoming season. This translates into an expectation for below-normal temperatures over northern/central Canada, the US upper Midwest and much of Russia. Above-average temperatures are expected over the US Pacific Northwest, Mexico, much of South America, Africa, India, the Middle East and Europe (see Figure 1 and Figure 2 below). One reason that the tropics show more consistent warming is that background global warming has a higher signal-to-noise ratio there, which means any given season is more likely to be above its 1971-2000 average, regardless of the state of ENSO.

global_SERF_JJA_2019

Figure 1. Top) SERF forecast of the average temperature for June-July-August 2019 relative to the long term average (from 1971-2000) for each location. Bottom) Chance that the average temperature over June-July-August will be above the long term average (from 1971-2000) for June-July-August at that location.

regional_SERF_JJA_2019

Figure 2. Same as the bottom of figure 1 but zoomed in to particular regions.


Does the IPCC say we have until 2030 to avoid catastrophic global warming?

In late 2018 the Intergovernmental Panel on Climate Change (IPCC) released a report on the impacts associated with global warming of 1.5°C (2.7°F) above preindustrial levels (as of 2019 we are at about 1.0°C above pre-industrial levels) as well as the technical feasibility of limiting global warming to such a level. The media coverage of the report immediately produced a meme that continues to persist. The meme is some kind of variation of the following:

The IPCC concluded that we have until 2030 (or 12 years) to avoid catastrophic global warming

Below is a sampling of headlines from coverage that propagated this meme.

However, these headlines are essentially purveying a myth. I think it is necessary to push back against this meme for two main reasons:

1) It is false.

2) I believe that spreading this messaging will ultimately undermine the credibility of the IPCC and climate science more generally.

Taking these two points in turn:

1) The IPCC did not conclude that society has until 2030 to avoid catastrophic global warming.

First of all, the word “catastrophic” does not appear in the IPCC report. This is because the report was not tasked with defining a level of global warming that might be considered catastrophic (or any other alarming adjective). Rather, the report was tasked with evaluating the impacts of global warming of 1.5°C (2.7°F) above preindustrial levels, comparing them to the impacts associated with 2.0°C (3.6°F) above preindustrial levels, and evaluating the changes to global energy systems that would be necessary to limit global warming to 1.5°C.

In the report, the UN has taken the strategy of defining temperature targets and then evaluating the impacts at these targets rather than asking what temperature level might be considered to be catastrophic. This is presumably because the definition of a catastrophe will inevitably vary from country to country and person to person, and there is not robust evidence that there is some kind of universal temperature threshold where a wide range of impacts suddenly become greatly magnified. Instead, impacts seem to be on a continuum where they simply get worse with more warming.

So what did the IPCC conclude regarding the impacts of global warming of 1.5°C? The full IPCC report constituted an exhaustive literature review but the main conclusions were boiled down in the relatively concise summary for policymakers. There were six high-level impact-related conclusions:

So to summarize the summary, the IPCC’s literature review found that impacts of global warming at 2.0°C are worse than at 1.5°C.

The differences in tone between the conclusions of the actual report and the media headlines highlighted above are rather remarkable. But can some of these impacts be considered to be catastrophic even if the IPCC doesn’t use alarming language? Again, this would depend entirely on the definition of the word catastrophic.

If one defines catastrophic as a substantial decline in the extent of Arctic sea ice, then global warming was already catastrophic a couple of decades ago. If global warming intensified a wildfire to the extent that it engulfed your home (whereas it would not have without global warming), then global warming has already been catastrophic for you.

However, I do not believe that changes in Arctic sea ice extent and marginal changes in damages from forest fires (or droughts, floods, etc.) are what most people envision when they think of the word catastrophic in this context. I believe that the imagery evoked in most people’s minds is much more at the scale of a global apocalyptic event. This idea is exemplified in Michael Barbaro’s question about the IPCC report that he asked on The New York Times’ The Daily:

“If we overshoot, if we blow past 1.5°C and 2°C degree warming, is it possible at that point that we’ve lost so much infrastructure, so much of the personnel and the resources required to fix this that it can’t be done anymore? Will there be enough of the world left to implement this in a way that could be effective?”

-Michael Barbaro, New York Times, The Daily, 10/19/2018

It is also articulated in a tweet from prominent climate science communicator Eric Holthaus:

If catastrophe is defined as global-scale devastation to human society then I do not see how it could be possible to read the IPCC report and interpret it as predicting catastrophe at 1.5°C or 2°C of warming. It simply makes no projections approaching such a level of alarm.

2) Undermining credibility.

Some will object to me pointing out that the IPCC has not predicted a global-scale societal catastrophe by 2030. They will inevitably suggest that whether or not the meme is strictly true, it is useful for motivating action on climate policy and therefore it is counterproductive to push back against it. I could not disagree more with this line of thinking.

The point of a document like the IPCC report should be to inform the public and policy makers in a dispassionate and objective way, not to make a case in order to inspire action. The fundamental reason for trusting science in general (and the IPCC in particular) is the notion that the enterprise will be objectively evaluating our best understanding of reality, not arguing for a predetermined outcome. I believe that the IPCC report has adhered to the best scientific standards but the meme of a predicted catastrophe makes it seem as though it has veered into full advocacy mode – making it appear untrustworthy.

An on-the-record prediction that may come back to haunt us

Apart from the inaccurate characterization that the IPCC has projected a catastrophe at 1.5°C, the other potentially harmful aspect of the media headlines above is that they put a timetable on the catastrophe that is very much in the near-term (2030). The year 2030 comes from the idea that we could first cross the 1.5°C threshold (at the annual mean level) in 2030, as is articulated in the report:

Now, if we immediately implement the global climate policies necessary to avoid 1.5°C of warming, then the prediction of a catastrophe will never be put to the test. However, as the IPCC report makes clear, achieving the cuts in emissions necessary to limit global warming to 1.5°C represents a truly massive effort:

Given that this effort would likely be massively expensive and represents a large technical challenge, it is unlikely to occur. This means that we are likely to pass 1.5°C of warming sometime in the 2030s, 2040s or 2050s. At this point – assuming that nothing resembling what most people would consider to be a global societal catastrophe has occurred – the catastrophe meme associated with the 2018 IPCC report will be dredged up and used as ammunition against the credibility of climate science and the IPCC. I fear that it will be used to undermine any further scientific evaluation of impacts from global warming.

In my experience, the primary reason that people skeptical of climate science come to their skepticism is that they believe climate scientists are acting as advocates rather than dispassionate evaluators of evidence. They believe climate scientists are acting as lawyers, making the case for climate action, rather than as judges objectively weighing facts. The meme of a global catastrophe by 2030 puts on the record a prediction that is likely to be proven false, and therefore likely to reinforce this notion of ‘climate scientists as untrustworthy activists’ and to harm the credibility of climate science thereafter.


California fires and Global Warming’s influence on lack of moisture

This autumn has been very dry in California and this has undoubtedly increased the chance of occurrence of the deadly wildfires that the state is seeing.

When assessing the influence of global warming (from human burning of fossil fuels) on these fires, it is relevant to look at climate model projections of extremely dry autumn conditions in California. Below is an animation that uses climate models to calculate the odds that any given November in California will be extremely dry.

Here, extremely dry is defined as a California statewide November with soil moisture content three standard deviations below the mean, where the mean and standard deviation are defined over the period 1860-1900.

We can see that these extremely dry Novembers in California go from being exceptionally rare early in the period (by definition), to being more likely now (~1% chance), to being much more likely by the end of the century (~7% chance).

In terms of an odds ratio, this would indicate that “extremely dry” conditions are approximately 7 times more likely now than they were at the end of the 19th century and that these “extremely dry” conditions would be approximately 50 times more likely at the end of the century under an RCP8.5 scenario.

 

*chance is calculated by looking at the frequency of California Novembers below the 3 standard deviation threshold across all CMIP5 ensemble members (70) and using a moving window of 40 years.
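A minimal sketch of this frequency calculation, using a synthetic stand-in for the CMIP5 ensemble (the drying trend and all numbers below are invented for illustration; the real calculation uses actual model output):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the CMIP5 ensemble: 70 members of November
# statewide soil moisture anomalies, 1860-2100, with an invented forced
# drying trend emerging after ~1980.
years = np.arange(1860, 2101)
members = 70
drying = -0.01 * np.clip(years - 1980, 0, None)
moisture = rng.normal(0, 1, (members, years.size)) + drying

# Threshold: 3 standard deviations below the 1860-1900 mean
base = (years >= 1860) & (years <= 1900)
mu, sd = moisture[:, base].mean(), moisture[:, base].std()
threshold = mu - 3 * sd

# Chance of an "extremely dry" November: frequency below the threshold
# across all members within a 40-year moving window
half = 20
chance = np.array([
    (moisture[:, max(0, i - half):i + half] < threshold).mean()
    for i in range(years.size)
])

early = chance[years == 1880][0]   # rare by construction
late = chance[years == 2080][0]    # more likely as the forced drying grows
print(f"1880: {early:.4f}  2080: {late:.4f}  odds ratio: {late / max(early, 1e-6):.0f}")
```

The odds ratios quoted above come from exactly this kind of comparison: the windowed frequency at a later date divided by the windowed frequency at the end of the 19th century.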


Revisiting a Claim of Reduced Climate Sensitivity Uncertainty

Nature has published a Brief Communications Arising between us (Patrick Brown, Martin Stolpe, and Ken Caldeira) and Peter Cox, Femke Nijsse, Mark Williamson and Chris Huntingford regarding their paper published earlier this year, “Emergent constraint on equilibrium climate sensitivity from global temperature variability” (Cox et al. 2018).


Summary

  • Cox et al. (2018) used historical temperature variability to argue for a large reduction in the uncertainty range of climate sensitivity (the amount of global warming that we should expect from a doubling of atmospheric carbon dioxide) and a lowering of the central estimate of climate sensitivity.
  • We show that alternative methods, which we argue are better justified theoretically, suggest that historical temperature variability provides, at best, only a small reduction in climate sensitivity uncertainty, and that it does not robustly lower or raise the central estimate of climate sensitivity.

 


Background

The Cox et al. (2018) paper is about reducing uncertainty in the amount of warming that we should expect the earth to experience for a given change in greenhouse gasses. Their abstract gives a nice background and summary of their findings:

Equilibrium climate sensitivity (ECS) remains one of the most important unknowns in climate change science. ECS is defined as the global mean warming that would occur if the atmospheric carbon dioxide (CO2) concentration were instantly doubled and the climate were then brought to equilibrium with that new level of CO2. Despite its rather idealized definition, ECS has continuing relevance for international climate change agreements, which are often framed in terms of stabilization of global warming relative to the pre-industrial climate. However, the ‘likely’ range of ECS as stated by the Intergovernmental Panel on Climate Change (IPCC) has remained at 1.5–4.5 degrees Celsius for more than 25 years. The possibility of a value of ECS towards the upper end of this range reduces the feasibility of avoiding 2 degrees Celsius of global warming, as required by the Paris Agreement. Here we present a new emergent constraint on ECS that yields a central estimate of 2.8 degrees Celsius with 66 per cent confidence limits (equivalent to the IPCC ‘likely’ range) of 2.2–3.4 degrees Celsius.

Thus, the Cox et al. (2018) study found a (slight) reduction in the central estimate of climate sensitivity (2.8ºC relative to the oft-quoted central estimate of 3.0ºC) and a large reduction in the uncertainty for climate sensitivity, as they state in their press release on the paper:

While the standard ‘likely’ range of climate sensitivity has remained at 1.5-4.5ºC for the last 25 years the new study, published in leading scientific journal Nature, has reduced this range by around 60%.

Combining these two results drastically reduces the likelihood of high values of climate sensitivity. This finding was highlighted by much of the news coverage of the paper. For example, here’s the beginning of The Guardian’s story on the paper:

Earth’s surface will almost certainly not warm up four or five degrees Celsius by 2100, according to a study which, if correct, voids worst-case UN climate change predictions.

A revised calculation of how greenhouse gases drive up the planet’s temperature reduces the range of possible end-of-century outcomes by more than half, researchers said in the report, published in the journal Nature.

“Our study all but rules out very low and very high climate sensitivities,” said lead author Peter Cox, a professor at the University of Exeter.

 


Our Comment

I was very interested in the results of Cox et al. (2018) for a couple of reasons.

First, just a few weeks prior to the release of Cox et al. (2018) we had published a paper (coincidentally, also in Nature) which used a similar methodology but produced a different result (our study found evidence for climate sensitivity being on the higher end of the canonical range).

Second, the Cox et al. (2018) study is based on an area of research that I had some experience in: the relationship between short-term temperature variability and long-term climate sensitivity. The general idea that these two things should be related has been around for a while (for example, it’s covered in some depth in Gerard Roe’s 2009 review on climate sensitivity). But in 2015, Kevin Bowman suggested to me that the Fluctuation-Dissipation Theorem might be useful for narrowing uncertainty in climate sensitivity with short-term temperature variability. It just so happens that this is the same theoretical foundation that underlies the Cox et al. (2018) results. Following Bowman’s suggestion, I spent several months looking for a useful relationship but was unable to find one.

Thus, when Cox et al. (2018) was published, I was naturally curious about the specifics of how they arrived at their conclusions both because their results diverged from that of our related study and because they used a particular theoretical underpinning that I had previously found to be ineffectual.

I worked with Martin Stolpe and Ken Caldeira to investigate the Cox et al. (2018) methodology in some detail and to conduct a number of sensitivity tests of their results. We felt that our experiments pointed to some issues with aspects of the study’s methodology, which led us to submit the aforementioned comment to Nature.

In our comment, we raise two primary concerns.

First, we point out that most of the reported 60% reduction in climate sensitivity uncertainty originates not from the constraint itself but from the choice of the baseline that the revised uncertainty range is compared to. Specifically, the large reduction in uncertainty depends on their choice to compare their constrained uncertainty to the broad IPCC ‘likely’ range of 1.5ºC-4.5ºC rather than to the ‘likely’ range of the raw climate models used to inform the analysis. This choice would be justifiable if the climate models sampled the entire uncertainty range for climate sensitivity but this is not the case. The model ensemble happens to start with an uncertainty range that is about 45% smaller than the IPCC-suggested ‘true’ uncertainty range (which incorporates additional information from e.g., paleoclimate studies). Since the model ensemble embodies a smaller uncertainty range than the IPCC range, one could simply take the raw models, calculate the likely range of climate sensitivity using those models, and claim that this calculation alone “reduces” climate sensitivity uncertainty by about 45%. We contend that such a calculation would not tell us anything meaningful about true climate sensitivity. Instead, it would simply tell us that the current suite of climate models don’t adequately represent the full range of climate sensitivity uncertainty.

Thus, even if the other methodological choices of Cox et al. (2018) are accepted as is, close to three-quarters of the reported 60% reduction in climate sensitivity uncertainty (roughly 45 of the 60 percentage points) is attributable to starting from a situation in which the model ensemble samples only a fraction of the full uncertainty range in climate sensitivity.

The second issue that we raise has to do with the theoretical underpinnings of the Cox et al. (2018) constraint. Specifically, the emergent constraint presented by Cox et al. (2018), based on the Fluctuation-Dissipation Theorem, “relates the mean response to impulsive external forcing of a dynamical system to its natural unforced variability” (Leith, 1975).

In this context, climate sensitivity represents the mean response to external forcing, and the measure of variability should be applied to unforced (or internally generated) temperature variability. Cox et al. (2018) state that their constraint is founded on the premise that persistent non-random forcing has been removed:

If trends arising from net radiative forcing and ocean heat uptake can be successfully removed, the net radiative forcing term Q can be approximated by white noise. Under these circumstances, equation (1) … has standard solutions … for the lag-one-year autocorrelation of the temperature.

They suggest that linear detrending with a 55-year moving window may be optimal for the separation of forced trends from variability:

Figure 4a shows the best estimate and 66% confidence limits on ECS as a function of the width of the de-trending window. Our best estimate is relatively insensitive to the chosen window width, but the 66% confidence limits show a greater sensitivity, with the minimum in uncertainty at a window width of about 55 yr (as used in the analysis above). As Extended Data Fig. 3 shows, at this optimum window width the best-fit gradient of the emergent relationship between ECS and Ψ (= 12.1) is also very close to our theory-predicted value of 2 Q2×CO2/σQ (= 12.2). This might be expected if this window length optimally separates forced trend from variability.

Linearly detrending within a moving window is an unconventional way to separate forced from unforced variability and we argue in our comment that it is inadequate for this purpose. (In their reply to our comment Cox et al. agree with this but they contend that mixing forced and unforced variability does not present the problem that we claim it does.)
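To make the detrending issue concrete, here is a toy illustration. Cox et al. (2018) build their constraint on the variability metric Ψ = σ_T/√(−ln α₁), where σ_T and α₁ are the standard deviation and lag-one autocorrelation of detrended annual temperatures. The sketch below, with entirely synthetic data, computes Ψ once with the forced signal removed exactly and once with only the moving-window linear detrending:

```python
import numpy as np

rng = np.random.default_rng(3)

def psi(temp, window=55):
    """Cox et al. (2018) variability metric, sigma_T / sqrt(-ln(alpha_1)),
    computed after linear detrending within a moving window."""
    half = window // 2
    anoms = np.empty_like(temp)
    for i in range(temp.size):
        lo, hi = max(0, i - half), min(temp.size, i + half + 1)
        x = np.arange(lo, hi)
        fit = np.polyfit(x, temp[lo:hi], 1)      # local linear trend
        anoms[i] = temp[i] - np.polyval(fit, i)  # residual at the window center
    sigma = anoms.std()
    alpha1 = np.corrcoef(anoms[:-1], anoms[1:])[0, 1]
    return sigma / np.sqrt(-np.log(alpha1))

# Toy annual temperature series: AR(1) "unforced" variability plus an
# accelerating forced warming signal (all numbers are invented)
n = 150
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.5 * noise[t - 1] + rng.normal(0, 0.1)
forced = 0.0001 * np.arange(n) ** 2   # nonlinear forced trend
temp = forced + noise

psi_unforced = psi(noise)   # forced signal removed exactly before detrending
psi_window = psi(temp)      # forced signal handled by window detrending alone
print(f"psi, forced removed: {psi_unforced:.3f}; window detrend only: {psi_window:.3f}")
```

Because the windowed linear fit cannot remove the nonlinear part of the forced signal, some of it leaks into the estimated “variability” and contaminates Ψ; this is the sense in which we argue the method mixes forced and unforced variability.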

Using more conventional methods to remove forced variability, we find that the Cox et al. (2018) constraint produces central estimates of climate sensitivity that lack a consistent sign shift relative to their starting value (i.e., it is not clear if the constraint shifts the best estimate of climate sensitivity in the positive or negative direction).

We also find that the more complete removal of forced variability produces constrained confidence intervals on climate sensitivity that range from being no smaller than the raw model confidence intervals used to inform the analysis (Fig. 1d and 1e) to being about 11% smaller than the raw model range (Fig. 1f). This is compared to the 60% reduction in the size of the confidence interval reported in Cox et al., (2018).

Brown_et_al_BCA_Cox_et_al_Fig_2

Figure 1 | Comparison of the central estimate and ‘likely’ range (>66%) of equilibrium climate sensitivity across a variety of methodologies and four observational datasets. Average changes (across the four observational datasets) in the central estimates of climate sensitivity are reported within the dashed-line range, average changes in uncertainty ranges (confidence intervals) are reported at the bottom of the figure, and r² values of the relationship are reported at the top of the figure. Results corresponding to observations from GISTEMP, HadCRUT4, NOAA and Berkeley Earth are shown in black, red, blue and green, respectively. Changes in uncertainty are reported relative to the raw model range (±0.95 standard deviations across the climate sensitivity range of CMIP5 models) used to inform the analysis (b) rather than relative to the broader IPCC range used as the baseline in Cox et al. (2018) (a).

 

Overall, we argue that historical temperature variability provides, at best, a weak constraint on climate sensitivity and that it is not clear if it suggests a higher or lower central estimate of climate sensitivity relative to the canonical 3ºC value.

For more details please see the original Cox et al. (2018) paper, our full comment and the reply to our comment by Cox et al.


Signal, Noise and Global Warming’s Influence on Weather

Human-caused climate change from increasing greenhouse gasses is expected to influence many weather phenomena, including extreme events. However, there is not yet a detectable long-term change in many of these extreme events, as was recently emphasized by Roger Pielke Jr. in The Rightful Place of Science: Disasters and Climate Change.

This means that we have a situation where there is no detectable long-term change in e.g., tropical cyclone heavy rainfall and yet we have studies that conclude that human-caused climate change made Hurricane Harvey’s rainfall 15% heavier than it would have been otherwise. This is not actually a contradiction and the video below shows why.

Posted in Climate Change | 2 Comments

The leverage of the current moment on the long-term trajectory of the climate

Below is a talk I gave at the “Bay Area Regional Climate Emergency Town Hall” in Berkeley, CA on August 24th, 2018 titled “The leverage of the current moment on the long-term trajectory of the climate”.


Posted in Uncategorized | Leave a comment

Contemporary Global Warming placed in geological context

Below is a rough comparison of contemporary global warming and estimates of past temperature change. This is a visualization in the vein of this plot on Wikipedia. Uncertainties increase substantially as estimates go back further in time. Time resolution also decreases further back in time, so much of the high-frequency climate variability seen more recently would presumably also exist in the more distant past but is not detectable. Sources of data are below.
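The effect of coarse time resolution can be sketched with a short simulation, assuming a made-up temperature series (a slow trend plus large annual noise): block-averaging to 200-year resolution, as a crude stand-in for a low-resolution proxy record, strongly damps the high-frequency variability while preserving the long-term change.

```python
import numpy as np

rng = np.random.default_rng(1)

n_years = 10_000
t = np.arange(n_years)
# Made-up series: a slow warming trend plus large year-to-year noise
temps = 0.0002 * t + rng.normal(0.0, 0.5, n_years)

# Crude stand-in for a proxy record that only resolves 200-year averages
res = 200
coarse = temps.reshape(-1, res).mean(axis=1)

def detrended_std(x):
    """Standard deviation after removing a linear trend."""
    tt = np.arange(len(x))
    return (x - np.polyval(np.polyfit(tt, x, 1), tt)).std()

print(f"annual-resolution variability: {detrended_std(temps):.3f}")
print(f"200-yr-resolution variability: {detrended_std(coarse):.3f}")
```

Averaging over independent years shrinks the noise roughly by a factor of the square root of the block length, so decade-to-century spikes that are obvious in the modern record would largely vanish at the resolution of the older reconstructions.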

Hansen, J.E., and M. Sato (2012) Paleoclimate implications for human-made climate change. In Climate Change: Inferences from Paleoclimate and Regional Aspects. A. Berger, F. Mesinger, and D. Šijački, Eds. Springer, 21-48, doi:10.1007/978-3-7091-0973-1_2.

Hansen, J., R. Ruedy, M. Sato, and K. Lo (2010) Global surface temperature change, Rev. Geophys., 48, RG4004, doi:10.1029/2010RG000345.

Mann, M. E, Z. Zhang, M. K. Hughes, R. S. Bradley, S. K. Miller, S. Rutherford, Fenbiao Ni (2008) Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia, PNAS, 105 (36) 13252-13257; doi: 10.1073/pnas.0805721105.

Marcott, S. A., J. D. Shakun, P. U. Clark, A. C. Mix (2013) A Reconstruction of Regional and Global Temperature for the Past 11,300 Years, Science, 339, 6124, 1198-1201, doi:10.1126/science.1228026.

Lisiecki, L. E., and M. E. Raymo (2005), A Pliocene‐Pleistocene stack of 57 globally distributed benthic δ18O records, Paleoceanography, 20, PA1003, doi:10.1029/2004PA001071.

Shakun, J. D., P. U. Clark, F. He, S. A. Marcott, A. C. Mix, Z. Liu, B. Otto-Bliesner, A. Schmittner & E. Bard (2012) Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation, Nature 484, 49–54, doi:10.1038/nature10915.

Winkelmann, R., A. Levermann, A. Ridgwell, K. Caldeira (2015) Combustion of available fossil fuel resources sufficient to eliminate the Antarctic Ice Sheet, Science Advances, 1, 8, e1500589, doi:10.1126/sciadv.1500589.

Zachos, J., M. Pagani, L. Sloan, E. Thomas, and K. Billups (2001) Trends, rhythms, and aberrations in global climate 65 Ma to present, Science, 292, 686-693, doi:10.1126/science.1059412.

Posted in Climate Change | 7 Comments