How much warming can we expect this century?

This article is also published on Judith Curry’s blog, Climate Etc. Dr. Curry introduced the article as “A comprehensive explainer of climate sensitivity to CO2”. Watts Up With That then reposted the article from Climate Etc. I wanted to also publish it here, since the versions at Climate Etc and Watts Up With That have problems with the links to and from the footnotes. Those versions are also missing some important background and text coloring. So read it here, and if you want to make a comment, consider doing that in one of the other places, where more people are likely to see it.

A big Thank you to Dr. Curry for publishing! A big Thank you also to Dr. Nic Lewis for reviewing. The article is much better because of it!

Short summary

According to the Intergovernmental Panel on Climate Change (IPCC), the atmosphere’s climate sensitivity is likely between 2.5 and 4.0°C. Simply put, this means that (in the very long term) Earth’s temperature will rise between 2.5 and 4.0°C when the amount of CO2 in the atmosphere doubles.

A 2020 study (Sherwood20) greatly influenced how the IPCC calculated the climate sensitivity. Sherwood20 has been “extremely influential, including in informing the assessment of equilibrium climate sensitivity (ECS) in the 2021 IPCC Sixth Assessment Scientific Report (AR6); it was cited over twenty times in the relevant AR6 chapter”, according to Nic Lewis. A Comment in Nature confirmed this view.1)

Nic Lewis took a closer look at this study, and in September 2022, he published his own study (Lewis22) that criticizes Sherwood20. By correcting errors and using more recent data, including from AR6, Lewis22 found that the climate sensitivity may be about 30% lower than what Sherwood20 had found.

If we know what the climate sensitivity is, and if we also know approximately the amount of greenhouse gases that will be emitted going forward, then the amount of future warming that’s caused by greenhouse gases can also be estimated.

In terms of future emissions, a 2022 study (Pielke22) found that something called RCP3.4 is the most plausible emissions scenario. Traditionally, another scenario (RCP8.5) has been used as a business-as-usual scenario, but it is now widely regarded as an extremely unlikely scenario, with unrealistically high emissions.

Assuming that the climate sensitivity from Lewis22 is correct and that RCP3.4 is the most appropriate emissions scenario, then we find that global temperatures will rise by less than 1°C from 2023 to 2100 (not accounting for natural variability).

How much the Earth’s surface air temperature will rise this century depends, among other things, on how sensitive the atmosphere is to greenhouse gases such as CO2, the amount of greenhouse gases that are emitted, and natural variations. It’s hard to predict natural variations, so the focus here will be on climate sensitivity and greenhouse gas emissions (in particular CO2).

Climate sensitivity

Climate sensitivity is the amount of warming that can be expected in the Earth’s surface air temperature if the amount of CO2 in the atmosphere doubles. So if the climate sensitivity is 3°C, and the amount of CO2 in the atmosphere quickly doubles and stays at that level, then the Earth’s surface air temperature will – in the long term – rise by 3°C.2) “In the long term” here means more than 1,000 years, but most of the temperature increase happens relatively fast.

The exact value for the climate sensitivity isn’t known, and the uncertainty range has traditionally been very large. In 1979, the so-called Charney report found the climate sensitivity to be between 1.5 and 4.5°C. 34 years later, in 2013, the IPCC reached the exact same conclusion – that it’s likely (66% probability) that the climate sensitivity is between 1.5 and 4.5°C. However, the uncertainty in the Charney report may have been underestimated. So even though the official climate sensitivity estimate didn’t change, it wouldn’t be correct to say that no progress was made during those 34 years.

In climate science, there are several different types of climate sensitivity. I won’t go into detail about the various types just yet, but I’ll have something to say about some of them later in the article – when it becomes relevant. The type of climate sensitivity referred to above – in the Charney report and by the IPCC – is called equilibrium climate sensitivity (ECS).

Why so much uncertainty? (Feedback effects)

There’s broad agreement that without so-called feedback effects, the equilibrium climate sensitivity (ECS) would be close to 1.2°C,3) which is quite low and not particularly dangerous. The reason for the great uncertainty comes from how feedback effects affect the temperature.

A feedback effect can be either positive or negative. A positive feedback effect amplifies warming, contributing to a higher climate sensitivity. A negative feedback dampens warming and contributes to a lower climate sensitivity.

The strengths of feedback effects can vary based on the atmosphere’s temperature and composition, and how much of the Earth is covered by ice and vegetation, among other things. Earth’s climate sensitivity is thus not a constant. And for this reason, the equilibrium climate sensitivity, ECS, has been defined as the long-term increase in temperature as a result of a doubling of CO2 from pre-industrial levels (which was about 284 parts per million (ppm)).

Atmospheric CO2 concentration currently stands at approximately 420 ppm, which means there’s been a near 50% increase since the second half of the 19th century.4) Since the concentration of CO2 hasn’t yet doubled (and also since the long term is a long way away), the temperature has so far risen by less than the climate sensitivity. To be more precise, the temperature increase has been approximately 1.2°C over the past 150 years.
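A quick arithmetic check of the “near 50%” figure – a minimal Python sketch using only the 284 and 420 ppm values mentioned above:

```python
pre_industrial_co2 = 284.0  # ppm (pre-industrial level, as stated above)
current_co2 = 420.0         # ppm (approximate current level, as stated above)

increase = current_co2 / pre_industrial_co2 - 1
print(f"Increase since pre-industrial: {increase:.0%}")  # ~48%, i.e. "near 50%"
```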

Types of feedback mechanisms

There are several different feedback mechanisms. Here are some of the most important ones:

  • Water vapor. Increased amounts of greenhouse gases in the atmosphere cause higher temperatures. A higher temperature then allows the atmosphere to hold more water vapor, and since water vapor is a strong greenhouse gas, the increased amount of water vapor in the atmosphere causes the temperature to rise even more.5) The feedback effect from water vapor is therefore said to be positive.
  • Lapse rate is how the temperature changes with altitude. The higher up you go in the lower atmosphere (troposphere), the colder it gets – on average about 6.5°C colder per kilometer up. So the lapse rate is said to be 6.5°C per kilometer in the lower atmosphere.
    The feedback from lapse rate is related to the feedback from water vapor, and the two are often considered together. More water vapor causes the temperature to rise more at higher altitudes than closer to the Earth’s surface. This is because the air is generally drier higher up, so at those altitudes the increased amount of water vapor has a larger effect on the temperature. The higher temperature at those altitudes then contributes to more radiation to space, which cools the Earth more. This means that the feedback effect from lapse rate is negative. However, the combined effect of water vapor and lapse rate is positive.
    How temperature changes with altitude in the lower atmosphere. Image found on ScienceDirect (from the book Environmental Management).
  • Clouds. Without clouds, the temperature on Earth would be significantly higher than today, but not all clouds have a net cooling effect. Different types of clouds have a different effect on the temperature. On average, high clouds have a warming effect, while low clouds tend to have a cooling effect. When assessing whether total cloud feedback is positive or negative, one must determine whether clouds in a warmer atmosphere on average will have a greater or lesser warming effect than they do now. There is some uncertainty about this, but according to the IPCC, it’s very likely (over 90%) that the feedback effect from clouds is positive, and that therefore changes in cloud cover as a result of increased temperature will amplify the temperature increase.
  • Surface Albedo Changes. Earth’s surface albedo says how much solar radiation the Earth reflects directly back to space. Currently, it’s around 0.3, which means that the Earth reflects 30% of the incoming solar radiation. The part of the solar radiation that’s reflected does not contribute to warming.
    The Earth’s albedo can change, for example, when a larger or smaller part of the surface is covered by ice and snow. A higher temperature generally leads to less ice cover, which in turn leads to higher temperatures still, since less radiation is reflected (the albedo decreases). The albedo change resulting from changes in ice cover is a positive feedback effect.
    (Changes in albedo due to changes in cloud cover are included in the cloud feedback.)
  • Planck Feedback. A warm object radiates more than a cold object. Or in the case of the Earth: A warm planet radiates more to space than a cold planet. As the Earth warms, it radiates more energy to space, which cools the planet and reduces the rate of warming. The Planck feedback is a strongly negative feedback.6)
    Actually, the Planck feedback is already included in the calculation of how much the temperature would rise in the absence of (other) feedback effects. In this sense, the Planck feedback is different than the other feedbacks, and it may be best not to think of it as an actual feedback effect, but rather as a fundamental property of physical objects. The Planck feedback is sometimes referred to as Planck response or no-feedback response.

Different ways to calculate climate sensitivity

There are several ways to calculate climate sensitivity. We can base it on the historical record of the past 150 years, where we know approximately how temperature, greenhouse gases, aerosols etc have changed (historical evidence). Or we can estimate the strengths of the various known feedback mechanisms and sum them (process evidence). Or it can be calculated based on how much average temperature has changed since the last ice age glaciation or other warm or cold periods in the Earth’s history (paleo evidence). A fourth possibility is to use climate models – large computer programs that attempt to simulate Earth’s past and future climate under different assumptions.

Sherwood20

In 2020, a large study by 25 authors was published, and it combined the first three of the above-mentioned methods. So they did not calculate climate sensitivity from climate models directly, although the study often relies on climate models to substantiate some of its values and assumptions.

The study’s title is An Assessment of Earth’s Climate Sensitivity Using Multiple Lines of Evidence. Steven Sherwood is lead author, and so the study is often referred to as Sherwood et al 2020. To simplify further, I’ll just call it Sherwood20.

Sherwood20 concluded that the climate sensitivity is likely (66% probability) between 2.6 and 3.9°C, with 3.1 degrees as the median value. (It’s equally likely that climate sensitivity is higher (50%) or lower (also 50%) than the median.)

The latest IPCC scientific report (AR6) put great emphasis on Sherwood20, and the IPCC, in turn, concluded that climate sensitivity is likely between 2.5 and 4.0°C, a significant narrowing of their previous uncertainty range.
(Note, however, that Sherwood20 and the IPCC focused on different types of climate sensitivities, so their respective values aren’t directly comparable.7))

Sherwood20 thoroughly examines all factors that they believe affect climate sensitivity and discusses sources of uncertainty.

Process evidence: Climate sensitivity calculated by adding up feedback effects

The feedback effects that Sherwood20 focused on were primarily the five that I listed earlier. Other feedbacks were estimated as having no net effect. To calculate the climate sensitivity based on feedback effects, the first step is to add up the strengths of each individual feedback effect, and then there’s a simple formula to convert from total feedback strength to climate sensitivity.
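For reference, that conversion is the same simple formula that appears in footnote 8: the total feedback strength λ (in W/m2/°C) is the sum of the individual feedback strengths, and

$$ECS = -\frac{\Delta F_{2xCO2}}{\lambda}$$

where ΔF2xCO2 is the climate forcing from a doubling of CO2 (about 4 W/m2).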

The cloud feedback has the largest uncertainty of the various feedback effects. This is true even though the uncertainty has been reduced in recent years.8)

Historical evidence: Climate sensitivity calculated from temperature and other data over the past 150 years

Within some margin of error, we know how the Earth’s surface air temperature has varied over the past 150 years. We also know roughly how the amount of greenhouse gases in the atmosphere has increased – at least since 1958, when the Mauna Loa observatory started measuring atmospheric CO2. But in order to calculate the climate sensitivity to CO2, we also need to know the effect that other drivers of climate change, including aerosols, have had on the temperature and ideally also how the temperature would have changed without human influence. In addition, there’s something called the pattern effect, which, along with aerosols, is what contributes most to the uncertainty in the climate sensitivity when it’s calculated from historical evidence.

  • Aerosols: Translated from Norwegian Wikipedia, aerosols are “small particles of liquid or solid in the atmosphere, but not clouds or raindrops. These can have a natural origin or be human-made. Aerosols can affect the climate in various complex ways by affecting Earth’s radiation balance and cloud formation. Studies suggest that these have been released since the start of the Industrial Revolution and have had a cooling effect.”
    The uncertainty in how aerosols affect the temperature is surprisingly large, but they likely have a net cooling effect. The main reason for the large uncertainty is a lack of knowledge about how aerosols interact with clouds.9) Along with greenhouse gases, certain aerosols are released during the combustion of fossil fuels, but with newer technologies, the release of aerosols from combustion is being reduced.
    If aerosols have a strong cooling effect, it means they’ve counteracted a significant part of the warming from greenhouse gases. If so, the climate sensitivity to CO2 must be relatively high. If the cooling effect from aerosols is smaller, it implies a lower climate sensitivity.
  • The pattern effect: Different geographical regions have experienced different amounts of warming since the 1800s.10) Following some previous work, Sherwood20 assumes that areas that have experienced little warming will eventually “catch up” with areas that have experienced more warming, and that this will lead to cloud feedback becoming more positive. However, this may not necessarily happen this century.11) There are few climate sensitivity studies prior to Sherwood20 that take the pattern effect into account, and there’s considerable uncertainty about its magnitude. As a result, the uncertainty in the climate sensitivity as calculated from historical evidence is significantly larger in Sherwood20 than in the earlier studies.

Paleo evidence: Climate sensitivity estimated from previous warm and cold periods

Sherwood20 used one past cold period and one warm period to calculate the climate sensitivity based on paleo evidence (past climates). They also looked at one additional warm period (PETM – Paleocene-Eocene Thermal Maximum, 56 million years ago), but didn’t use the results from that period when calculating their main results.

Temperature trends for the past 65 million years. Figure from Burke et al 2018. The original image also contained different future projections, but I’ve removed that part of the image. Note that there are 5 different scales on the time axis.

The cold period that Sherwood20 looked at was the coldest period in the last ice age glaciation (Last Glacial Maximum, LGM), about 20,000 years ago (20 “kyr Before Present” in the graph), when, according to the study, Earth’s temperature was 5±1°C below pre-industrial temperature (so 6±1°C colder than today).

The warm period they looked at was the mid-Pliocene Warm Period (mPWP), about 3 million years ago (3 “Myr Before Present” in the graph), when the temperature was 3±1°C higher than pre-industrial (2±1°C warmer than today).

It may not be obvious that it’s possible to calculate the atmosphere’s climate sensitivity to CO2 based on what the temperature was in previous warm or cold periods. The reason it is possible is that we can also talk about the atmosphere’s climate sensitivity in a more general sense, without specifically taking CO2 into consideration.12) I’ll try to explain.

If the Earth receives more energy than it radiates back to space, the Earth’s temperature will rise. If climate sensitivity is high, the temperature will rise by a relatively large amount. If climate sensitivity is low, the temperature will rise less.

Regardless of what non-temperature factor causes a change in the balance between incoming and outgoing energy – whether it’s due to more greenhouse gases or a stronger sun, or to ice sheets reflecting more sunlight – the result is (approximately) the same. What matters (most) is the size of the change, not what causes it.

So if we know how much warmer or colder Earth was in an earlier time period, and if we also know how much more or less energy the Earth received at that time compared to now, then it should be possible to calculate how sensitive the atmosphere is to a change in incoming energy.

When we know this general climate sensitivity, and when we also know what CO2 does to the atmosphere’s energy balance, then it’s possible to calculate the atmosphere’s climate sensitivity to CO2.

When it comes to what CO2 does to the atmosphere’s energy balance, it’s been found that a doubling of CO2 reduces radiation to space by about 4 watts per square meter (W/m2) over the entire atmosphere.13) Less radiation to space means that more energy stays in the atmosphere, raising temperatures until outgoing radiation again balances incoming radiation.
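To illustrate how this per-doubling figure translates into a forcing for an arbitrary change in CO2 concentration, here is a minimal Python sketch. It assumes the logarithmic scaling that is used later in the article (in the mPWP section); the function name and the choice of 3.93 W/m2 per doubling (Lewis22’s value) are just illustrative.

```python
import math

def co2_forcing(co2_ppm, co2_ref_ppm=284.0, f_2xco2=3.93):
    """Approximate forcing (W/m2) of co2_ppm relative to co2_ref_ppm,
    assuming forcing scales with the logarithm of concentration."""
    return f_2xco2 * math.log(co2_ppm / co2_ref_ppm) / math.log(2.0)

print(co2_forcing(2 * 284.0))  # a doubling gives ~3.9 W/m2 by construction
print(co2_forcing(420.0))      # today's ~420 ppm vs pre-industrial: ~2.2 W/m2
```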

All of this means that, in theory, even if the amount of CO2 in the atmosphere had been the same at some earlier time as it is today – with the temperature nevertheless quite different because some other factor changed the energy balance – it would still be possible to calculate the atmosphere’s climate sensitivity to CO2, because we know approximately what a doubling of CO2 does to the atmosphere’s energy balance.

When scientists estimate the climate sensitivity from past warm or cold periods, they’re looking at very long time spans. This means that all the slow feedbacks have had time to take effect, and we can then find the “real” long-term climate sensitivity. Based on what I’ve written earlier, you would probably think this is the equilibrium climate sensitivity, ECS. However, in the definition of ECS, the Earth’s ice cover is kept constant, so ECS is in a way a theoretical – not a real – climate sensitivity. The real long-term climate sensitivity is called Earth system sensitivity, or ESS for short.

The climate sensitivity that Sherwood20 calculated is called effective climate sensitivity (S) and is an approximation of ECS. ECS is likely higher than S, and ESS is likely significantly higher than ECS (so S < ECS < ESS).

Even though ESS is the true very long-term climate sensitivity, S is actually the most relevant climate sensitivity for us, since we’re most interested in what will happen in the relatively near term – the next century or two. Sherwood20 writes:

Crucially, effective sensitivity (or other measures based on behavior within a century or two of applying the forcing) is more relevant to the time scales of greatest interest (i.e., the next century) than is equilibrium sensitivity[.]

As we’ve seen, Sherwood20 combined climate sensitivities from three different lines of evidence (meaning that they combined climate sensitivities that had been calculated in three different/independent ways). For historical and process evidence, Sherwood20 calculated effective climate sensitivity (S). But the type of climate sensitivity that is most easily calculated from paleo evidence is Earth system sensitivity (ESS). So to be able to directly compare, and then combine, the climate sensitivities from all three lines of evidence, they needed to convert from ESS to S.

According to Sherwood20, ESS was around 50% higher than ECS during the mPWP warm period. During the much warmer PETM, Sherwood20 assumed that ESS and ECS were approximately the same since there weren’t any large permanent ice-sheets during that warm period – and hence no significant changes in ice-cover.

For the more recent LGM, however, it was actually possible to calculate ECS directly (instead of ESS), by treating slow feedbacks as forcings rather than feedbacks.14)

Naturally, there’s significant uncertainty involved when calculating climate sensitivity based on previous warm and cold periods (paleo evidence). We don’t know what the Earth’s exact average temperature was, and we also don’t know exactly how much more or less energy the Earth received at that time compared to today (or surrounding time periods). Still, according to Sherwood20, the uncertainty in the climate sensitivity as calculated from paleo evidence isn’t necessarily greater than for the other lines of evidence.

Sherwood20’s conclusion

According to Sherwood20, “there is substantial overlap between the lines of evidence” used to calculate climate sensitivity, and the “maximum likelihood values are all fairly close”, as can be seen in graph (b) below. (However, the median value for historical evidence is surprisingly high, at 5.82°C.)

This is Figure 20 from Sherwood20 and shows their main results. The figure shows how likely different climate sensitivities (S) are for each of their three lines of evidence – in addition to the combined likelihood (black curve). The higher the curve goes, the greater the likelihood. We see that the most likely value is just under 3°C, but the median value is 3.1°C.

Gavin Schmidt, one of the co-authors of Sherwood20, has also written a summary of the study on RealClimate.

Critique of Sherwood20

Nic Lewis is a British mathematician and physicist who entered the field of climate science after being inspired by Stephen McIntyre. McIntyre had criticized the perhaps most important study behind the hockey stick graph used in IPCC’s third assessment report from 2001. (See this earlier post I wrote, which talks about the hockey stick controversy, among other things.)

So Nic Lewis isn’t quite like other climate scientists. You may call him a “skeptical” climate scientist. Not skeptical in the sense that he doesn’t believe CO2 is a greenhouse gas, but skeptical at least in the sense that he doesn’t believe there’s a major climate emergency. In general, Lewis’ research points to a lower climate sensitivity than IPCC’s estimates.

Here’s a 2019 talk by Nic Lewis on the topic of climate sensitivity. I highly recommend it:

Lewis has published a total of 10 studies related to climate sensitivity, and Sherwood20 referenced studies where Lewis was the main (or only) author 16 times. In September 2022, Lewis published a study, Objectively Combining Climate Sensitivity Evidence, where he discusses and corrects Sherwood20. I will refer to this new study as Lewis22.

In an article that summarizes Lewis22, Lewis argues that Sherwood20’s methodology of combining different lines of evidence to calculate the climate sensitivity is sound:

This is a strong scientific approach, in that it utilizes a broad base of evidence and avoids direct dependence on [Global Climate Model] climate sensitivities. Such an approach should be able to provide more precise and reliable estimation of climate sensitivity than that in previous IPCC assessment reports.

Lewis writes in the article that since 2015, he has published several studies that describe how to combine “multiple lines of evidence regarding climate sensitivity using an Objective Bayesian statistical approach”. Although Sherwood20 was well aware of Nic Lewis’ studies, Sherwood20 had chosen a (simpler) subjective method instead. According to Lewis, the subjective method “may produce uncertainty ranges that poorly match confidence intervals”. Lewis therefore decided to replicate Sherwood20 using the objective method. He also wanted to check Sherwood20’s estimates and data.

The authors of Sherwood20 had, however, made a deliberate choice to use the subjective method. In Schmidt’s article on RealClimate, we can see that Sherwood20 thought the subjective method was more appropriate:

Attempts to avoid subjectivity (so-called ‘objective’ Bayesian approaches) end up with unjustifiable priors (things that no-one would have suggested before the calculation) whose mathematical properties are more important than their realism.

Using the objective method instead of the subjective one, and also using an appropriate likelihood estimation method,15) actually resulted in a slightly higher climate sensitivity. The median climate sensitivity increased from 3.10 to 3.23°C. Lewis comments:

As it happens, neither the use of a Subjective Bayesian method nor the flawed likelihood estimation led to significant bias in Sherwood20’s estimate of [the climate sensitivity] S when all lines of evidence were combined. Nevertheless, for there to be confidence in the results obtained, sound statistical methods that can be expected to produce reliable parameter estimation must be used.

However, after correcting some other errors and using newer data, including from IPCC’s latest scientific report from 2021 (AR6), the most likely value for the effective climate sensitivity fell to 2.25°C. By also using data that Lewis considered as better justified (not newer), the climate sensitivity was revised down by another 0.09°C, to 2.16°C.

The data changes made by Lewis22 are explained partly in the study and partly in an appendix to the study (Supporting Information, S5). In addition to discussing data values that he changed, Lewis also discusses in the appendix some data values that he conservatively chose not to change – even though he thought Sherwood20’s values weren’t optimal. So a case could actually be made for an even lower effective climate sensitivity than the 2.16°C that Lewis found in his study.

The figure below is taken from Lewis’ summary of Lewis22 and shows Lewis’ results compared to Sherwood20’s:

In (a), (b), and (d), dashed lines represent results from Sherwood20, while solid lines are from Lewis22. In (b), we see that the three lines of evidence for calculating the climate sensitivity coincide nicely for Lewis22, while the variation is slightly larger in Sherwood20. Additionally, the uncertainty is lower (the curves are narrower) in Lewis22, especially for historical evidence (data from the past 150 years). PETM is the warm period that Sherwood20 didn’t include in the calculation of the combined climate sensitivity (PETM = Paleocene-Eocene Thermal Maximum, about 56 million years ago, when temperatures were about 12°C higher than now).

The details: Why Lewis22 found a lower climate sensitivity than Sherwood20

In this section, I’ll try to explain in more detail why Lewis22 found a lower climate sensitivity than Sherwood20. This is the most technical part of this article, and if you’re not interested in the details, you may want to skip ahead to the section on future emissions.

Values with blue text are the same as in IPCC’s latest assessment report (AR6). Values with yellow background in Lewis22 are conservative choices.16) Less conservative choices would have resulted in a lower climate sensitivity. The data changes in Lewis22 are discussed under Review and revision of S20 data-variable assumptions in Lewis22 and in section S5 of Supporting Information.

Historical evidence (data from the past 150 years)

$$\lambda_{hist} = \frac{\Delta N - \Delta F}{\Delta T}$$

$$S = -\gamma\,\frac{\Delta F_{2xCO2}}{\lambda_{hist} + \Delta\lambda}$$

Quantity | Sherwood20 | Lewis22
ΔFCO2 | 1.731 | 1.724
ΔFOther well-mixed greenhouse gases | 0.969 | 1.015
ΔFOzone | 0.298 | 0.400
ΔFLand use | -0.106 | -0.150
ΔFStratospheric water vapor | 0.064 | 0.041
ΔFBlack carbon on snow and ice | 0.020 | 0.109
ΔFContrails and induced cirrus | 0.048 | As Sherwood20
ΔFSolar | 0.017 | 0.019
ΔFVolcanic | -0.113 | 0.044
ΔFAerosols | -1.104 | -0.860
ΔF (sum, difference in forcing, W/m2) | 1.824 | 2.390
ΔN (W/m2) | 0.600 ± 0.183 | As Sherwood20
ΔT (or ΔTGMAT, °C) | 1.03 ± 0.085 | 0.94 ± 0.095
λhist (W/m2/°C) | -1.188 | -1.915
𝛾 (scaling factor) | Omitted (1.00) | 0.86 ± 0.09
ΔF2xCO2 (W/m2) | 4.00 ± 0.30 | 3.93 ± 0.30
Δλ (pattern effect, W/m2/°C) | 0.500 ± 0.305 | 0.350 ± 0.305
Climate sensitivity, S (°C) | 5.82 | 2.16

ΔF, ΔN, and ΔTGMAT refer to differences between 1861-1880 and 2006-2018. ΔF is the difference in climate forcing. (Climate forcing, or radiative forcing, is something that forces the Earth’s energy balance to change, e.g. a stronger/weaker sun or more/less greenhouse gases in the atmosphere.) ΔN is the change in radiative imbalance at the top of the atmosphere, measured in W/m2. A positive ΔN means that the radiative imbalance is greater now than at the end of the 19th century, and that the Earth is receiving more net energy now than then.

The exact ΔF values that Lewis22 uses can’t be found in IPCC AR6. The reason for this is that Sherwood20 and Lewis22 look at the period from 1861-1880 to 2006-2018, while the IPCC has been more interested in the period 1750 to 2019. Fortunately, though, IPCC has also included forcing values for 1850 and also for several years after 2000, so Lewis has been able to calculate ΔF values with good accuracy (derived from official IPCC values, see table AIII.3 here).

GMAT (Global Mean near-surface Air Temperature) is average air temperature above ground. GMST (Global Mean Surface Temperature) is the same but uses sea surface temperature instead of air temperature over the ocean. Sherwood20 converted ΔTGMST (0.94°C) to ΔTGMAT (1.03°C) based on results from climate models, which suggest that GMAT is higher than GMST. Lewis, however, points out that a higher GMAT than GMST hasn’t been observed in the real world, and that, according to the IPCC AR6, the best estimate median difference between GMST and GMAT is 0. Lewis22 therefore uses a value for ΔTGMAT that’s equal to ΔTGMST. (See Supporting Information, 5.2.1.)

When estimating effective climate sensitivity (S) from climate feedback (λ), a scaling factor 𝛾 (gamma) is needed (for historical and process evidence). This is because Sherwood20 used linear regression to estimate S based on the ratio of ΔN to ΔT, a relationship that isn’t strictly linear. The reason it’s not linear is that, according to most climate models, climate feedback (λ) weakens during the first decade following a sudden increase in CO2. (That λ weakens means it gets closer to 0 (less negative), which means that climate sensitivity, S, increases.)

[Sherwood20] recognize this issue, conceding a similar overestimation of S, but neglect it, asserting incorrectly that it only affects feedback estimates from [Global Climate Models]. This misconception results in [Sherwood20]’s estimates of S from Process and Historical evidence being biased high.

Lewis22 used numbers from the two most recent generations of climate models (CMIP5 and CMIP6) to determine that 𝛾 = 0.86.

More technically, 𝛾 is the ratio of the regression-based estimate of the CO2 doubling forcing, $\Delta F_{2xCO2}^{regress}$, to ΔF2xCO2. You can read more about this in Lewis22 under Climate Sensitivity Measures, F2xCO2 and its scaling when using Eq. (4) and Supporting Information (S1).

The reason for the relatively large change in aerosol forcing (ΔFAerosols) is quite elaborate and advanced, so for that I’ll have to refer you to the Supporting Information (5.2.3, starting from the third paragraph).

The change Lewis22 made for the pattern effect (Δλ) is in large part done because most datasets for sea surface temperature point to the so-called unforced component of the pattern effect (having to do with natural variation, see footnote 8) being very small. See Supporting Information, 5.2.4.
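To make the two formulas above concrete, here is a minimal Python sketch that plugs the central values from the table into them (the function and variable names are mine). It only reproduces the central estimates – the actual studies propagate the uncertainties as probability distributions – and small differences from the table are due to rounding.

```python
def s_historical(dN, dF, dT, dF_2xco2, pattern_dlambda, gamma=1.0):
    """Effective climate sensitivity S from historical evidence (central values only)."""
    lambda_hist = (dN - dF) / dT  # W/m2/degC
    return -gamma * dF_2xco2 / (lambda_hist + pattern_dlambda)

# Sherwood20 column of the table
print(s_historical(dN=0.600, dF=1.824, dT=1.03,
                   dF_2xco2=4.00, pattern_dlambda=0.500))              # ~5.8 degC

# Lewis22 column of the table
print(s_historical(dN=0.600, dF=2.390, dT=0.94,
                   dF_2xco2=3.93, pattern_dlambda=0.350, gamma=0.86))  # ~2.2 degC
```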

Process evidence (Adding up feedback effects):

$$S = -\gamma\,\frac{\Delta F_{2xCO2}}{\lambda}$$

Quantity | Sherwood20 | Lewis22
λWater vapor + lapse rate | 1.15 ± 0.15 | As Sherwood20
λCloud | 0.45 ± 0.33 | 0.27 ± 0.33
λAlbedo | 0.30 ± 0.15 | As Sherwood20
λPlanck | -3.20 ± 0.10 | -3.25 ± 0.10
λOther | 0.00 ± 0.18 | As Sherwood20
λ (sum, feedback effects, W/m2/°C) | -1.30 ± 0.44 | -1.53 ± 0.44
𝛾 (scaling factor) | Omitted (1.00) | 0.86 ± 0.09
ΔF2xCO2 (W/m2) | 4.00 ± 0.30 | 3.93 ± 0.30
Climate sensitivity, S (°C) | 3.08 | 2.21

Lewis adjusted the cloud feedback (λCloud) down based on data from Myers et al 2021 (a more recent study than Sherwood20), which found a lower value for low-cloud feedback over the ocean (0-60° from the equator). According to Myers et al 2021, the low-cloud feedback strength is 0.19 W/m2/°C, while Sherwood20 had used 0.37. The difference of 0.18 is how much the total cloud feedback strength was adjusted down in Lewis22. See Supporting Information (5.1.3) for more details.

According to physical expectation (calculated from a formula) and also the latest climate models (CMIP6), the Planck feedback (λPlanck) is -3.3 W/m2/°C. Sherwood20 acknowledged that the physical expectation for the Planck feedback is -3.3, but they put more weight on the previous generation of climate models (CMIP5) and used -3.2 as the value for the Planck feedback. Lewis22 adjusted the Planck feedback halfway from Sherwood20’s estimate towards the value from physical expectation and CMIP6. See Supporting Information (5.1.2).

As a curiosity, the strength of the albedo feedback here has the same numerical value as the Earth’s albedo, namely 0.30. That’s merely a coincidence.
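As with the historical evidence, the table values can be checked by plugging them into the formula above. A minimal Python sketch (central values only; the names are mine):

```python
def s_process(feedbacks, dF_2xco2, gamma=1.0):
    """Effective climate sensitivity S from summed feedback strengths (central values only)."""
    total_lambda = sum(feedbacks.values())  # W/m2/degC
    return -gamma * dF_2xco2 / total_lambda

sherwood20 = {"water_vapor_and_lapse_rate": 1.15, "cloud": 0.45,
              "albedo": 0.30, "planck": -3.20, "other": 0.00}
lewis22 = {"water_vapor_and_lapse_rate": 1.15, "cloud": 0.27,
           "albedo": 0.30, "planck": -3.25, "other": 0.00}

print(s_process(sherwood20, dF_2xco2=4.00))           # ~3.1 degC
print(s_process(lewis22, dF_2xco2=3.93, gamma=0.86))  # ~2.2 degC
```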

Paleo evidence (past cold and warm periods)

1. The coldest period during the last ice age glaciation (Last Glacial Maximum, LGM)

$$\lambda = -(1 + \zeta)\left(\frac{\Delta F}{\Delta T} + \frac{\alpha}{2}\Delta T\right)$$

$$S = -\frac{\Delta F_{2xCO2}}{\lambda}$$

Quantity | Sherwood20 | Lewis22
ζ (how much higher ECS is than S) | 0.06 ± 0.20 | 0.135 ± 0.10
ΔFCO2 | -0.57 × ΔF2xCO2 = -2.28 | -0.57 × ΔF2xCO2 = -2.24
ΔFCH4 | -0.57 | As Sherwood20
ΔFN2O | -0.28 | As Sherwood20
ΔFLand ice and sea level | -3.20 | -3.72
ΔFVegetation | -1.10 | As Sherwood20
ΔFDust (aerosol) | -1.00 | As Sherwood20
ΔF (difference in forcing, W/m2) | -8.43 ± 2.00 | -8.91 ± 2.00
ΔT (difference in temperature, °C) | -5.0 ± 1.00 | -4.5 ± 1.00
α (state dependence) | 0.10 ± 0.10 | As Sherwood20
λ (W/m2/°C) | -1.522 | -1.992
ΔF2xCO2 (W/m2) | 4.00 ± 0.30 | 3.93 ± 0.30
Climate sensitivity, S (°C) | 2.63 | 1.97

Sherwood20 calculated ζ (zeta; how much higher equilibrium climate sensitivity, ECS, is than the effective climate sensitivity, S) by looking at abrupt 4xCO2 simulations – computer simulations where the atmosphere’s CO2 level is instantaneously quadrupled. Sherwood20 then divided the resulting climate forcing (ΔF4xCO2) by 2 to find the climate forcing for a doubling of CO2 (ΔF2xCO2). Lewis22 notes that the scaling factor of 2 “while popular, is difficult to justify when the actual [scaling factor] has been estimated with reasonable precision to be 2.10”. However, Lewis did not use this method to calculate ζ – instead, he extracted the ζ value (0.135) directly from the results of climate models (or, to be more precise, from long-term simulations by climate models of warming after CO2 concentration was doubled or quadrupled, finding the same value in both cases). More details can be found under Climate Sensitivity Measures in Lewis22.

ΔF is the difference in climate forcing between the coldest period of the last ice age glaciation and pre-industrial times. ΔT is the temperature difference between these periods.

Sherwood20’s ΔT estimate was 5.0°C. However, the mean ΔT value for the studies that Sherwood20 based their estimate on was only 4.2°C (after, where necessary, adjusting values given in the studies to fairly reflect an observational (proxy-based) estimate of the temperature (GMAT) change). Lewis22 therefore adjusted Sherwood20’s ΔT estimate towards that value, from 5.0 to 4.5°C. See Supporting Information, 5.3.2.

The reason for Lewis22’s revision of ΔFLand ice and sea level was that Sherwood20 had omitted albedo changes resulting from lower sea levels. (The sea level was approximately 125 meters lower during the LGM than now, so Earth’s land surface was larger during the LGM than now. Land reflects more solar radiation than water, so the Earth’s albedo might have been higher during the LGM than what Sherwood20 assumed.) See Supporting Information, 5.3.2.

α (alpha) says something about how climate sensitivity varies based on the state of Earth’s climate system. What we’re most interested in is the climate sensitivity for the current and near-future states of the climate system. Since climate sensitivity may be different for warm periods than cold periods (possibly higher in warm periods), we need to convert the climate sensitivity for any past warm or cold period to the current climate sensitivity. The α parameter is included in an attempt to translate the climate sensitivity for the LGM cold period into the current climate sensitivity.

In contrast to Sherwood20’s assumption about the state dependence, Lewis22 writes that a 2021 study by Zhu and Poulsen “found that ocean feedback caused 25% higher LGM-estimated ECS.” This would bring the LGM climate sensitivity closer to the current climate sensitivity. For this reason (and one other) Lewis thought Sherwood20’s estimate for α was questionable. Still, he retained it. See Supporting Information, 5.3.2 (last paragraph).
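For completeness, here is a minimal Python sketch that plugs the central LGM values from the table into the two formulas above (uncertainties are ignored; the names are mine):

```python
def s_lgm(dF, dT, zeta, alpha, dF_2xco2):
    """Effective climate sensitivity S from the LGM (central values only).
    dF and dT are negative, since the LGM was colder and had lower forcing."""
    lam = -(1.0 + zeta) * (dF / dT + 0.5 * alpha * dT)
    return -dF_2xco2 / lam

print(s_lgm(dF=-8.43, dT=-5.0, zeta=0.06,  alpha=0.10, dF_2xco2=4.00))  # ~2.6 degC
print(s_lgm(dF=-8.91, dT=-4.5, zeta=0.135, alpha=0.10, dF_2xco2=3.93))  # ~2.0 degC
```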

2. Mid-Pliocene Warm Period (mPWP)

$$\Delta F_{CO2} = \frac{\ln\left(\frac{CO2}{284}\right)}{\ln(2)}\,\Delta F_{2xCO2}$$

$$\lambda = -(1 + \zeta)\left(\frac{\Delta F_{CO2}\,(1 + f_{CH4})\,(1 + f_{ESS})}{\Delta T}\right)$$

$$S = -\frac{\Delta F_{2xCO2}}{\lambda}$$

$$S = \frac{1}{1 + \zeta}\;\frac{\ln(2)}{\ln\left(\frac{CO2}{284}\right)}\;\frac{\Delta T}{(1 + f_{CH4})(1 + f_{ESS})}$$

Quantity | Sherwood20 | Lewis22
CO2 (ppm) | 375 ± 25 | As Sherwood20
ΔF2xCO2 (W/m2) | 4.00 ± 0.30 | 3.93 ± 0.30
ΔFCO2 (difference in forcing from CO2, W/m2) | 1.604 | 1.576
ζ (how much higher ECS is than S) | 0.06 ± 0.20 | 0.135 ± 0.10
fCH4 | 0.40 ± 0.10 | As Sherwood20
fESS | 0.50 ± 0.25 | 0.67 ± 0.40
ΔT (°C) | 3.00 ± 1.00 | 2.48 ± 1.25
λ (W/m2/°C) | -1.190 | -1.686
Climate sensitivity, S (°C) | 3.36 | 2.33

ΔT is the difference in temperature between the mid-Pliocene Warm Period (mPWP) and pre-industrial times. A positive value for the temperature difference means that the mPWP was warmer. ΔFCO2 is the difference in climate forcing (from CO2) between the mPWP and pre-industrial. fCH4 is the estimated forcing change from methane (and actually also N2O/nitrous oxide) relative to the forcing change from CO2 (so if the forcing change from CO2 is 1.6 W/m2, then the combined forcing change from CH4 and N2O is 0.64 W/m2). fESS is how much higher the climate sensitivity ESS is compared to the climate sensitivity ECS. The number 284 in the formula represents the pre-industrial CO2 level (measured in parts per million).

Lewis22 used a newer value for fESS than Sherwood20. Sherwood20 obtained the value of 0.50 (or 50%) from The Pliocene Model Intercomparison Project (which focuses on the Pliocene era) version 1 (PlioMIP1). The value of 0.67 (or 67%) used by Lewis22 was taken from PlioMIP2, a newer version of the PlioMIP project. See Supporting Information, 5.3.3.

The change Lewis22 made to ΔT was also based on PlioMIP2. Tropical temperatures during the mPWP were about 1.5°C higher than pre-industrial tropical temperatures. To determine the change in global temperature, Sherwood20 multiplied the change in tropical temperatures by 2 on the grounds that average global temperature has changed about twice as much as tropical temperature over the last 500,000 years. However, conditions on Earth were different during the mPWP three million years ago, with much less extensive ice sheets than at present. The PlioMIP2 project has used climate models to estimate that changes in global temperature may have been about 1.65 times higher than changes in tropical temperature during the Pliocene. Lewis22 used this value and consequently multiplied the tropical temperature change (1.5°C) by 1.65 instead of by 2. This changes ΔT from 3.00°C down to 2.48°C. See Supporting Information, 5.3.3.
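Again, a minimal Python sketch with the central values from the table, using the simplified formula above (names are mine; uncertainties ignored):

```python
import math

def s_mpwp(co2_ppm, dT, zeta, f_ch4, f_ess, co2_preindustrial=284.0):
    """Effective climate sensitivity S from the mPWP (central values only)."""
    return (1.0 / (1.0 + zeta)) \
        * (math.log(2.0) / math.log(co2_ppm / co2_preindustrial)) \
        * dT / ((1.0 + f_ch4) * (1.0 + f_ess))

print(s_mpwp(co2_ppm=375, dT=3.00, zeta=0.06,  f_ch4=0.40, f_ess=0.50))  # ~3.4 degC
print(s_mpwp(co2_ppm=375, dT=2.48, zeta=0.135, f_ch4=0.40, f_ess=0.67))  # ~2.3 degC
```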

3. Paleocene–Eocene Thermal Maximum (PETM):

$$S = \frac{1/f_{CO2nonLog}}{(1 + \zeta)\left(\frac{\ln\left(\frac{CO2}{900}\right)}{\ln(2)}\,\frac{1 + f_{CH4}}{\Delta T}\right) - \beta}$$

Using β = 0, we can simplify to:

$$S = \frac{\ln(2)}{\ln\left(\frac{CO2}{900}\right)}\;\frac{\Delta T}{(1 + \zeta)(1 + f_{CH4})\,f_{CO2nonLog}}$$

Quantity | Sherwood20 | Lewis22
ζ | 0.06 ± 0.20 | 0.135 ± 0.10
ΔT (°C) | 5.00 ± 2.00 | As Sherwood20
fCH4 | 0.40 ± 0.20 | As Sherwood20
CO2 (ppm) | 2400 ± 700 | As Sherwood20
β | 0.0 ± 0.5 | As Sherwood20
fCO2nonLog | Omitted (1.00) | 1.117
Climate sensitivity, S (°C) | 2.38 | 1.99

ΔT refers to the difference in temperature between the Paleocene-Eocene Thermal Maximum (PETM) and the time just before and after the PETM. During the PETM, temperatures were about 13°C higher than pre-industrial temperatures. Just before and after the PETM, temperatures were about 5°C lower than this (8°C higher than pre-industrial). The number 900 in the formula represents the approximate CO2 level before and after the PETM. fCH4 is again the estimated difference in climate forcing from methane (and nitrous oxide) relative to the forcing change from CO2.

Sherwood20 assumes that the relationship between CO2 concentration and CO2 forcing is logarithmic. Lewis refers to Meinshausen et al. 2020, the results of which were adopted by the IPCC in AR6, which found that at high CO2 concentrations (such as during the PETM), the climate forcing is higher than if the relationship had been purely logarithmic. By using a formula from Meinshausen, Lewis found that the CO2 forcing during the PETM would have been some 11.7% higher than Sherwood20’s assumption of a purely logarithmic relationship. Therefore, Lewis22 used a value of 1.117 for fCO2nonLog. See Supporting Information, 5.3.4.

One would think that a higher CO2 forcing at high temperatures would imply a higher climate sensitivity during warm periods, but that’s not necessarily true, since feedback strengths may be different in warm and cold periods. However, if feedback strengths are the same in warm and cold periods, then a higher CO2 forcing implies a higher climate sensitivity.

Sherwood20 then assumes that the climate sensitivity ESS during the PETM is roughly the same as today’s equilibrium climate sensitivity, ECS. Uncertainty is accounted for with the parameter β, whose mean value is set to zero. Lewis agrees that “[a]ssuming zero slow feedbacks in the PETM (so ESS equals ECS) may be reasonable, given the lack of evidence and the absence of major ice sheets.” However, some studies (that rely on climate models) suggest that climate sensitivity during the PETM may have been higher than it is today. For this reason, Lewis thinks a positive mean value for β would be better. He nonetheless retained Sherwood20’s estimate of zero. See Supporting Information, 5.3.4.
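And the corresponding sketch for the PETM, using the simplified (β = 0) formula above with the central table values (names are mine):

```python
import math

def s_petm(co2_ppm, dT, zeta, f_ch4, f_co2_nonlog=1.0, co2_before_after=900.0):
    """Effective climate sensitivity S from the PETM with beta = 0 (central values only)."""
    return (math.log(2.0) / math.log(co2_ppm / co2_before_after)) \
        * dT / ((1.0 + zeta) * (1.0 + f_ch4) * f_co2_nonlog)

print(s_petm(co2_ppm=2400, dT=5.0, zeta=0.06,  f_ch4=0.40))                      # ~2.4 degC
print(s_petm(co2_ppm=2400, dT=5.0, zeta=0.135, f_ch4=0.40, f_co2_nonlog=1.117))  # ~2.0 degC
```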

This concludes the most technical part of this article. Next up: greenhouse gas emissions.


To determine how much the temperature will rise in the future (disregarding natural variability), it’s not enough to know what the climate sensitivity is – we also need to know approximately how much greenhouse gases will be emitted, so:

What will the emissions be?

The media and many scientists have long used something called RCP8.5 as a business-as-usual scenario for the effect of human activity (including emissions of greenhouse gases), and many still do. RCP stands for Representative Concentration Pathway, and the number (8.5 in this case) is how much greater the net energy input to the atmosphere is in the year 2100 compared to pre-industrial levels, measured in W/m2.

But RCP8.5 was never meant to be a business-as-usual scenario. In a 2019 CarbonBrief article about RCP8.5, Zeke Hausfather, another co-author of Sherwood20, writes:

The creators of RCP8.5 had not intended it to represent the most likely “business as usual” outcome, emphasising that “no likelihood or preference is attached” to any of the specific scenarios. Its subsequent use as such represents something of a breakdown in communication between energy systems modellers and the climate modelling community.

Sherwood20 also mentions that RCP8.5 should not be seen as a business-as-usual scenario, but rather as a worst-case scenario:

Note that while RCP8.5 has sometimes been presented as a “business as usual” scenario, it is better viewed as a worst case (e.g., Hausfather & Peters, 2020).

RCPs are often referred to as scenarios, as I also did earlier. But it may be better to think of an RCP as a collection of scenarios that all result in roughly the same net change in incoming energy in the year 2100. Thousands of different scenarios have been developed, and these can be used as inputs to climate models when they simulate future climates.

Plausible emissions scenarios

Roger Pielke Jr, Matthew Burgess, and Justin Ritchie published a study in early 2022 titled Plausible 2005–2050 emissions scenarios project between 2 °C and 3 °C of warming by 2100. In Pielke22, the different scenarios used in IPCC’s 2013 assessment report were categorized based on how well they were able to predict actual emissions from 2005 to 2020, in addition to how well their future emissions matched the International Energy Agency’s projections until 2050. Assuming that the scenarios that best matched actual and projected emissions will also be the ones that will be best at predicting emissions in the second half of the century, they found that RCP3.4 is the most likely (or plausible) RCP.

These scenarios (RCP3.4) are largely compatible with a temperature increase of between 2 and 3°C from pre-industrial times to 2100, with 2.2°C as the median value. Earth’s average temperature has increased by about 1.2°C since pre-industrial, so the median of 2.2°C corresponds to a temperature increase from today to 2100 of about 1.0°C.

After Pielke22 was published, Pielke Jr also looked at the scenarios used in IPCC’s latest assessment report (from 2021). He spoke about this in a talk in November 2022 (54:03-1:06:16), and, according to Pielke Jr, the median value for these newer scenarios is 2.6°C (rather than 2.2°C). This corresponds to a temperature rise of 1.4°C from today until 2100. In the following, I will use this more recent value.

In the talk, Pielke Jr says that RCP4.5 should now be considered a high-emissions scenario, while RCP8.5 and RCP6.0 are unlikely (58:12):

The high emissions scenarios are clearly implausible […]. What’s a high emissions scenario? Anything over 6 W/m2 […].

RCP 4.5 and the SSP2-4.5 are plausible high emissions scenarios. I know in the literature they’re often used to represent mitigation success. Today I think we can say based on this method that they’re in fact high-end scenarios. A business as usual – or consistent with current policy – scenario is a 3.4 W/m2 scenario. I will say that scenario is almost never studied by anyone.

Pielke22 doesn’t mention climate sensitivity explicitly, but the median equilibrium climate sensitivity (ECS) used in the latest generation of climate models is 3.74°C. ECS is likely higher than the effective climate sensitivity (S), which is the type of climate sensitivity that Sherwood20 and Lewis22 calculated. According to Sherwood20, ECS is 6% higher than S. According to Lewis22, ECS is 13.5% higher. Using Lewis22’s value of 13.5%, an ECS of 3.74°C corresponds to an effective climate sensitivity (S) of 3.30°C.

If the climate sensitivity S is closer to 2.16°C, as Lewis22 found, then the temperature increase from today to 2100 will be approximately 35% lower than what Pielke Jr found. This means that the temperature increase from today will be 0.9°C instead of 1.4°C (0.9°C higher than today will be 2.1°C above pre-industrial).
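A back-of-the-envelope check of the numbers in the two paragraphs above, assuming (as a simplification) that warming from today to 2100 scales roughly in proportion to the effective climate sensitivity S:

```python
ecs_cmip6 = 3.74               # median ECS in the latest generation of climate models
s_cmip6 = ecs_cmip6 / 1.135    # convert ECS to S using Lewis22's 13.5% -> ~3.30 degC
s_lewis22 = 2.16               # effective climate sensitivity from Lewis22

warming_pielke = 1.4           # degC from today to 2100 (Pielke Jr, newer scenarios)
scaling = s_lewis22 / s_cmip6  # ~0.65, i.e. roughly 35% lower

print(s_cmip6, scaling, warming_pielke * scaling)  # ~3.30, ~0.65, ~0.9 degC
```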

An assumption in the RCP3.4 scenarios is widespread use of CO2 removal from the atmosphere in the second half of the century. Pielke22 did not assess whether that’s feasible:

Importantly, in the scenarios our analysis identifies as plausible, future decarbonization rates accelerate relative to the present, and many include substantial deployment of carbon removal technologies in the latter half of the century, the feasibility of which our analysis does not assess.

Given the recent rapid pace of technological development, I believe it to be highly likely that potent CO2 removal technologies will be developed this century. However, other methods may be more economically effective in limiting an unwanted temperature rise, e.g. manipulating the cloud cover, as Bjørn Lomborg suggests in an interview on Econlib (skip forward to 8:35 and listen for 2 minutes or read in footnote 17)).

In October 2022, The New York Times published an extensive article titled Beyond Catastrophe – A New Climate Reality Is Coming Into View. According to the author, David Wallace-Wells, recent evidence shows that the Earth is on track for a 2-3°C warming from the 1800s until 2100 instead of the previously feared 4-5°C. 2-3°C is the same as Pielke22 found.

According to The New York Times article, Hausfather contends that about half of the reduction in expected temperature rise is due to an unrealistic scenario being used previously (RCP8.5). The other half comes from “technology, markets and public policy”, including faster-than-expected development of renewable energy.

How much will temperatures rise by 2100?

Figure 1 (b) in Sherwood20 (graph (b) below) shows how much the temperature is likely to rise between 1986-2005 and 2079-2099, depending on effective climate sensitivity (S) and RCP scenario. This period is about 16 years longer than the 77 years from today until 2100, so the temperature rise for the remainder of the century will be less than the graph suggests – about 18% lower if we assume a linear temperature rise.

We can see in the graph that if RCP4.5 is the correct emissions scenario and the effective climate sensitivity is 3.1°C, then the temperature will rise by about 1.8°C between 1986-2005 and 2079-2099. To estimate the temperature rise from today until 2100, we subtract 18% from 1.8°C, resulting in an estimated increase of about 1.5°C.

Using instead Lewis22’s effective climate sensitivity of 2.16°C with the RCP4.5 scenario, we can see from the graph that the temperature increase will be approximately 1.25°C. This corresponds to a temperature rise of 1.0°C from today until 2100.
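The adjustment from the figure’s 1986-2005 to 2079-2099 period down to the 2023-2100 period is a simple proportional scaling. A minimal sketch of the arithmetic in the two paragraphs above:

```python
def warming_to_2100(warming_from_figure, reduction=0.18):
    """Scale the 1986-2005 -> 2079-2099 warming read off Sherwood20's figure
    down to 2023-2100, assuming an approximately linear temperature rise."""
    return warming_from_figure * (1.0 - reduction)

print(warming_to_2100(1.8))   # RCP4.5, S = 3.1 degC  -> ~1.5 degC
print(warming_to_2100(1.25))  # RCP4.5, S = 2.16 degC -> ~1.0 degC
```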

RCP3.4 is not included in the graph, but we can assume that the temperature increase for RCP3.4 will be a few tenths of a degree lower than for RCP4.5, so perhaps 0.7-0.8°C, which also agrees quite well with what Pielke Jr found (0.9°C) after we adjusted for the climate sensitivity from Lewis22.

0.8°C corresponds to a temperature rise of 2.0°C since the second half of the 19th century and is identical to the Paris Agreement’s two degree target. 2.0°C is also within the New York Times interval of 2-3°C, where – as for the two degree target – pre-industrial is the starting point.

Although Lewis22’s estimate of climate sensitivity may be the best estimate as of today, it’s not the final answer. Much of the adjustment made to Sherwood20’s estimate was based on more recent data, and as newer data becomes available in the future, the effective climate sensitivity estimate of 2.16°C is going to be revised up or down.

And Nic Lewis himself points out that:

This large reduction relative to Sherwood et al. shows how sensitive climate sensitivity estimates are to input assumptions.

But he also criticizes the IPCC for significantly raising the lower end of the climate sensitivity likely range (from the previous to the latest assessment report, the lower end of the likely range was raised from 1.5 to 2.5°C):

This sensitivity to the assumptions employed implies that climate sensitivity remains difficult to ascertain, and that values between 1.5°C and 2°C are quite plausible.

It will be interesting to see what the authors of Sherwood20 have to say about Lewis22.


Footnotes

1) From the Comment in Nature (which is written by five authors, four of whom are co-authors of Sherwood20):

On the basis of [Sherwood20] and other recent findings, the AR6 authors decided to narrow the climate sensitivity they considered ‘likely’ to a similar range, of between 2.5 and 4 °C, and to a ‘very likely’ range of between 2 °C and 5 °C.

The Comment in Nature is titled Climate simulations: recognize the ‘hot model’ problem, but it’s behind a paywall. Luckily, however, it’s also published on MasterResource.

2) Zeke Hausfather has written on CarbonBrief that for CO2 levels to remain at the same high level after a doubling of CO2, it’s necessary to continue emitting CO2. If humans stop emitting CO2, the atmosphere’s CO2 level will fall relatively quickly. Temperature, however, is not expected to fall, but will likely remain constant for a few centuries (disregarding natural variability).

3) It may not be entirely correct to say that the temperature will increase by 1.2°C if there are no feedback effects. The reason is that the so-called Planck feedback is included in the formula for the “no feedback” climate sensitivity:

{"aid":null,"font":{"color":"#000000","family":"Arial","size":"14"},"code":"$$ECS_{noFeedback}\\,=\\,-\\frac{ΔF_{2xCO2}}{λ_{Planck}}$$","type":"$$","id":"13","backgroundColorModified":false,"backgroundColor":"#FFFFFF","ts":1670102750556,"cs":"w9TVSjy5fjgzhI4lBw1dwA==","size":{"width":279.25,"height":51.5}}

However, the Planck feedback can be seen as a different kind of feedback than the other feedbacks mentioned here, and it’s sometimes called the Planck response or no-feedback response. Anyway, if we insert the values from the studies we’re going to discuss in this article, then for Sherwood20 (ΔF2xCO2 = 4.00 W/m2 and λPlanck = -3.20 W/m2/°C) we get that ECSnoFeedback = 1.25°C. For the other study, Lewis22 (ΔF2xCO2 = 3.93 W/m2 and λPlanck = -3.25 W/m2/°C) we get ECSnoFeedback = 1.21°C.
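Written out (with S20 = Sherwood20 and L22 = Lewis22), the arithmetic is simply:

$$ECS_{noFeedback}^{S20} = -\frac{4.00}{-3.20} = 1.25\,°C, \qquad ECS_{noFeedback}^{L22} = -\frac{3.93}{-3.25} \approx 1.21\,°C$$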

4) Pre-industrial has traditionally been defined as the average of 1850-1900. Sherwood20 and Lewis22 have used the average of 1861-1880 as pre-industrial, since it is far less affected by volcanic activity. IPCC has started to use 1750.

5) This is the theory, at least. However, Andy May has shown that the relationship between temperature and the atmosphere’s water content may be more complicated. His argument is presumably based on the best available data, but he also notes that the data for atmospheric water content is somewhat poor.

6) If we add up the strengths of all the feedback effects including the Planck feedback, we get a negative number. But when the Planck feedback is not included, the sum is very likely positive. And if this sum is positive, it means that the climate sensitivity (ECS) is higher than 1.2°C (which is what the climate sensitivity would be with no feedback effects other than the Planck feedback, see footnote 3).

7) The IPCC estimated equilibrium climate sensitivity (ECS). Sherwood20, on the other hand, calculated effective climate sensitivity (S). ECS is likely higher than S – 6% higher according to Sherwood20, 13.5% higher according to Lewis22 (which is the study that corrects Sherwood20).
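As a quick illustration (my own arithmetic using the ratios just mentioned), Lewis22’s effective climate sensitivity of S = 2.16°C would correspond to an equilibrium climate sensitivity of roughly

$$ECS \approx 1.135 \times 2.16\,°C \approx 2.45\,°C$$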

8) From Sherwood20:

Among these distinct feedbacks, those due to clouds remain the main source of uncertainty in λ, although the uncertainty in the other feedbacks is still important. 

λ (lambda) is the strength of a feedback effect. A positive λ means that the corresponding feedback effect increases climate sensitivity. Negative λ does the opposite. If the value of λ is known for every type of feedback, then the climate sensitivity can easily be calculated from the sum of the feedback strengths:

{"font":{"size":"14","color":"#000000","family":"Arial"},"aid":null,"backgroundColor":"#FFFFFF","id":"5","code":"$$ECS\\,=\\,-\\frac{ΔF_{2xCO2}}{λ}$$","type":"$$","backgroundColorModified":false,"ts":1670767186833,"cs":"X7nj47G99ItTFq6sBAYn3g==","size":{"width":198.5,"height":46.333333333333336}}

9) Sherwood20 writes:

However, uncertainty in radiative forcing [during the past 150 years] is dominated by the contribution from anthropogenic aerosols, especially via their impact on clouds, which is relatively unconstrained by process knowledge or direct observations (Bellouin et al., 2020).

10) Andrew Dessler has been lead author or co-author of several studies on the pattern effect. In a couple of YouTube videos (one short and one long), you can watch his explanation of the pattern effect in relation to committed warming (though he doesn’t use the term pattern effect in the short video).

An example Dessler uses to illustrate the pattern effect is from the oceans around Antarctica:

The existence of present day cold sea surface temperatures in these regions while the overlying atmosphere is warming due to global warming favors the buildup of low clouds over the region. These clouds reflect sunlight back to space and tend to cool the planet.

From Dessler’s short video (3:08)

When the ocean temperature eventually increases, fewer clouds are expected, which will lead to faster warming.

Nic Lewis (who criticized Sherwood20) has written an article criticizing the study that Dessler discusses in the videos (Zhou et al 2021, titled Greater committed warming after accounting for the pattern effect). Lewis’ article, published on Judith Curry’s climate blog (Climate Etc), isn’t peer reviewed, but he has also published a peer-reviewed study on the pattern effect.

The dataset for sea surface temperature (SST) used in Zhou et al implies a relatively large pattern effect. However, Lewis notes that other sea surface temperature datasets imply a much smaller pattern effect. The reason for the discrepancy is that sea surface temperature measurements historically have been quite sparse. The uncertainty is therefore substantial.

Lewis also criticizes Zhou et al for not distinguishing between the forced and unforced pattern effect. The forced component has to do with the effect of greenhouse gases, while the unforced component has to do with natural variability. The two components have different implications for future committed warming: whereas the greenhouse gas-related component will have little effect on warming this century, the natural-variability component may have a larger effect.

Lewis found that the natural-variability component is very close to zero if two conditions are met: (1) a different sea surface temperature dataset is used than the one Zhou et al used, and (2) the reference period lies outside the hiatus (1998-2014), a period of relatively slow temperature rise which may have been caused by a cooling effect from natural variability. It’s thus uncertain whether the pattern effect will have any significant impact on temperatures this century.

11) IPCC on the pattern effect (latest assessment report, section 7.4.4.3):

[T]here is low confidence that these features, which have been largely absent over the historical record, will emerge this century[.]

12) From Wikipedia:

Although the term “climate sensitivity” is usually used for the sensitivity to radiative forcing caused by rising atmospheric CO2, it is a general property of the climate system. Other agents can also cause a radiative imbalance. Climate sensitivity is the change in surface air temperature per unit change in radiative forcing, and the climate sensitivity parameter is therefore expressed in units of °C/(W/m2). Climate sensitivity is approximately the same whatever the reason for the radiative forcing (such as from greenhouse gases or solar variation). When climate sensitivity is expressed as the temperature change for a level of atmospheric CO2 double the pre-industrial level, its units are degrees Celsius (°C).

13) Sherwood20 uses the value 4.00±0.30 W/m2, while Lewis22 uses 3.93±0.30 W/m2 for the climate forcing for doubled CO2, which accords with the AR6 assessment (uncertainties here are ±1 standard deviation).

Some skeptics argue that the atmosphere’s absorption of CO2 is saturated, which would imply that the climate forcing for doubled CO2 is close to zero. According to Nic Lewis, this is wrong. The following quote is from a 2019 talk by Lewis (14:00):

Another point that is often argued is that the absorption by carbon dioxide is saturated – that it can’t get any stronger. Unfortunately, that is not the case. However, it is a logarithmic relationship, approximately, so it increases slower and slower. Roughly speaking, every time you double carbon dioxide level, you get the same increase in the effect it has in reducing outgoing radiation. And this decrease in outgoing radiation is called a radiative forcing, and it’s just under 4 W/m2 of flux for every time you double carbon dioxide. And again, this is pretty well established.

And a little earlier in the same talk (11:12):

The black is the measured levels – this is measured by satellite at the top of the atmosphere. […] And the red lines are from a specialized radiative transfer model, and you can see how accurately they reproduce the observations. And what that reflects is that this is basic radiative physics, it’s very soundly based. There’s no point in my view disputing it because the evidence is that the theory is matched by what’s actually happening.

The figure that he’s talking about is this one:

The figure shows how CO2 and other (greenhouse) gases in the atmosphere absorb infrared light from the ground at various wavelengths in the absence of clouds (above the Sahara). Without an atmosphere, the outgoing radiation would follow the top dashed line marked by the temperature 320 K (47°C).
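To illustrate the approximately logarithmic relationship Lewis describes, here is a minimal sketch. It uses the widely cited simplified expression ΔF ≈ 5.35·ln(C/C₀) W/m2 for the stratospherically-adjusted forcing, which is a textbook approximation and not a value taken from Sherwood20 or Lewis22 (those studies use effective radiative forcings of about 3.9-4.0 W/m2 per doubling):

```python
import math

def co2_forcing(c_new, c_old):
    """Approximate radiative forcing (W/m2) from a change in CO2 concentration,
    using the simplified logarithmic expression dF = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_new / c_old)

print(round(co2_forcing(560, 280), 2))    # first doubling:  ~3.71 W/m2
print(round(co2_forcing(1120, 560), 2))   # second doubling: ~3.71 W/m2 (same increment)
print(round(co2_forcing(420, 280), 2))    # 280 -> 420 ppm (roughly today): ~2.17 W/m2
```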

14) Lewis writes:

A significant advantage of the LGM transition is that, unlike more distant periods, there is proxy evidence not only of changes in temperature and CO2 concentration but also of non-CO2 forcings, and that enables estimation of the effects on radiative balance of slow (ice sheet, etc.) feedbacks, which need to be treated as forcings in order to estimate ECS (and hence S) rather than ESS.

15) The method that Sherwood20 had used to calculate the likelihood of different climate sensitivities was invalid in some circumstances. Among other things, the method assumed a normal (Gaussian) distribution of all input parameters. But for historical evidence (data for the past 150 years), this wasn’t the case since the climate forcing from aerosols wasn’t normally distributed.

To triple-check that Sherwood20’s method was invalid, Lewis calculated the probability distribution using three different methods, and they all gave the same result.

The method used by Sherwood20 led to an underestimation of the probability of high climate sensitivity values:

The dashed lines here show Sherwood20’s results for historical evidence, while the solid lines show Lewis22’s correction.

Correcting this error in Sherwood20 caused the median for the combined climate sensitivity to increase from 3.10 to 3.16°C. (The further increase from 3.16 to 3.23°C was due to Lewis applying the objective Bayesian method rather than the subjective Bayesian method.)

See Likelihood estimation for S in Lewis22, Supporting Information (S2) and Appendix B in Lewis’ summary of Lewis22 for more details.

16) Conservative choices in Lewis22 (Supporting Information) – S20 is Sherwood20:

I make no changes to S20’s assessments of other cloud feedbacks. However, I note that Lindzen and Choi (2021) cast doubt on the evidence, notably from Williams and Pierrehumbert (2017), relied upon by S20 that tropical anvil cloud feedback is not, as previously suggested (Lindzen and Choi 2011; Mauritsen and Stevens 2015), strongly negative.

The resulting median revised total cloud feedback estimate is 0.27 − almost double the 0.14 for nine CMIP6 GCMs that well represent observed interhemispheric warming (Wang et al. 2021).

S20’s GMST [=Global Mean Surface Temperature] estimate was infilled by kriging, which does not detect anisotropic features. Recently, a method that does detect anisotropic features was developed, with improved results (Vaccaro et al. 2021a,b). Infilling the same observational dataset as underlies S20’s infilled estimate, the improved method estimates a 9% lower GMST increase. Nevertheless, I retain S20’s estimate of the GMST rise, resulting in a GMAT [=Global Mean Air Temperature] ΔT estimate of 0.94 ± 0.095 [°C].

S20’s 0.60 Wm−2 estimate of the change in planetary radiative imbalance equals that per AR6. However, AR6 (Gulev et al. 2021 Figure 2.26(b)) shows that, excluding series that are outliers, the AR6 0-2000m [Ocean Heat Content] estimate is middle-of-the-range in 2018 but at its bottom in 2006, hence yielding an above average increase over that period. Nevertheless, I retain S20’s estimate.

Moreover, Golaz et al. (2019) found that an advanced [Global Climate Model] with historical aerosol [Effective Radiative Forcing] of −1.7 Wm−2, tuned on the pre industrial climate, would only produce realistic GMAT projections if the aerosol forcing is scaled down to ~−0.9 Wm−2 (and, in addition, its climate sensitivity is halved).

Conservatively, in the light of the foregoing evidence pointing to aerosol forcing being weaker than implied by simply revising B20’s βlnL−lnN estimate, I adopt a modestly weakened aerosol ERF estimate of −0.95 ± 0.55 Wm−2 over, as in B20, 1850 to 2005-15. This implies a 5–95% uncertainty range of −1.85 to −0.05 Wm−2, which has the same lower bound as AR6’s estimate, and is likewise symmetrical.

Scaled to the period 1861-1880 to 2006-2018, the median then becomes 0.86 instead of 0.95, according to Lewis22.

In two [Global Climate Models], Andrews et al. (2018) found a 0.6 weakening in [the pattern effect] when using [a newer sea-ice dataset]. Although the [newer] sea-ice dataset […] is no doubt imperfect […], its developers argue that it is an improvement on [the earlier version]. However, I consider that there is too much uncertainty involved for any sea-ice related reduction to be made when estimating the unforced Historical pattern effect.

In view of the evidence that pattern effect estimates from [Atmospheric Model Intercomparison Project II]-based simulations are likely substantially excessive, and that the unforced element is probably minor and could potentially be negative, it is difficult to justify making a significantly positive estimate for the unforced element. However, a nominal 0.1 ± 0.25 is added to the 0.25 ± 0.17 forced pattern effect estimate, which reflects the substantial uncertainty and allows not only for any unforced pattern effect but also for the possibility that some other element of the revised Historical evidence data-variable distributions might be misestimated.

I revise S20’s central LGM [=Last Glacial Maximum] cooling estimate of −5 [°C] to −4.5 [°C], primarily reflecting, less than fully, the −4.2 [°C] adjusted mean ΔTLGM estimate of the sources cited by S20, and increase the standard deviation estimate to 1.25 [°C] so as to maintain the same –7 [°C] lower bound of the 95% uncertainty range as S20’s.

S20 use the single year 1850 as their preindustrial reference period for GHG concentrations, whereas for observational estimates of temperature change preindustrial generally refers to the average over 1850−1900. For consistency, the S20 GHG [=Greenhouse Gas] forcing changes should therefore use mean 1850−1900 GHG concentrations. Doing so would change the CO2 ERF from –0.57x to –0.59x ΔF2xCO2, as well as marginally changing the CH4 and N2O ERFs. However, conservatively, I do not adjust S20’s LGM forcing estimates to be consistent with the LGM ΔT measure.

S20 adopt the estimate of vegetation forcing in the Kohler et al. (2010) comprehensive assessment of non-greenhouse gas LGM forcing changes, but use a central estimate of –1.0 Wm−2 for aerosol (dust) forcing in place of Kohler et al.’s –1.88 Wm−2. This seems questionable; Friedrich and Timmermann (2020) adopt Kohler et al.’s estimate, while pointing out that estimates of its glacial-interglacial magnitude vary from ~0.33 to ~3.3 Wm−2. I nevertheless accept S20’s estimate of dust forcing[.]

S20 assume that climate feedback in equilibrium (λ’) strengthens by α for every -1 [°C] change in ΔT, resulting in the 0.5 α ΔT_LGM² term in (11), reducing LGM-estimated ECS. Contrariwise, Zhu and Poulsen (2021) found that ocean feedback caused 25% higher LGM-estimated [climate sensitivity] ECS. Moreover, a significant part of the reduction in mean surface air temperature at the LGM is due to ice-sheet caused increased land elevation, which would weaken λ’ compared to in non-glacial climates. Although S20’s [α = 0.1 ± 0.1] estimate appears questionable, I retain it.

Although the Tierney et. al (2019) 1.4 [°C] tropical SST warming estimate appears more reliable than S20’s 1.5 [°C], I retain the latter but multiply it by the 1.65 PlioMIP2 ratio, giving a revised GMAT ΔTmPWP of 2.48 [°C].

S20 assessed a [2400 ± 700] ppm distribution for CO2 concentration in the PETM relative to a baseline of 900 ppm, implying a [1.667 ± 0.778] ΔCO2PETM distribution. That covers, within its 90% uncertainty range, a concentration ratio range (1 + ΔCO2PETM) of 1.39 to 3.95. The CO2 concentration estimates considered by S20, even taking extremes of both their PETM and Eocene ranges, constrain (1 + ΔCO2PETM) within 1.4 to 5. Using instead that range would lower PETM based S estimates. Nevertheless, I retain S20’s ΔCO2PETM distribution.

While Meinshausen et al. assume a fixed ratio of CO2 ERF to stratospherically-adjusted radiative forcing, there is modeling evidence that fast adjustments become more positive at higher temperatures (Caballero and Huber 2013), which would further increase CO2 ERF change in the PETM. I make no adjustment for this effect.

To account for forcing from changes in CH4 concentrations, S20 apply the same 0.4 fCH4 factor to the CO2 forcing change as for the mPWP, with doubled uncertainty, although noting that the tropospheric lifetime of CH4 could be up to four times higher given sustained large inputs of CH4 into the atmosphere (Schmidt and Shindell 2003). I retain S20’s fCH4 distribution, although doing so may bias estimation of S upwards.

S20 assume that ESS [=Earth System Sensitivity] for the PETM was the same as present ECS, representing uncertainty regarding this by deducting a [0 ± 0.5] adjustment (β) from ESS feedback when estimating ECS feedback, λ’. Assuming zero slow feedbacks in the PETM (so ESS equals ECS) may be reasonable, given the lack of evidence and the absence of major ice sheets. However, Caballero and Huber (2013) and Meraner et al. (2013) both found, in modeling studies, substantially (~50%) weaker climate feedback for climates as warm as the PETM. Zhu et al (2019) found, in a state-of-the-art GCM, that ECS was over 50% higher than in present day conditions, with little of the increase being due to higher CO2 ERF. I therefore consider that it would be more realistic to use a positive central estimate for β. Nevertheless, I retain S20’s estimate.

17) Here’s (roughly) what Bjørn Lomborg said:

If [you] want to protect yourself against runaway global warming of some sorts, the only way is to focus on geoengineering, and […] we should not be doing this now, partly because global warming is just not nearly enough of a problem, and also because we need to investigate a lot more what could be the bad impacts of doing geoengineering.

But we know that white clouds reflect more sunlight and hence cool the planet slightly. One way of making white clouds is by having a little more sea salt over the oceans stirred up. Remember, most clouds over the oceans get produced by stirred-up sea salt — basically wave-action putting sea salt up in the lower atmosphere, and those very tiny salt crystals act as nuclei for the clouds to condense around. The more nuclei there are, the whiter the cloud becomes, and so what we could do is simply put out a lot of ships that would basically [stir] up a little bit of seawater — an entirely natural process — and build more white clouds.

Estimates show that the total cost of avoiding all global warming for the 21st century would be in the order of $10 billion. […] This is probably somewhere between 3 and 4 orders of magnitude cheaper — typically, we talk about $10 to $100 trillion of trying to fix global warming. This could fix it for one thousandth or one ten thousandth of that cost. So, surely we should be looking into it, if, for no other reason, because a billionaire at some point in the next couple of decades could just say, “Hey, I’m just going to do this for the world,” and conceivably actually do it. And then, of course, we’d like to know if there’s a really bad thing that would happen from doing that. But this is what could actually avoid any sort of catastrophic outcomes[.]

Hvor mye vil temperaturen på Jorda stige dette århundret?

Kort oppsummert:

FNs klimapanel mener atmosfærens klimasensitivitet sannsynligvis er mellom 2,5 og 4,0 grader. Litt forenklet vil det si at temperaturen på lang sikt vil stige mellom 2,5 og 4,0 grader etter at atmosfærens CO2-nivå dobler seg. En studie fra 2020 (Sherwood20) hadde stor innflytelse på hvordan klimapanelet beregnet klimasensitiviteten.

En ny studie fra 2022 (Lewis22) korrigerte studien fra 2020. Ved å rette på feil og bruke nyere data, blant annet tall fra klimapanelets siste hovedrapport, kom den nye studien frem til en klimasensitivitet som var ca 30% lavere enn studien klimapanelet vektla.

Hvis man vet hvor høy klimasensitiviteten er og også omtrent hvor store fremtidige utslipp av klimagasser blir, kan man finne omtrent hvor mye temperaturen vil stige, for eksempel frem til år 2100.

Når det gjelder fremtidige utslipp, fant en studie fra 2022 (Pielke22) at noe som kalles RCP3.4 er det mest plausible utslippsscenariet. Tradisjonelt har man brukt et annet scenario, RCP8.5, som et business-as-usual scenario, men dette er det nå stor enighet om at er et ekstremt usannsynlig utslippsscenario, med urealistisk høye utslipp.

Når man antar at klimasensitiviteten fra Lewis22 er riktig klimasensitivitet og at RCP3.4 er riktig utslippsscenario, finner vi at temperaturen vil stige mindre enn 1°C fra 2023 til 2100 – når vi ser bort fra naturlig variasjon.

Hvor mye Jordas overflate-temperatur vil stige dette århundret kommer blant annet an på hvor følsom (eller sensitiv) atmosfæren er til klimagasser som CO2, hvor mye klimagasser som slippes ut, og naturlige svingninger. Det er vanskelig å vite hvordan de naturlige svingningene vil bli, så jeg vil her fokusere på klimasensitiviteten og klimagass-utslippene.

Denne artikkelen er til dels ganske teknisk, med mye tall og begreper, selv om de mest tekniske delene finnes i fotnoter – ikke i hovedteksten. Det er ikke forventet at man skal skjønne alt etter bare én gjennomlesning, så ikke bli demotivert om det er noe det er vanskelig å skjønne. Det er uansett lov å spørre hvis det er noe som ikke er forståelig. Det kan godt hende det er jeg som må skjerpe meg og forklare ting på en bedre måte.

Klimasensitivitet

Hvor mye vil temperaturen stige når CO2-innholdet i atmosfæren dobles? Svaret på dette er atmosfærens klimasensitivitet. Så hvis klimasensitiviteten er 3 grader, og CO2-innholdet i atmosfæren dobles på relativt kort tid og holder seg på det nivået, så vil temperaturen ved jordoverflaten på lang sikt stige 3 grader.1) På lang sikt er i dette tilfellet mer enn 1000 år, men de største temperaturendringene kommer tidlig.

Det har lenge vært knyttet stor usikkerhet til hvor høy klimasensitiviteten er. Den såkalte Charney-rapporten fra 1979 kom frem til at klimasensitiviteten mest sannsynlig er mellom 1,5 og 4,5 grader. Og IPCCs femte (og nest siste) hovedrapport fra 2013 konkluderte med akkurat det samme – at det var sannsynlig (dvs 66% sjanse for) at klimasensitiviteten er mellom 1,5 og 4,5 grader. Usikkerheten i Charney-rapporten kan riktignok ha vært undervurdert, så det ble nok gjort en del fremskritt i løpet av de 34 årene, selv om det ikke ga seg utslag på det offisielle klimasensitivitet-estimatet.

Det opereres for så vidt med flere forskjellige typer klimasensitivitet innen klimaforskningen. Jeg vil ikke gå noe mer inn på det ennå, men skal si litt om det senere i artikkelen når det blir relevant. Den typen klimasensitivitet som var referert til over – i Charney-rapporten og av IPCC – kalles Equillibrium Climate Sensitivity og forkortes ECS.

Hvorfor så stor usikkerhet?

Det er stor enighet om at uten såkalte tilbakekoblingsmekanismer (feedback effects), ville klimasensitiviteten (ECS) vært bare ca 1,2 grader 2). Årsaken til den store usikkerheten går på hvordan feedback-effektene påvirker temperaturen.

En feedback-effekt kan være enten positiv eller negativ. En positiv feedback-effekt forsterker oppvarmingen, og bidrar dermed til en høyere klimasensitivitet. En negativ feedback-effekt demper oppvarming og bidrar til en lavere klimasensitivitet.

Feedback-effektene kan igjen variere basert på atmosfærens temperatur og sammensetning, og hvor stor del av Jorda som er dekket av is og vegetasjon blant annet, slik at klimasensitiviteten ikke alltid vil være konstant. Man har derfor definert klimasensitiviteten ECS til å være hvor mye temperaturen til slutt vil ha steget som følge av en dobling av CO2-nivået fra førindustrielt nivå (ca 284ppm – parts per million).

CO2-nivået er nå ca 420ppm, det vil si at det har vært en økning på nærmere 50% siden andre halvdel av 1800-tallet.3) Og blant annet siden det ennå ikke har vært en dobling av CO2-nivået har temperaturen steget mindre enn hva klimasensitiviteten er. Nærmere bestemt har det vært en temperatur-økning på ca 1,2 grader siste 150 år.

Det er en del ulike feedback-mekanismer, her er noen av de viktigste:

  • Vanndamp. Økt mengde klimagasser i atmosfæren fører til økt temperatur. En høyere temperatur tillater så atmosfæren å holde på mer vanndamp, og siden vanndamp er en sterk klimagass, bidrar økt mengde vanndamp i atmosfæren til at temperaturen stiger enda litt mer. Vanndamp er altså en positiv (forsterkende) feedback-effekt.
  • Temperaturfall oppover i atmosfæren (lapse rate). Jo høyere opp i atmosfæren (troposfæren) man kommer, jo kaldere blir det – ca 6,5 grader kaldere per kilometer i snitt. Atmosfærens lapse rate er altså 6,5°C per kilometer. Det er vanlig å bruke begrepet lapse rate også på norsk, men temperaturfallshastighet kan kanskje være en grei oversettelse.
    Feedback fra lapse rate har sammenheng med feedback fra vanndamp og de to ses ofte på sammen. Mer vanndamp gir større temperaturøkning høyere oppe i atmosfæren enn ved jordoverflaten. Det er fordi det høyt oppe generelt er tørrere, og effekten på temperaturen av mer vanndamp er da større. Den økte temperaturen høyere oppe bidrar til mer utstråling til verdensrommet, og Jorda blir dermed avkjølt mer. Det igjen betyr at feedback-effekten fra lapse rate er negativ – men den samlede feedback-effekten fra vanndamp og lapse rate er positiv.
  • Skyer. Uten skyer ville temperaturen på Jorda vært vesentlig høyere enn i dag, men ikke alle skyer er avkjølende. Ulike skytyper har ulik effekt på temperaturen. Høye skyer har i gjennomsnitt en oppvarmende effekt, mens lave skyer i snitt er avkjølende. Når man skal vurdere om feedback-effekten fra skyer er positiv eller negativ, må man finne ut om en høyere temperatur i atmosfæren fører til at skyenes effekt blir mer oppvarmende enn i dag eller mer avkjølende enn i dag. Dette vet man ikke helt sikkert, men IPCC mener det er stor sannsynlighet (over 90%) for at feedback-effekten fra skyer er positiv, og at endringer i skydekke som følge av økt temperatur har en forsterkende effekt på temperaturøkningen.
  • Jordoverflate-albedo-endringer. Jordens albedo sier hvor mye solstråling som reflekteres ut igjen, og er i dag på ca 0,3. Det betyr at 30% av solstrålingen reflekteres direkte ut igjen fra Jorda eller atmosfæren. Jordens albedo kan for eksempel endre seg når en større eller mindre del av Jorda blir dekket av is og snø. Høyere temperatur fører generelt til mindre isdekke, noe som igjen fører til enda litt høyere temperatur siden Jordens albedo synker (mindre stråling reflekteres). Albedo-endringer som følge av endringer i isdekke er derfor en positiv feedback-effekt.
    (Albedo-endringer som følge av endringer i skydekke inngår i feedback-effekten for skyer.)
  • Planck-feedback. Jo varmere noe blir, jo mer varmestråling sender det ut. Så når Jorda blir varmere, sender den ut mer varmestråling. Det betyr at Jorda (som alt annet) mister energi raskere når temperaturen øker. Og det er dette som er Planck-feedback’en, og det er en sterkt negativ feedback-effekt – den begrenser mye av temperaturøkningen vi ellers ville fått fra økt drivhuseffekt.
    Planck-feedback’en er en litt annerledes feedback enn de andre feedback-effektene, og den er faktisk allerede inkludert når man beregner hvor mye temperaturen vil stige uten (andre) feedback-effekter. Så det er kanskje like greit å ikke tenke på den som en egentlig feedback-effekt.4)

Ulike måter å beregne klimasensitiviteten på

Det er flere måter å beregne klimasensitiviteten på. Man kan basere det på temperaturøkningen vi har sett de siste ca 150 årene hvor vi vet omtrent hvor mye temperaturen har steget og hvordan endringene i blant annet klimagasser har vært (historical evidence). Eller man kan beregne effektene av de ulike kjente feedback-mekanismene og summere dem (process evidence). Eller man kan beregne det ut fra hvor mye temperaturen har endret seg siden forrige istid eller andre varme eller kalde perioder i Jordens historie (paleo evidence). Man kan også bruke klimamodeller – store dataprogrammer som simulerer fortidens og fremtidens klima.

Sherwood20

I 2020 kom det ut en stor studie med 25 forfattere som kombinerte de tre førstnevnte metodene. De brukte altså ikke klimamodeller direkte, selv om studien ofte støtter seg på klimamodeller for å underbygge eller sannsynliggjøre verdiene de bruker i utregningene.

Studien har tittelen An Assessment of Earth’s Climate Sensitivity Using Multiple Lines of Evidence. Steven Sherwood er hovedforfatter, og man refererer derfor gjerne til studien som Sherwood et al 2020. For enkelhets skyld vil jeg bare kalle den Sherwood20.

Sherwood20 konkluderte med at klimasensitiviteten sannsynligvis (66% sikkert) er mellom 2,6 og 3,9 grader, med 3,1 grader som middelverdi (median). (Det er like stor sannsynlighet (50%) for at klimasensitiviteten er høyere enn middelverdien som at den er lavere.) IPCCs siste hovedrapport la stor vekt på Sherwood20, og IPCC i sin tur konkluderte med at klimasensitiviteten sannsynligvis er mellom 2,5 og 4,0 grader.5) Dermed hadde man endelig klart å redusere usikkerhetsintervallet betraktelig.

Sherwood20 går grundig gjennom alle faktorer de mener påvirker klimasensitiviteten og diskuterer usikkerhetsmomenter.

Klimasensitivitet beregnet ved å summere feedback-effekter

Feedback-effektene som Sherwood20 fokuserte på var først og fremst de fem jeg listet opp tidligere. Andre feedback-effekter summerte de opp til å ha ingen effekt. For å beregne klimasensitiviteten på bakgrunn av feedback-effekter, summerer man først styrken av alle de ulike feedback-effektene, og så har man en enkel formel for å konvertere total feedback-effekt til klimasensitivitet.

Den største usikkerheten når det gjelder feedback-effekter kommer fra skyenes feedback-effekt. Ifølge Sherwood20 er feedback-effekten fra skyene med stor sannsynlighet positiv, men usikkerhetsintervallet er fortsatt relativt stort 6) selv om det har blitt innsnevret de siste årene.

Klimasensitivitet beregnet fra temperatur og andre data for de siste 150 årene

Man vet sånn ca hvordan temperaturen har variert på Jorda de siste 150 årene og også omtrent hvordan mengden klimagasser i atmosfæren har økt. Men for å kunne beregne klimasensitiviteten det tilsvarer, må man i tillegg vite hvor stor effekt aerosoler har hatt på temperaturen og helst også hvordan temperaturen ville variert uten utslipp av klimagasser (naturlig variasjon). I tillegg har man noe som heter mønster-effekten, som sammen med aerosoler er det som bidrar mest til usikkerheten i klimasensitiviteten når den beregnes fra historical evidence:

  • Aerosoler: Aerosoler er ifølge Wikipedia “små partikler av væske eller fast stoff i atmosfæren, men ikke skyer eller regndråper. Disse kan enten ha naturlig opphav eller være menneskeskapte. Aerosoler kan påvirke klimaet på forskjellige komplekse måter ved at de påvirker jordens strålingsbalanse og skydannelse. Studier tyder på at disse har blitt sluppet ut siden industrialiseringen startet, og har gitt en avkjølende effekt.”
    Det er overraskende stor usikkerhet knyttet til aerosolenes effekt. De har med stor sannsynlighet en kjølende effekt, men særlig fordi man mangler kunnskap om hvordan aerosolene påvirker skyene, er usikkerheten stor.7) Sammen med utslipp av klimagasser, slippes det også ut aerosoler ved forbrenning av fossile brensler, men med nyere teknologi kan vi begrense utslippene av aerosoler. Hvis aerosolene har en sterkt avkjølende effekt, betyr det at de har motvirket en del av oppvarming fra klimagasser. Noe som i så fall betyr at klimasensitiviteten må være relativt høy. Hvis aerosolene har liten effekt, peker det mot en lavere klimasensitivitet.
  • Mønster-effekten (Historical pattern effect): Ulike geografiske områder har hatt ulik grad av oppvarming siden 1800-tallet.8) Sherwood20 antar at områdene som har hatt liten oppvarming etter hvert vil “ta igjen” områdene som har blitt oppvarmet mer, selv om det ikke nødvendigvis vil skje dette århundret.9) Det er få tidligere klimasensitivitet-studier som har tatt hensyn til mønster-effekten, og det er ganske stor usikkerhet rundt hvor stor den er. Derfor blir usikkerheten i klimasensitiviteten når den beregnes ut fra temperatur og andre data for de siste ca 150 årene vesentlig større i Sherwood20 enn i de tidligere studiene.

Klimasensitivitet beregnet fra tidligere tiders varme og kalde perioder

Sherwood20 har brukt én tidligere kuldeperiode og én varmeperiode for å beregne klimasensitiviteten på bakgrunn av tidligere tiders klima. De så også på ytterligere én varmeperiode (PETM – Paleocene–Eocene Thermal Maximum), men brukte ikke resultatet fra den videre i studien.

Kuldeperioden Sherwood20 så på var den kaldeste perioden i siste isted – for ca 20.000 år siden (Last Glacial Maximum, LGM), hvor det ifølge studien var ca 5±1 grader kaldere enn førindustriell temperatur (6±1 grader kaldere enn i dag).

Varmeperioden de så på var mid-Pliocene Warm Period (mPWP) for omtrent 3 millioner år siden, da det var 3±1 grader varmere enn i vår tidsepoke (Holocen, tiden etter forrige istid – siste 12 000 år), som igjen ser ut til å være i gjennomsnitt omtrent like varmt som førindustriell temperatur – dermed blir temperaturen i mPWP ca 2±1 grader varmere enn i dag.

Jeg skjønte i utgangspunktet ikke hvordan man kunne beregne atmosfærens klimasensitivitet overfor CO2 ut fra hvor kaldt eller varmt det var forrige istid eller varmeperioder tidligere i Jordens historie. Grunnen til at det er mulig er at man kan snakke om atmosfærens klimasensitivitet mer generelt, man trenger ikke nødvendigvis å trekke inn CO2.10)

En bestemt økning av energitilførselen til Jorda vil (over tid) føre til en økning av temperaturen ved jordoverflaten. Og hvis klimasensitiviteten er høy, vil temperaturen øke relativt mye. Hvis klimasensitiviteten er lav, vil temperaturen øke mindre. Naturlig nok. Og det gjelder uavhengig av om den økte energitilførselen skyldes CO2 eller noe annet.

Man har funnet at en økning av CO2-innholdet i atmosfæren reduserer utstrålingen til verdensrommet fra Jordas atmosfære med ca 4 watt per kvadratmeter (W/m2) over hele atmosfæren for hver dobling av CO2-innholdet.11) Når utstrålingen reduseres, betyr det at mer energi blir værende igjen i atmosfæren, slik at den varmes opp inntil utstrålingen har økt så mye at den igjen balanserer innkommende energi.

En dobling av CO2-innholdet i atmosfæren vil ha samme effekt som en tilsvarende økning i solinntrålingen – altså en økning som gir en økt energitilførsel på ca 4 W/m2 over hele atmosfæren (i hvert fall tilnærmet samme effekt, se steg 4 her).

Dette betyr at hvis man kan beregne omtrent hvor mye varmere eller kaldere det var på et tidspunkt i fortiden, og man også kan beregne hvor mye mer eller mindre energi Jorden mottok på det tidspunktet enn nå, så skal det være mulig å beregne hvor sensitiv atmosfæren er til en endret energitilførsel.

Hvis man i tillegg vet omtrent hva en dobling av CO2-nivået gjør med energitilførselen til atmosfæren, kan man ut fra det beregne klimasensitiviteten til CO2 – altså hvor mange grader temperaturen vil stige når CO2-nivået dobles.

Man kan altså (i teorien) ha samme CO2-nivå i dag og på et tidligere tidspunkt da det var mye varmere eller kaldere enn nå – og selv om det ikke hadde vært endringer i CO2-nivået, ville det vært mulig å beregne klimasensitiviteten til CO2 – fordi vi vet omtrent hva en dobling av CO2-nivået gjør med energitilførselen til atmosfæren.

Når man beregner klimasensitiviteten ved å se på tidligere tiders klima, hvor man altså ser på veldig lange tidsperioder, har alle de trege feedback-mekanismene fått tid til å virke. Det vil si at man kan finne den “ekte” langsiktige klimasensitiviteten. Ut fra det jeg har skrevet tidligere, skulle man tro at dette var klimasensitiviteten ECS, men i definisjonen av ECS holdes Jordens isdekke konstant, så ECS er på en måte en teoretisk – ikke ekte – klimasensitivitet. Den ekte langsiktige klimasensitiviteten kalles Earth System Sensitivity (ESS). Riktignok kan ESS og ECS være (ganske) like hvis det ikke er endringer i isdekket.

Den klimasensitiviteten Sherwood20 beregnet kalles Effective Climate Sensitivity (S) og er en tilnærming til ECS. ECS er trolig høyere enn S, og når det er store endringer i Jordens isdekke, er ESS trolig vesentlig høyere enn ECS igjen.

Men selv om ESS er den ekte langsiktige klimasensitiviteten, kan man faktisk si at S er den mest relevante klimasensitiviteten for oss, fordi vi ikke er så interessert i hva som skjer om mer enn 1000 år. Sherwood20 skriver:

Crucially, effective sensitivity (or other measures based on behavior within a century or two of applying the forcing) is more relevant to the time scales of greatest interest (i.e., the next century) than is equilibrium sensitivity[.]

Så, siden Sherwood20 var interessert i å kombinere klimasensitivitet beregnet på forskjellige måter, måtte de konvertere klimasensitiviteten ESS (som de beregnet basert på tidligere tiders klima) til klimasensitiviteten S, som de beregnet på de andre måtene.12)

Det er naturligvis en del usikkerhet knyttet til beregning av klimasensitivitet fra tidligere tiders varme og kalde perioder. Man vet ikke nøyaktig hvor varmt eller kaldt det var, og man vet heller ikke nøyaktig hvor mye mer eller mindre energi Jorden mottok den gangen sammenlignet med i dag eller omkringliggende tidsperioder. Usikkerheten for paleo evidence er ifølge Sherwood20 likevel ikke nødvendigvis større enn for de andre måtene å beregne klimasensitiviteten på.

Hva Sherwood20 kom frem til

Ifølge Sherwood20 er det ganske godt samsvar mellom alle de tre måtene å beregne klimasensitiviteten på. Den mest sannsynlige verdien for klimasensitiviteten beregnet på hver av de tre måtene er ikke veldig langt fra hverandre, som vi kan se på den nederste grafen under (selv om middelverdien for historical evidence er hele 5,82 grader):

Dette er figur 20 fra Sherwood20 og viser selve hovedresultatet fra studien. Figuren viser hvor sannsynlig ulike klimasensitiviteter (S) er for hver av de tre måtene å beregne klimasensitiviteten på – i tillegg til den kombinerte sannsynligheten (svart kurve). Jo høyere kurven går, jo større er sannsynligheten. Vi ser at mest sannsynlige verdi er litt under 3 grader, men middelverdien er ifølge Sherwood20 3,1 grader.

Gavin Schmidt, en av medforfatterne av Sherwood20, har også skrevet et sammendrag av studien på RealClimate.

Kritikk av Sherwood20

Nic Lewis er en britisk matematiker og fysiker som entret klimaforskningens verden etter å ha blitt inspirert av Stephen McIntyre. McIntyre hadde blant annet kritisert den kanskje viktigste studien bak hockeykølle-grafen fra IPCCs tredje hovedrapport fra 2001. (Se for eksempel mitt tidligere innlegg om blant annet hockeykølle-grafen.)

Nic Lewis er derfor ikke helt som andre klimaforskere. Han kan nok kalles en “skeptisk” klimaforsker. Ikke skeptisk i betydningen at han ikke mener CO2 er en klimagass, men skeptisk i betydningen at han mener vi ikke har noen stor klimakrise. Lewis’ forskning peker generelt mot en lavere klimasensitivitet enn IPCCs estimat.

Her kan du se et foredrag fra 2019 med Nic Lewis, hvor temaet er klimasensitivitet:

Lewis har publisert til sammen 10 studier om klimasensitivitet, og Sherwood20 refererte til studier hvor Lewis var eneste forfatter eller hovedforfatter 16 ganger. I september 2022 publiserte Lewis en ny studie, Objectively combining climate sensitivity evidence, hvor han diskuterer og korrigerer Sherwood-studien. Jeg vil kalle studien Lewis22.

Lewis skriver i en artikkel som oppsummerer Lewis22 at metodikken med å kombinere ulike måter å beregne klimasensitiviteten på er god:

This is a strong scientific approach, in that it utilizes a broad base of evidence and avoids direct dependence on [Global Climate Model] climate sensitivities. Such an approach should be able to provide more precise and reliable estimation of climate sensitivity than that in previous IPCC assessment reports.

Nic Lewis skriver i artikkelen at han etter 2015 har publisert flere studier som beskriver hvordan man kan kombinere ulike typer klimasensitivitet-estimater ved hjelp av en objektiv bayesiansk statistisk metode. Selv om Sherwood20 kjente godt til Nic Lewis’ studier, hadde Sherwood20 valgt en (enklere) såkalt subjektiv metode istedenfor. Ifølge Lewis kan det resultere i at usikkerhetsintervallene blir feil (sammenfaller ikke med sannsynlighetsintervaller). Han bestemte seg derfor for å gjenskape Sherwood20 ved hjelp av den objektive metoden. Samtidig ville han sjekke Sherwood20s estimater og data.

I Schmidts artikkel på RealClimate kan man riktignok lese at Sherwood20 mente den subjektive metoden var mer passende:

Attempts to avoid subjectivity (so-called ‘objective’ Bayesian approaches) end up with unjustifiable priors (things that no-one would have suggested before the calculation) whose mathematical properties are more important than their realism.

Ved blant annet å bruke den objektive metoden istedenfor den subjektive, ble resultatet en noe høyere klimasensitivitet. Middelverdien økte fra 3,10 til 3,23 grader. Lewis kommenterer:

As it happens, neither the use of a Subjective Bayesian method nor the flawed likelihood estimation 13) led to significant bias in Sherwood20’s estimate of [the climate sensitivity] S when all lines of evidence were combined. Nevertheless, for there to be confidence in the results obtained, sound statistical methods that can be expected to produce reliable parameter estimation must be used.

Men ved å rette på noen flere feil, og også bruke nyere data, blant annet fra IPCCs siste hovedrapport fra 2021, falt den mest sannsynlige verdien for klimasensitiviteten helt ned til 2,25 grader. Ved i tillegg å bruke det Lewis mener er bedre begrunnede data (ikke nyere), ble klimasensitiviteten justert ytterligere nesten 0,1 grader ned, til 2,16 grader.

Data-endringene fra Lewis22 er delvis begrunnet i studien og delvis i et vedlegg til studien (Supporting Information, S5). I tillegg til å diskutere data-verdier han endret på, diskuterer Lewis i vedlegget også en del data-verdier hvor han konservativt valgte å ikke gjøre endringer – selv om han mente Sherwoods verdier ikke var helt optimale.14) Det hadde altså vært mulig – hvis man gikk inn for det – å argumentere for en enda lavere klimasensitivitet.

Figuren under er hentet fra Lewis’ oppsummering av Lewis22, og viser resultatene han kom frem til sammenlignet med resultatene fra Sherwood20:

I (a), (b) og (d) representerer stiplet linje resultatene fra Sherwood20, mens heltrukket linje er resultater fra Lewis22. I (b) ser vi at de tre måtene å beregne klimasensitiviteten på gir nesten samme verdi for den mest sannsynlig klimasensitiviteten i Lewis22, mens variasjonen er noe større i Sherwood20. I tillegg er usikkerheten mindre (kurvene er smalere) i Lewis22, spesielt for historical evidence (som bruker temperatur og andre data fra de siste ca 150 årene). PETM er den varmeperioden Sherwood20 så bort fra i beregningen av den kombinerte klimasensitiviteten (PETM=Paleocene–Eocene Thermal Maximum for ca 56 millioner år siden, det var da opp mot ca 12 grader varmere enn nå).

For de spesielt interesserte så kommer det her en utlisting av noen av data-verdiene som ble endret i Lewis22 sammenlignet med Sherwood20 – i tillegg til klimasensitiviteten beregnet på hver av de tre måtene. For en fullstendig oversikt, se tabell 1, 2 og 3 i Lewis22, hvor også uendrede verdier er tatt med. Se også Supporting Information (S5) for mer informasjon om endringene som ble gjort. Verdiene i tabellen er middelverdier (median) og intervallene angir ett standard-avvik (68% sannsynlig).

For de enda mer spesielt interesserte, se utregninger og forklaringer i fotnote 15).

  Sherwood20 Lewis22
Økt klimapådriv 16) fra dobling av CO2 (W/m2) 4,00 ± 0,30 3,93 ± 0,30
Planck-feedback (W/m2/°C) -3,20 ± 0,10 -3,25 ± 0,10
Feedback fra lave skyer over havet, 0-60° fra ekvator (W/m2/°C) 0,37 ± 0,20 0,19 ± 0,20
Feedback fra alle skytyper som følge av endringen i forrige linje (W/m2/°C) 0,45 ± 0,33 0,27 ± 0,33
Total feedback inkludert Planck-feedback (W/m2/°C) -1,30 ± 0,44 -1,53 ± 0,44
Skaleringsfaktor ved beregning av klimasensitivitet fra process evidence (feedback-effekter) Utelatt 0.86 ± 0.09
Mønster-effekt (Historical pattern effect, W/m2/°C) 0,50 ± 0,30 0,35 ± 0,30
Hvor mye kaldere enn førindustriell temperatur det var i den kaldeste perioden i siste istid (°C) 5,0 ± 1,0 4,5 ± 1,0
Hvor mye varmere enn førindustriell temperatur det var i mid-Pliocene Warm Period (°C) 3,00 ± 1,00 2,48 ± 1,25
Forskjell i klimapådriv mellom førindustrielt nivå og den kaldeste perioden i siste istid (W/m2) -8,43 ± 2 -8,91 ± 2
Forskjell i klimapådriv fra aerosoler mellom 1850 og 2005-2015 (W/m2). En negativ verdi betyr her at aerosolene i atmosfæren har en mer kjølende effekt i dag enn på 1800-tallet (fordi det er mer av dem). -1,179 [-2,19, -0,61] -0,95 ± 0,55
Hvor mange ganger større klimasensitiviteten ECS er enn S 1,06 ± 0,20 1,135 ± 0,10
Klimasensitivitet beregnet ved å summere feedback-effekter (process evidence, °C) 3,08 2,21
Klimasensitivitet beregnet fra temperatur og andre data siste ca 150 år (historical evidence, °C) 5,82 2,16
Klimasensitivitet beregnet ut fra tidligere tiders klima (paleo evidence, °C) 3,02 2,10

For å finne ut hvor mye temperaturen kommer til å stige i fremtiden (sett bort fra naturlig variasjon), holder det ikke å vite hvor høy klimasensitiviteten er – man må også vite omtrent hvor mye klimagasser som vil slippes ut, så:

Hvor store blir utslippene?

Media og mange forskere har lenge brukt noe som kalles RCP8.5 som et business-as-usual-scenario for effekten av menneskelig aktivitet (inkludert utslipp av klimagasser), og mange gjør det fortsatt. RCP står for Representative Concentration Pathway, og det etterfølgende tallet er hvor mye større netto energitilførsel atmosfæren mottar i år 2100 sammenlignet med førindustrielt nivå, målt i W/m2.

Men RCP8.5 var aldri ment å være et business-as-usual-scenario. I en CarbonBrief-artikkel fra 2019 om nettopp RCP8.5 skriver Zeke Hausfather, en annen av medforfatterne av Sherwood20:

The creators of RCP8.5 had not intended it to represent the most likely “business as usual” outcome, emphasising that “no likelihood or preference is attached” to any of the specific scenarios. Its subsequent use as such represents something of a breakdown in communication between energy systems modellers and the climate modelling community.

Også Sherwood20 nevner at RCP8.5 ikke skal ses på som et business-as-usual scenario, men heller som et worst case scenario:

Note that while RCP8.5 has sometimes been presented as a “business as usual” scenario, it is better viewed as a worst case (e.g., Hausfather & Peters, 2020).

(Peters er forøvrig Glen Peters, som er forskningsleder for norske CICERO Senter for klimaforskning.)

RCPene blir ofte kalt scenarier, noe jeg også gjorde over. Men det er kanskje bedre å tenke på en RCP som en samling scenarier som alle resulterer i tilnærmet samme netto endring av energitilførsel i år 2100. Det er utviklet noen tusen ulike scenarier, og disse kan brukes som input til klimamodeller når de simulerer fremtidens klima.

Roger Pielke Jr, Matthew Burgess og Justin Ritchie publiserte tidlig i 2022 en studie (Pielke22) med tittelen Plausible 2005–2050 emissions scenarios project between 2 °C and 3 °C of warming by 2100. I Pielke22 ble de ulike scenariene fra IPCCs nest siste hovedrapport (fra 2013) kategorisert ut fra hvor godt de hadde klart å forutsi faktiske utslipp fra 2005 til 2020, samt hvor godt scenarienes fremtidige utslipp stemte med Det Internasjonale Energibyråets prognoser frem til 2050. Under forutsetningen at det er scenariene som stemmer best med faktiske og antatte utslipp som også vil stemme best frem mot år 2100, fant de at RCP3.4 er den mest sannsynlige (eller plausible) RCPen.

Disse scenariene (RCP3.4) er stort sett kompatible med en temperaturøkning på mellom 2 og 3 grader fra førindustriell tid til år 2100 – med 2,2 grader som middelverdi. Gjennomsnittstemperaturen fra førindustriell tid til i dag har økt med ca 1,2 grader, så middelverdien på 2,2 grader tilsvarer en temperaturøkning fra i dag til 2100 på ca 1,0 grader.

Pilke Jr har senere også sett på scenariene fra IPCCs siste hovedrapport (fra 2021). Dette snakket han om i et foredrag i november 2022 (54:03-1:06:16), og middelverdien for disse nyere scenariene er ifølge Pielke Jr 2,6 grader (det vil si 1,4 graders temperaturøkning fra i dag til år 2100). Jeg vil bruke denne mer konservative verdien videre istedenfor verdien fra studien, siden den er nyere.

Pielke Jr sier i foredraget at RCP4.5 nå må ses på som et høy-utslippsscenario, mens RCP8.5 og RCP6.0 er usannsynlige:

The high emission scenarios are clearly implausible […]. What’s a high emission scenario? Anything over 6 W/m2 […].

RCP 4.5 and the SSP2-4.5 are plausible high emission scenarios. I know in the literature they’re often used to represent mitigation success. Today I think we can say based on this method that they’re in fact high-end scenarios. A business as usual – or consistent with current policy – scenario is a 3.4 W/m2 scenario. I will say that scenario is almost never studied by anyone.

Pielke22 nevner ikke klimasensitivitet, men middelverdien for klimamodellenes klimasensitivitet (ECS) i IPCCs siste hovedrapport er 3,74 grader. ECS er trolig høyere enn klimasensitiviteten (S) som Sherwood20 og Lewis22 beregnet. 6% høyere ifølge Sherwood20, 13,5% høyere ifølge Lewis22. Hvis vi bruker Lewis22, tilsvarer en ECS på 3,74 grader en S på 3,30 grader.

Hvis klimasensitiviteten S heller er i nærheten av 2,16 grader, som Lewis22 kom frem til, vil temperaturøkningen fra i dag til år 2100 være ca 35% lavere enn det Pielke Jr kom frem til. Det betyr at temperaturøkningen fra i dag blir 0,9 grader istedenfor 1,4 grader (0,9 grader fra i dag tilsvarer 2,1 grader fra førindustriell temperatur).

RCP3.4 forutsetter riktignok utstrakt bruk av CO2-fangst og/eller fjerning av CO2 fra atmosfæren i andre halvdel av århundret. Pielke22 skriver at de ikke har gjort noen vurdering på om det er gjennomførbart:

Importantly, in the scenarios our analysis identifies as plausible, future decarbonization rates accelerate relative to the present, and many include substantial deployment of carbon removal technologies in the latter half of the century, the feasibility of which our analysis does not assess.

Den teknologiske utviklingen har i nyere tid vært veldig rask, så min mening er at dette helt sikkert vil bli teknisk mulig, men kanskje vil det være mer effektivt å begrense en uønsket temperaturøkning på andre måter, for eksempel ved å manipulere skydekket, som Bjørn Lomborg foreslår i et intervju på Econlib (spol frem til 8:35 og hør i 2 minutter eller les i fotnote 17)).

Også New York Times hadde nylig en lang artikkel, Beyond Catastrophe – A New Climate Reality Is Coming Into View, hvor de skriver at Jorda ser ut til å være på vei mot en 2-3 graders oppvarming fra 1800-tallet, istedenfor 4-5 grader, som var frykten tidligere. 2-3 grader er altså akkurat det samme som Pielke22 konkluderte med.

I New York Times-artikkelen kommer det frem at Hausfather mener at ca halvparten av nedgangen i forventet temperaturøkning skyldes at man tidligere brukte et urealistisk scenario (RCP8.5), mens den andre halvparten kommer fra “technology, markets and public policy“, blant annet at det det har vært en raskere utvikling innen fornybar energi enn forventet.

Hvor mye vil temperaturen stige?

Figur 1 (b) i Sherwood20 (til høyre under) viser hvor mye temperaturen trolig vil stige mellom 1986-2005 og 2079-2099, avhengig av klimasensitivitet og RCP-scenario. Perioden er drøyt 16 år lenger enn de nå 77 årene frem til år 2100, så temperatur-økningen som følge av klimagass-utslipp frem til 2100 vil være lavere enn grafen antyder – ca 18% lavere hvis vi forutsetter lineær temperaturøkning.

Vi kan da lese av i grafen 18) at hvis RCP4.5 er riktig utslipps-scenario, vil temperaturen stige med ca 1,8 grader mellom 1986-2005 og 2079-2099 hvis klimasensitiviteten er 3,1 grader. For å finne hvor mye temperaturen vil stige fra i dag til år 2100, trekker vi 18% fra 1,8 grader og havner på ca 1,5 grader.

Med klimasensitiviteten fra Lewis22 på 2,16 grader og scenariet RCP4.5, leser vi av ca 1,25 grader i grafen, noe som tilsvarer en 1,0 graders temperaturøkning fra i dag frem mot år 2100.

RCP3.4 er ikke tatt med i grafen, men vi kan anta at temperaturøkningen for RCP3.4 vil være et par tidels grader lavere enn for RCP4.5, så kanskje 0,7-0,8 grader, som også stemmer ganske bra med det Pielke Jr fant etter at vi justerte med klimasensitiviteten fra Lewis22 (0,9 grader).

0,8 grader tilsvarer en temperatur-økning på 2,0 grader siden andre halvdel av 1800-tallet og er identisk med “2-graders-målet”. 2,0 grader er også akkurat innenfor New York Times-intervallet på 2-3 grader, hvor også førindustriell temperatur var utgangspunktet.

Selv om Lewis22 i dag kanskje gir det beste estimatet for klimasensitiviteten, er det ikke en fasit. Store deler av justeringen av Sherwood20s estimat skyldtes bruk av nyere data, og enda nyere data vil i fremtiden føre til at klimasensitiviteten justeres opp eller ned fra 2,16 grader.

Og Nic Lewis påpeker selv at:

This large reduction relative to Sherwood et al. shows how sensitive climate sensitivity estimates are to input assumptions.

og:

This sensitivity to the assumptions employed implies that climate sensitivity remains difficult to ascertain, and that values between 1.5°C and 2°C are quite plausible.

Det blir spennende å se hva forfatterne av Sherwood20 etter hvert har å si om Lewis22.


Fotnoter

1) Zeke Hausfather skriver på CarbonBrief at for at CO2-nivået skal holde seg konstant etter en dobling av CO2-innholdet i atmosfæren, kreves det at det fortsatt slippes ut CO2. Hvis våre netto klimagassutslipp går til 0, vil CO2 nivået i atmosfæren synke relativt raskt. Og temperaturen vil holde seg på samme nivå over lang tid hvis vi ser bort fra naturlige svingninger.

2) Det er kanskje ikke helt riktig å si at temperaturen vil stige 1,2 grader uten feedback-effekter, for den såkalte Planck-feedback’en brukes i utregningen. Klimasensitiviteten kan da beregnes på følgende måte:

{"aid":null,"font":{"color":"#000000","family":"Arial","size":"14"},"code":"$$ECS_{noFeedback}\\,=\\,-\\frac{ΔF_{2xCO2}}{λ_{Planck}}$$","type":"$$","id":"13","backgroundColorModified":false,"backgroundColor":"#FFFFFF","ts":1670102750556,"cs":"w9TVSjy5fjgzhI4lBw1dwA==","size":{"width":279.25,"height":51.5}}

Hvis vi setter inn verdier fra studiene vi etter hvert skal se på, får vi for Sherwood20 (ΔF2xCO2 = 3,93 W/m2 og λPlanck = -3,20 W/m2/°C) at ECSnoFeedback = 1,25°C. For den andre studien, Lewis22 (ΔF2xCO2 = 3,93 W/m2 og λPlanck = -3,25 W/m2/°C) blir ECSnoFeedback = 1,21°C.

3) Man har tradisjonelt brukt gjennomsnittet av 1850-1900 som førindustrielt nivå. Sherwood20 og Lewis22 bruker gjennomsnittet av 1861-1880. IPCC har begynt å bruke 1750.

4) Hvis man summerer alle feedback-effektene inkludert Planck-feedback’en, får man et negativt tall. Men summen av de andre feedback-effektene er med stor sannsynlighet positiv. Og hvis summen av de andre feedback-effektene er positiv, betyr det at klimasensitiviteten er høyere enn 1,2 grader.

5) Her har jeg strengt tatt sammenlignet to ulike typer klimasensitivitet. IPCC beregnet klimasensitiviteten ECS, mens Sherwood beregnet klimasensitiviteten S (Effective Climate Sensitivity). ECS er trolig litt høyere enn S – 6% høyere ifølge Sherwood20, 13,5% høyere ifølge Lewis22, studien som korrigerer Sherwood20.

6) Fra Sherwood20:

Among these distinct feedbacks, those due to clouds remain the main source of uncertainty in λ, although the uncertainty in the other feedbacks is still important. 

λ (lambda) sier hvor sterk en feedback-effekt er. Positiv λ betyr at den tilhørende feedback-effekten øker klimasensitiviteten, negativ λ gjør det motsatte. Hvis man vet λ for alle feedback-effektene, kan man summere dem for deretter å beregne klimasensitiviteten.

7) Sherwood20 skriver:

However, uncertainty in radiative forcing [i løpet av de siste 150 årene] is dominated by the contribution from anthropogenic aerosols, especially via their impact on clouds, which is relatively unconstrained by process knowledge or direct observations (Bellouin et al., 2020).

8) Andrew Dessler har vært hovedforfatter og medforfatter i studier om mønster-effekten. Han forklarer mønster-effekten i forbindelse med committed warming i et par youtube-videoer (én kort og én lang – selv om han ikke eksplisitt bruker uttrykket pattern effect i den korte videoen).

Et eksempel Dessler bruker for å illustrere mønster-effekten er fra havet rundt Antarktis, hvor det har vært lite oppvarming og relativt mye skyer som har hatt en avkjølende effekt. Når temperaturen øker, forventes det mindre skyer, slik at oppvarmingen vil gå fortere.

Nic Lewis (som kritiserte Sherwood20) har også publisert en studie som kritiserer studien Dessler snakker om (Zhou et al 2021) i videoene. Lewis har også skrevet et innlegg om disse studiene på Climate Etc, Judith Currys klima-blogg.

En av tingene Lewis påpeker gjelder datasettet for havtemperatur (SST – Sea Surface Temperature). Selv om datasettet som er brukt i studien tilsier en relativt stor mønster-effekt, gir andre datasett en vesentlig mindre mønstereffekt. Grunnen til forskjellene er at man har hatt relativt lav dekningsgrad for målinger av havtemperatur. Usikkerheten er derfor stor.

Lewis kritiserer også studien for ikke å skille mellom forced og unforced mønstereffekt. Den delen som er tvunget (forced) går på effekten av klimagasser, mens den utvungne (unforced) går på naturlig variasjon (internal variability). Og de to har ulike implikasjoner når det gjelder “innlåst” fremtidig oppvarming (committed warming). Mens mønstereffekten fra klimagasser vil ha liten betydning for oppvarmingen dette århundret, vil mønstereffekten fra naturlig variasjon ha større betydning frem til år 2100.

Lewis fant at mønstereffekten fra naturlig variasjon er veldig nærme null hvis man bruker et annet havtemperatur-datasett enn det Zhou et al brukte, og hvis man samtidig bruker en referanse-periode utenfor perioden ca 2000-2014, da det på grunn av naturlig variasjon var unormalt lav global oppvarming. Det er altså usikkert om mønstereffekten vil påvirke temperaturen i særlig grad dette århundret.

9) Om mønster-effekten, fra IPCCs siste hovedrapport (7.4.4.3):

[T]here is low confidence that these features, which have been largely absent over the historical record, will emerge this century[.]

10) Fra Wikipedia:

Although the term “climate sensitivity” is usually used for the sensitivity to radiative forcing caused by rising atmospheric CO2, it is a general property of the climate system. Other agents can also cause a radiative imbalance. Climate sensitivity is the change in surface air temperature per unit change in radiative forcing, and the climate sensitivity parameter is therefore expressed in units of °C/(W/m2). Climate sensitivity is approximately the same whatever the reason for the radiative forcing (such as from greenhouse gases or solar variation). When climate sensitivity is expressed as the temperature change for a level of atmospheric CO2 double the pre-industrial level, its units are degrees Celsius (°C).

11) Sherwood20 bruker verdien 4,00±0,30 W/m2 mens Lewis22 bruker 3,93±0,30 W/m2.

En del skeptikere har argumentert for at atmosfærens CO2-opptak er mettet og at verdien dermed må være nærmere 0, men ifølge Nic Lewis er det feil. Følgende sitat er fra et foredrag han holdt i 2019 (14:00):

Another point that is often argued is that the absorption by carbon dioxide is saturated – that it can’t get any stronger. Unfortunately, that is not the case. However, it is a logarithmic relationship, approximately, so it increases slower and slower. Roughly speaking, every time you double carbon dioxide level, you get the same increase in the effect it has in reducing outgoing radiation. And this decrease in outgoing radiation is called a radiative forcing, and it’s just under 4 W/m2 of flux for every time you double carbon dioxide. And again, this is pretty well established.

Og litt tidligere i samme foredrag (11:12):

The black is the measured levels – this is measured by satellite at the top of the atmosphere. […] And the red lines are from a specialized radiative transfer model, and you can see how accurately they reproduce the observations. And what that reflects is that this is basic radiative physics, it’s very soundly based. There’s no point in my view disputing it because the evidence is that the theory is matched by what’s actually happening.

Grafen han snakker om er denne:

Grafen viser hvordan CO2 og andre gasser i atmosfæren absorberer infrarødt lys fra bakken ved ulike bølgelengder i fravær av skyer (over Sahara). Uten atmosfæren ville utstrålingen fulgt den øverste stiplede linjen markert med temperaturen 320 K (47°C).

12) Ifølge Sherwood20 er ECS 6% høyere enn S, mens ESS rundt varmeperioden mPWP var hele 50% høyere enn ECS igjen. Rundt den vesentlig varmere perioden PETM er klimasensitivitetene ESS og ECS ganske like fordi det da ikke var store isdekte områder på Jorda, og dermed heller ikke store endringer i isdekke.

Lewis skriver at for LGM kunne man beregne klimasensitiviteten ECS istedenfor ESS ved å behandle trege feedback-effekter som klimapådriv (forcings) istedenfor som feedback-effekter:

A significant advantage of the LGM transition is that, unlike more distant periods, there is proxy evidence not only of changes in temperature and CO2 concentration but also of non-CO2 forcings, and that enables estimation of the effects on radiative balance of slow (ice sheet, etc.) feedbacks, which need to be treated as forcings in order to estimate ECS (and hence S) rather than ESS.

13) Sherwood20 hadde brukt en metode for å beregne sannsynligheten for ulike klimasensitiviteter som ikke var gyldig i alle sammenhenger. Den forutsatte blant annet normal-fordeling av alle input-parametre, noe som ikke var tilfelle for Historical evidence (data for siste 150 år), der klimapådriv fra aerosoler ikke var normalfordelt.

Sherwood20s metode ga for lav sannsynlighet for høye klimasensitivitets-verdier:

Stiplet linje viser her Sherwood20s resultater for Historical evidence, mens heltrukket linje er fra Lewis22.

Denne feilen i Sherwood20 førte til at middelverdien for den kombinerte klimasensitiviteten økte fra 3,10 til 3,16 grader. (Økningen fra 3,16 til 3,23 grader skyldtes bruk av den objektive statistiske metoden.)

Se Likelihood estimation for S i Lewis22, Supporting Information (S2) og Appendix B i Lewis’ sammendrag av Lewis22 for mer informasjon.

14) Konservative valg i Lewis22 (Supporting Information) – S20 er Sherwood20:

I make no changes to S20’s assessments of other cloud feedbacks. However, I note that Lindzen and Choi (2021) cast doubt on the evidence, notably from Williams and Pierrehumbert (2017), relied upon by S20 that tropical anvil cloud feedback is not, as previously suggested (Lindzen and Choi 2011; Mauritsen and Stevens 2015), strongly negative.

The resulting median revised total cloud feedback estimate is 0.27 − almost double the 0.14 for nine CMIP6 GCMs that well represent observed interhemispheric warming (Wang et al. 2021).

S20’s GMST [=Global Mean Surface Temperature] estimate was infilled by kriging, which does not detect anisotropic features. Recently, a method that does detect anisotropic features was developed, with improved results (Vaccaro et al. 13 2021a,b). Infilling the same observational dataset as underlies S20’s infilled estimate, the improved method estimates a 9% lower GMST increase. Nevertheless, I retain S20’s estimate of the GMST rise, resulting in a GMAT [=Global Mean Air Temperature] ΔT estimate of 0.94 ± 0.095 [°C].

S20’s 0.60 Wm−2 estimate of the change in planetary radiative imbalance equals that per AR6. However, AR6 (Gulev et al. 2021 Figure 2.26(b)) shows that, excluding series that are outliers, the AR6 0-2000m [Ocean Heat Content] estimate is middle-of-the-range in 2018 but at its bottom in 2006, hence yielding an above average increase over that period. Nevertheless, I retain S20’s estimate.

Moreover, Golaz et al. (2019) found that an advanced [Global Climate Model] with historical aerosol [Effective Radiative Forcing] of −1.7 Wm−2, tuned on the pre industrial climate, would only produce realistic GMAT projections if the aerosol forcing is scaled down to ~−0.9 Wm−2 (and, in addition, its climate sensitivity is halved).

Conservatively, in the light of the foregoing evidence pointing to aerosol forcing being weaker than implied by simply revising B20’s βlnL−lnN estimate, I adopt a modestly weakened aerosol ERF estimate of −0.95 ± 0.55 Wm−2 over, as in B20, 1850 to 2005-15. This implies a 5–95% uncertainty range of −1.85 to −0.05 Wm−2, which has the same lower bound as AR6’s estimate, and is likewise symmetrical.

Skalert til perioden 1861-1880 til 2006-2018, blir middelverdien 0,86 istedenfor 0,95, ifølge Lewis22.

In two [Global Climate Models], Andrews et al. (2018) found a 0.6 weakening in [the pattern effect] when using [a newer sea-ice dataset]. Although the [newer] sea-ice dataset […] is no doubt imperfect […], its developers argue that it is an improvement on [the earlier version]. However, I consider that there is too much uncertainty involved for any sea-ice related reduction to be made when estimating the unforced Historical pattern effect.

In view of the evidence that pattern effect estimates from [Atmospheric Model Intercomparison Project II]-based simulations are likely substantially excessive, and that the unforced element is probably minor and could potentially be negative, it is difficult to justify making a significantly positive estimate for the unforced element. However, a nominal 0.1 ± 0.25 is added to the 0.25 ± 0.17 forced pattern effect estimate, which reflects the substantial uncertainty and allows not only for any unforced pattern effect but also for the possibility that some other element of the revised Historical evidence data-variable distributions might be misestimated.

I revise S20’s central LGM [=Last Glacial Maximum] cooling estimate of −5 [°C] to −4.5 [°C], primarily reflecting, less than fully, the −4.2 [°C] adjusted mean ΔTLGM estimate of the sources cited by S20, and increase the standard deviation estimate to 1.25 [°C] so as to maintain the same –7 [°C] lower bound of the 95% uncertainty range as S20’s.

S20 use the single year 1850 as their preindustrial reference period for GHG concentrations, whereas for observational estimates of temperature change preindustrial generally refers to the average over 1850−1900. For consistency, the S20 GHG [=Greenhouse Gas] forcing changes should therefore use mean 1850−1900 GHG concentrations. Doing so would change the CO2 ERF from –0.57x to –0.59x ΔF2xCO2, as well as marginally changing the CH4 and N2O ERFs. However, conservatively, I do not adjust S20’s LGM forcing estimates to be consistent with the LGM ΔT measure.

S20 adopt the estimate of vegetation forcing in the Kohler et al. (2010) comprehensive assessment of non-greenhouse gas LGM forcing changes, but use a central estimate of –1.0 Wm−2 for aerosol (dust) forcing in place of Kohler et al.’s –1.88 Wm−2. This seems questionable; Friedrich and Timmermann (2020) adopt Kohler et al.’s estimate, while pointing out that estimates of its glacial-interglacial magnitude vary from ~0.33 to ~3.3 Wm−2. I nevertheless accept S20’s estimate of dust forcing[.]

S20 assume that climate feedback in equilibrium (λ’) strengthens by α for every -1 [°C] change in ΔT, resulting in the 0.5 α TLGM2 term in (11), reducing LGM-estimated ECS. Contrariwise, Zhu and Poulsen (2021) found that ocean feedback caused 25% higher LGM-estimated [climate sensitivity] ECS. Moreover, a significant part of the reduction in mean surface air temperature at the LGM is due to ice-sheet caused increased land elevation, which would weaken λ’ compared to in non-glacial climates. Although S20’s [α = 0,1 ± 0,1] estimate appears questionable, I retain it.

Although the Tierney et. al (2019) 1.4 [°C] tropical SST warming estimate appears more reliable than S20’s 1.5 [°C], I retain the latter but multiply it by the 1.65 PlioMIP2 ratio, giving a revised GMAT ΔTmPWP of 2.48 [°C].

S20 assessed a [2400 ± 700] ppm distribution for CO2 concentration in the PETM relative to a baseline of 900 ppm, implying a [1.667 ± 0.778] ΔCO2PETM distribution. That covers, within its 90% uncertainty range, a concentration ratio range (1 + ΔCO2PETM) of 1.39 to 3.95. The CO2 concentration estimates considered by S20, even taking extremes of both their PETM and Eocene ranges, constrain (1 + ΔCO2PETM) within 1.4 to 5. Using instead that range would lower PETM based S estimates. Nevertheless, I retain S20’s ΔCO2PETM distribution.

While Meinshausen et al. assume a fixed ratio of CO2 ERF to stratospherically-adjusted radiative forcing, there is modeling evidence that fast adjustments become more positive at higher temperatures (Caballero and Huber 2013), which would further increase CO2 ERF change in the PETM. I make no adjustment for this effect.

To account for forcing from changes in CH4 concentrations, S20 apply the same 0.4 fCH4 factor to the CO2 forcing change as for the mPWP, with doubled uncertainty, although noting that the tropospheric lifetime of CH4 could be up to four times higher given sustained large inputs of CH4 into the atmosphere (Schmidt and Shindell 2003). I retain S20’s fCH4 distribution, although doing so may bias estimation of S upwards.

S20 assume that ESS [=Earth System Sensitivity] for the PETM was the same as present ECS, representing uncertainty regarding this by deducting a [0 ± 0,5] adjustment (β) from ESS feedback when estimating ECS feedback, λ’. Assuming zero slow feedbacks in the PETM (so ESS equals ECS) may be reasonable, given the lack of evidence and the absence of major ice sheets. However, Caballero and Huber (2013) and Meraner et al. (2013) both found, in modeling studies, substantially (~50%) weaker climate feedback for climates as warm as the PETM. Zhu et al (2019) found, in a state-of-the-art GCM, that ECS was over 50% higher than in present day conditions, with little of the increase being due to higher CO2 ERF. I therefore consider that it would be more realistic to use a positive central estimate for β. Nevertheless, I retain S20’s estimate.

15) Beregninger av klimasensitivitet:

Verdier med blå tekst er samme som i IPCCs siste hovedrapport (fra 2021). Verdier med gul bakgrunn i Lewis22 er konservative valg, se forrige fotnote. Mindre konservative valg ville resultert i en lavere klimasensitivitet. Endringene i Lewis22 er diskutert under Review and revision of S20 data-variable assumptions i Lewis22 og under S5 i Supporting Information.

Summere feedback-effekter (process evidence):

{"backgroundColorModified":false,"backgroundColor":"#FFFFFF","type":"$","code":"$S\\,=\\,-𝛾\\,\\,\\frac{ΔF_{2xCO2}}{λ}$","font":{"size":"14","color":"#000000","family":"Arial"},"aid":null,"id":"1","ts":1669928251925,"cs":"0lkyjOw4SUNer4G0xfA51w==","size":{"width":160.16666666666666,"height":29.333333333333332}}

  Sherwood20 Lewis22
λVanndamp + lapse rate 1,15 ± 0,15 Som Sherwood20
λSkyer 0,45 ± 0,33 0,27 ± 0,33
λAlbedo-endringer 0,30 ± 0,15 Som Sherwood20
λPlanck -3,20 ± 0,10 -3,25 ± 0,10
λAndre 0,00 ± 0,18 Som Sherwood20
λ (Sum, feedback-effekter, W/m2/°C) -1,30 ± 0,44 -1,53 ± 0,44
𝛾 (justeringsfaktor) Utelatt (1,00) 0,86 ± 0,09
ΔF2xCO2 (W/m2) 4,00 ± 0,30 3,93 ± 0,30
Klimasensitivitet (S) 3,08 2,21

Endringen i skyenes feedback-effekt (λSkyer) kommer av at Lewis har satt feedback-styrken for lave skyer over havet (0-60° fra ekvator) til 0,19 W/m2/°C, istedenfor 0,37, som var verdien Sherwood20 brukte. Verdien på 0,19 kommer fra studien Myers et al 2021, som ble utgitt året etter Sherwood20. Se Supporting Information (5.1.3) for mer detaljer.

De nyeste klimamodellene (CMIP6) gir en Planck-feedback (λPlanck) på -3,3 W/m2/°C. Lewis22 justerte λPlanck halvveis fra Sherwood20s estimat mot denne verdien. Se Supporting Information (5.1.2).

Formelen for beregning av klimasensitiviteten ECS er ifølge IPCC:

{"font":{"size":"14","color":"#000000","family":"Arial"},"aid":null,"backgroundColor":"#FFFFFF","id":"5","code":"$$ECS\\,=\\,-\\frac{ΔF_{2xCO2}}{λ}$$","type":"$$","backgroundColorModified":false,"ts":1670767186833,"cs":"X7nj47G99ItTFq6sBAYn3g==","size":{"width":198.5,"height":46.333333333333336}}

Sherwood20 hadde brukt denne formelen også for å beregne klimasensitiviteten S, samtidig som de mente at ECS var 6% høyere enn S. Dermed ble S for høy i Sherwood20. Dette er min forenklede forklaringen på hvorfor man må justere S med en faktor 𝛾 (gamma). Lewis22 kom frem til at ECS var 13,5% høyere enn S, noe som tilsvarer 𝛾 = 0,88.

Den korrekte forklaringen fra Lewis22 er litt mer avansert. Istedenfor å konvertere ECS til S, brukes 𝛾 til å modifisere ΔF2xCO2 (også kalt F2xCO2), så det er en annen måte å tenke på, selv om effekten blir den samme. Lewis22 brukte tall fra klimamodeller til å finne at 𝛾 = 0,86. Forklaringen er for avansert for meg, men kan leses i Lewis22 (under F2xCO2 and its scaling when using Eq. (4)) og Supporting Information (S1).

Data for siste 150 år (historical evidence):

{"type":"$$","aid":null,"font":{"family":"Arial","color":"#000000","size":"14"},"id":"3","backgroundColor":"#FFFFFF","code":"$$λ_{hist}\\,=\\,\\frac{ΔN\\,-\\,ΔF}{ΔT}$$","backgroundColorModified":false,"ts":1669929025940,"cs":"K5lUmGmv5uCdzh363AgRQA==","size":{"width":188,"height":45.5}}

{"aid":null,"font":{"family":"Arial","size":"14","color":"#000000"},"backgroundColor":"#FFFFFF","backgroundColorModified":false,"code":"$S\\,=\\,-𝛾\\,\\,\\frac{ΔF_{2xCO2}}{λ_{hist}\\,+\\,Δλ}\\,$","type":"$","id":"1","ts":1669928698859,"cs":"2dZGfsASnExSRCF9w+0xCg==","size":{"width":168.5,"height":33.00000000000002}}

  Sherwood20 Lewis22
ΔFCO2 1,731 1,724
ΔFOther well-mixed greenhouse gases 0,969 1,015
ΔFOzone 0,298 0,400
ΔFLand use -0,106 -0,150
ΔFStratospheric water vapor 0,064 0,041
ΔFBlack carbon on snow and ice 0,020 0,109
ΔFContrails og induced cirrus 0,048 Som Sherwood20
ΔFSolar 0,017 0,019
ΔFVolcanic -0,113 0,044
ΔFAerosols -1,104 -0,860
ΔF (sum, forskjell i klimapådriv, W/m2) 1,824 2,390
ΔN (W/m2) 0,600 ± 0,183 Som Sherwood20
ΔT (eller ΔTGMAT, °C) 1,03 + 0,085 0,94 ± 0,095
λhist -1,188 -1,915
𝛾 (justeringsfaktor) Utelatt (1,00) 0,86 ± 0,09
ΔF2xCO2 4,00 ± 0,30 3,93 ± 0,30
Δλ (mønster-effekten/pattern effect) 0,500 ± 0,305 0,350 ± 0,305
Klimasensitivitet (S) 5,82 2,16

ΔF, ΔN og ΔTGMAT refererer til forskjeller mellom 1861-1880 og 2006-2018. ΔF er forskjell i klimapådriv. ΔN er endring i strålings-ubalanse ved toppen av atmosfæren, målt i W/m2. Positiv ΔN betyr at strålings-ubalansen er større nå enn på slutten av 1800-tallet, og at Jorden mottar mer netto energi nå enn da.

Vi finner ikke de nøyaktige ΔF-verdiene fra Lewis22 igjen i IPCCs siste hovedrapport. Grunnen til det er at Sherwood20 og Lewis22 ser på perioden 1861-1880 til 2006-2018, mens IPCC har vært mer interessert i 1750 til 2019. Men heldigvis fins det også tall for både 1850 og flere år etter 2000, så Lewis har derfor kunnet beregne ΔF-verdier med god nøyaktighet – basert på tallene fra IPCC (se tabell AIII.3 her).

GMAT (Global Mean near-surface Air Temperatur) er gjennomsnittlig luft-temperatur over bakken. GMST (Global Mean Surface Temperature) er det samme, men bruker havtemperatur istedenfor lufttemperatur over havet. Sherwood20 konverterte ΔTGMST (0,94°C) til ΔTGMAT (1,03°C) på bakgrunn av resultater fra klimamodeller som tilsier at GMAT er høyere enn GMST. Lewis påpeker at en høyere GMAT enn GMST ikke er observert i virkeligheten, og at IPCCs middelverdi for forskjell mellom GMST og GMAT er 0. Lewis22 har derfor satt ΔTGMAT = ΔTGMST. (Se Supporting Information, 5.2.1.)

Den relativt store endringen i ΔFAerosols er begrunnet i Supporting Information, 5.2.3.

Endringen Lewis22 gjorde for mønster-effekten (Δλ) er blant annet begrunnet med at de fleste datasett for havtemperatur peker mot at den “utvungne” (unforced – som har med naturlig variasjon å gjøre, se fotnote 8) komponenten av mønster-effekten er veldig liten. Se Supporting Information, 5.2.4.

Tidligere tiders varme- eller kuldeperioder (paleo evidence):

1. Kaldeste perioden i siste istid (Last Glacial Maximum, LGM):

{"code":"$λ\\,=\\,-\\left(1\\,+\\,ζ\\right)\\,\\left(\\frac{ΔF}{ΔT}\\,-\\,\\frac{α}{2}ΔT\\right)$","id":"4","font":{"family":"Arial","size":"14","color":"#000000"},"type":"$","backgroundColorModified":false,"backgroundColor":"#FFFFFF","aid":null,"ts":1670066796360,"cs":"eudXkAKdafkU2vgq7fqk7g==","size":{"width":290.5,"height":39.5}}

{"id":"5","code":"$$S\\,=\\,-\\frac{ΔF_{2xCO2}}{λ}$$","backgroundColorModified":false,"font":{"size":"14","color":"#000000","family":"Arial"},"aid":null,"backgroundColor":"#FFFFFF","type":"$$","ts":1670066978670,"cs":"R3BkKA5gppevoibrQVAwaQ==","size":{"width":164.83333333333334,"height":46.333333333333336}}

  Sherwood20 Lewis22
ζ (hvor mye høyere ECS er enn S) 0,06 ± 0,20 0,135 ± 0,10
ΔFCO2 -0,57 x ΔF2xCO2 = -2,28 -0,57 x ΔF2xCO2 = -2,24
ΔFCH4 -0,57 Som Sherwood20
ΔFN2O -0,28 Som Sherwood20
ΔFLand ice and sea level -3,20 -3,72
ΔFVegetation -1,10 Som Sherwood20
ΔFDust (aerosol) -1,00 Som Sherwood20
ΔF (forskjell i klimapådriv, W/m2) -8,43 ± 2,00 -8,91 ± 2,00
ΔT (forskjell i temperatur) -5,0 ± 1,00 -4,5 ± 1,00
α 0,10 ± 0,10 Som Sherwood20
λ -1,522 -1,992
ΔF2xCO2 4,00 ± 0,30 3,93 ± 0,30
Klimasensitivitet (S) 2,63 1,97

Sherwood20 beregnet ζ (zeta; hvor mye høyere klimasensitiviteten ECS er enn klimasensitiviteten S) ved å se på datasimuleringer hvor CO2-nivået raskt 4-dobles (abrupt 4xCO2 simulations). Det resulterende klimapådrivet (ΔF4xCO2) har de så delt på 2 for å finne klimapådrivet for en dobling av CO2-nivå (ΔF2xCO2). Lewis22 skriver at selv om det er vanlig å dele på 2, er det vanskelig å forsvare det valget siden denne faktoren har blitt estimert til å være ca 2,10, ikke 2,0. Lewis brukte uansett ikke denne måten å beregne ζ på – istedenfor hentet han ζ-verdien (0,135) direkte fra resultatene fra klimamodeller. Mer detaljer finner du under Climate Sensitivity Measures i Lewis22.

ΔF og ΔT refererer til forskjellen i klimapådriv og temperatur mellom den kaldeste perioden i siste istid og førindustrielt.

Middelverdien for ΔT for studiene Sherwood20 refererte til var 4,2°C. Lewis22 justerte derfor Sherwood20s ΔT-estimat i retning av den verdien, fra 5,0 til 4,5°C. Se Supporting Information, 5.3.2.

Lewis22 endret ΔFLand ice and sea level fordi Sherwood20 hadde sett bort fra albedo-endringer som følge av lavere havnivå. Se Supporting Information, 5.3.2.

α (alfa) sier noe om hvordan klimasensitiviteten endrer seg ut fra hvilken tilstand Jorda befinner seg i. Det vi er mest interessert i er hva klimasensitiviteten er for den tilstanden Jorda befinner seg i nå og for eksempel frem til år 2100. Men for å beregne det på best mulig måte, må man også ta hensyn til hva klimasensitiviteten var i andre perioder man sammenligner med. Og dette er altså begrunnelsen for å ta med α i formelen. Klimasensitiviteten er muligens lavere i kalde perioder enn i varme. Lewis beholdt Sherwood20s estimat for α selv om han mente estimatet var tvilsomt (se forrige fotnote).

2. Mid-Pliocene Warm Period (mPWP)

{"type":"$$","font":{"size":"14","family":"Arial","color":"#000000"},"aid":null,"code":"$$ΔF_{CO2}\\,=\\,\\frac{\\ln\\left(\\frac{CO2}{284}\\right)}{\\ln\\left(2\\right)}\\,ΔF_{2xCO2}$$","backgroundColor":"#FFFFFF","backgroundColorModified":false,"id":"6","ts":1670067597268,"cs":"+RxVGZEvLibw9PqteRKXNg==","size":{"width":276,"height":58.75}}

{"aid":null,"id":"7","backgroundColorModified":false,"font":{"family":"Arial","color":"#000000","size":"14"},"type":"$$","backgroundColor":"#FFFFFF","code":"$$λ\\,=\\,-\\left(1\\,+\\,ζ\\right)\\,\\left(\\frac{ΔF_{CO2}\\,\\left(1\\,+\\,f_{CH4}\\right)\\,\\left(1\\,+\\,f_{ESS}\\right)}{ΔT}\\right)$$","ts":1670067745143,"cs":"IIsd4M9OVn0I9NFRaDuGSw==","size":{"width":470.6666666666667,"height":53}}

{"id":"5","code":"$$S\\,=\\,-\\frac{ΔF_{2xCO2}}{λ}$$","backgroundColorModified":false,"font":{"size":"14","color":"#000000","family":"Arial"},"aid":null,"backgroundColor":"#FFFFFF","type":"$$","ts":1670066978670,"cs":"R3BkKA5gppevoibrQVAwaQ==","size":{"width":164.83333333333334,"height":46.333333333333336}}

{"font":{"family":"Arial","size":"14","color":"#000000"},"type":"$$","backgroundColorModified":false,"backgroundColor":"#FFFFFF","code":"$$S\\,=\\,\\frac{1}{1\\,+\\,ζ}\\,\\,\\frac{\\ln\\left(2\\right)}{\\ln\\left(\\frac{CO2}{284}\\right)}\\,\\,\\frac{ΔT}{\\left(1\\,+\\,f_{CH4}\\right)\\,\\left(1\\,+\\,f_{ESS}\\right)}$$","id":"14","aid":null,"ts":1670227191617,"cs":"YyF+Y+IdzAiOIECnYfaL9w==","size":{"width":444.6666666666667,"height":58.666666666666664}}

  Sherwood20 Lewis22
CO2 375 ± 25 Som Sherwood20
ΔF2xCO2 4,00 ± 0,30 3,93 ± 0,30
ΔFCO2 1,604 1,576
ζ (hvor mye høyere ECS er enn S) 0,06 ± 0,20 0,135 ± 0,10
fCH4 0,40 ± 0,10 Som Sherwood20
fESS 0,50 ± 0,25 0,67 ± 0,40
ΔT 3,00 ± 1,00 2,48 ± 1,25
λ -1,190 -1,686
Klimasensitivitet (S) 3,36 2,33

ΔT og ΔFCO2 refererer til forskjell i henholdsvis temperatur og klimapådriv (fra CO2) mellom mPWP og førindustrielt. fCH4 er estimert klimapådriv fra metan i forhold til klimapådriv fra CO2 (0,40 betyr at klimapådrivet fra metan er 40% av klimapådrivet fra CO2). fESS er hvor mye høyere klimasensitiviteten ESS er enn klimasensitiviteten ECS. Tallet 284 i formelen angir førindustrielt CO2-nivå (ppm).

Lewis22 har brukt en nyere verdi for fESS enn Sherwood20. Sherwood20 hentet verdien 0,50 (eller 50%) fra The Pliocene Model Intercomparison Project, versjon 1 (PlioMIP1), som fokuserer på tidsepoken Pliocen. Verdien 0,67 (eller 67%) har Lewis22 hentet fra PlioMIP2, en nyere versjon av PlioMIP-prosjektet. Se Supporting Information, 5.3.3.

Endringen Lewis22 gjorde for ΔT er også gjort på bakgrunn av PlioMIP2. Tropiske temperaturer i mPWP var ca 1,5°C høyere enn førindustrielt. For å finne endring i global temperatur, har Sherwood20 ganget med 2. Global temperatur har nemlig endret seg omtrent dobbelt så mye som tropiske temperaturer siste 500 000 år. Men det er over 3 millioner år siden mPWP, og forholdene på Jorda var annerledes den gangen. I PlioMIP2 er det ved hjelp av klimamodeller beregnet at endringer i global temperatur var ca 1,65 ganger høyere enn endringer i tropisk temperatur. Lewis22 har brukt denne verdien og multiplisert 1,5°C med 1,65 istedenfor med 2. Dermed blir ΔT 2,48°C istedenfor 3,00°C. Se Supporting Information, 5.3.3.

3. Paleocene–Eocene Thermal Maximum (PETM):

{"code":"$$S\\,=\\,\\frac{\\frac{1}{f_{CO2nonLog}}}{\\left(1\\,+\\,ζ\\right)\\,\\left(\\frac{\\ln\\left(\\frac{CO2}{900}\\right)}{\\ln\\left(2\\right)}\\,\\frac{1\\,+\\,f_{CH4}}{ΔT}\\right)\\,-\\,β}$$","backgroundColor":"#FFFFFF","id":"8","aid":null,"backgroundColorModified":false,"font":{"color":"#000000","family":"Arial","size":"14"},"type":"$$","ts":1670070377501,"cs":"scmxEwpalDPrQFazx01POA==","size":{"width":341,"height":81.5}}

Med β = 0 kan vi forenkle til:

{"backgroundColor":"#FFFFFF","font":{"color":"#000000","family":"Arial","size":"14"},"code":"$$S\\,=\\,\\frac{\\ln\\left(2\\right)}{\\ln\\left(\\frac{CO2}{900}\\right)}\\,\\frac{ΔT}{\\left(1\\,+\\,ζ\\\\\\right)\\,\\left(1\\,+\\,f_{CH4}\\right)\\,f_{CO2nonLog}}$$","aid":null,"id":"9","backgroundColorModified":false,"type":"$$","ts":1670103899754,"cs":"ExJeWuSt1itutLDJKNgf/w==","size":{"width":434,"height":58.666666666666664}}

  Sherwood20 Lewis22
ζ 0,06 ± 0,20 0,135 ± 0,10
ΔT 5,00 ± 2,00 Som Sherwood20
fCH4 0,40 ± 0,20 Som Sherwood20
CO2 2400 ± 700 Som Sherwood20
β 0,0 ± 0,5 Som Sherwood20
fCO2nonLog Utelatt (1,00) 1,117
Klimasensitivitet (S) 2,38 1,99

ΔT refererer til forskjell i temperatur mellom PETM og tiden rett før/etter (Paleocen/Eocen). På det varmeste var det i PETM ca 13 grader varmere enn førindustrielt, mens tiden før/etter PETM altså var ca 5 grader kaldere enn det (8 grader varmere enn førindustrielt). Tallet 900 i formelen angir omtrentlig CO2-nivå før/etter PETM. fCH4 er igjen estimert klimapådriv fra metan i forhold til klimapådriv fra CO2.

Sherwood20 går ut fra at forholdet mellom klimapådriv og CO2-konsentrasjon er logaritmisk. Lewis refererer til Meinshausen et al 2020, som fant at ved høye CO2-konsentrasjoner (som i PETM), blir klimapådrivet høyere enn om forholdet hadde vært rent logaritmisk. Ved å bruke en formel fra Meinshausen, fant Lewis at klimapådrivet kan ha vært 11,7% høyere enn ved et rent logaritmisk forhold (som Sherwood20 antok). fCO2nonLog er derfor satt til 1,117. Se Supporting Information, 5.3.4.

Sherwood20 antar i utgangspunktet at klimasensitiviteten ESS i PETM er lik dagens klimasensitivitet ECS. Siden det ikke var store isdekte områder i PETM, gir det mening. Det er likevel stor usikkerhet rundt hvor forskjellig ECS var i PETM sammenlignet med i dag, så Sherwood20 har tatt høyde for usikkerheten ved hjelp av parameteren β. Sherwood20 har satt middelverdien for β til 0, så vi kan se bort fra den i beregningen av middelverdien for klimasensitiviteten i PETM. Lewis peker riktignok på en studie som har kommet frem til at klimasensitiviteten ECS kan ha vært vesentlig høyere i varmeperioden PETM enn i dag. Han mener derfor at middelverdien for β burde vært større enn 0. Likevel beholdt han Sherwood20s estimat. Se Supporting Information, 5.3.4.

16) Wikipedia forklarer hva klimapådriv (eller strålingspådriv) er:

Begrepet «pådriv» betyr en forandring som kan «tvinge» klimasystemet i retning av oppvarming eller avkjøling. Et eksempel på klimapådriv er økte atmosfæriske konsentrasjoner av klimagasser, som karbondioksid (CO2). Per definisjon er et pådriv eksternt [i] forhold til selve klimasystemet, mens tilbakekoblinger er interne.

Andre eksempler på klimapådriv er:

forandringer i solstrålingen på grunn av solens energiomsetning, vulkanutslipp og endringer av jordens bane rundt solen.

Videre:

Dette er endringer som påvirker jordens strålingsbalanse, og omtales derfor også som strålingspådriv.

Måleenheten er watt per kvadratmeter (W/m2). En dobling av atmosfærens CO2-innhold er et klimapådriv som medfører en redusert utstråling fra Jordens atmosfære på 3,93±0,30 W/m2 over hele atmosfæren.

De engelske begrepene for Klimapådriv og strålingspådriv er climate forcing og radiative forcing.

17) Her er det Bjørn Lomborg sa, sånn ca:

If [you] want to protect yourself against runaway global warming of some sorts, the only way is to focus on geoengineering, and […] we should not be doing this now, partly because global warming is just not nearly enough of a problem, and also because we need to investigate a lot more what could be the bad impacts of doing geoengineering.

But we know that white clouds reflect more sunlight and hence cool the planet slightly. One way of making white clouds is by having a little more sea salt over the oceans stirred up. Remember, most clouds over the oceans get produced by stirred-up sea salt — basically wave-action putting sea salt up in the lower atmosphere, and those very tiny salt crystals act as nuclei for the clouds to condense around. The more nuclei there are, the whiter the cloud becomes, and so what we could do is simply put out a lot of ships that would basically [stir] up a little bit of seawater — an entirely natural process — and build more white clouds.

Estimates show that the total cost of avoiding all global warming for the 21st century would be in the order of $10 billion. […] This is probably somewhere between 3 and 4 orders of magnitude cheaper — typically, we talk about $10 to $100 trillion of trying to fix global warming. This could fix it for one thousandth or one ten thousandth of that cost. So, surely we should be looking into it, if, for no other reason, because a billionaire at some point in the next couple of decades could just say, “Hey, I’m just going to do this for the world,” and conceivably actually do it. And then, of course, we’d like to know if there’s a really bad thing that would happen from doing that. But this is what could actually avoid any sort of catastrophic outcomes[.]

18) Det er ikke helt åpenbart hvordan grafen skal tolkes i og med at for eksempel RCP8.5 impliserer at atmosfærens energitilførsel i år 2100 uansett skal være 8,5 W/m2 høyere enn førindustrielt nivå. Min tolkning er at RCP8.5 med en lavere klimasensitivitet enn 3 grader medfører at den økte energitilførselen vil være lavere enn 8,5 W/m2 i år 2100. Det vil si at et gitt scenario definerer hvor store utslippene vil bli, og bare hvis klimasensitiviteten er ca 3 grader, ender vi opp med “riktig” energioverskudd i år 2100 (henholdsvis 8,5, 4,5 eller 2,6 W/m2 hvis vi ser på figuren).

How Much Has The Sun Influenced Changes In Earth’s Average Temperature?

[Norwegian version here, published by Climate Realists of Norway (Klimarealistene)]

According to a study from last year, it’s not clear whether it’s human activity or changes in solar activity that has contributed more to global warming since pre-industrial times.

With current knowledge, we simply cannot know, according to the study, which, among other things, summarizes knowledge and theories about the Sun’s influence on Earth’s temperature changes.

The study’s title is “How much has the Sun influenced Northern Hemisphere temperature trends? An ongoing debate”. The study should be referenced as Ronan Connolly et al 2021, but for simplicity, I’ll just call it Connolly 2021. There are 22 co-authors in addition to Ronan Connolly, but only two (Willie Soon and Michael Connolly) contributed to the first draft. The study is 60 pages plus references, but it’s a relatively easy read – and very interesting.

The study’s conclusion is in sharp contrast to the Intergovernmental Panel on Climate Change (IPCC), which believes it is extremely likely that human activity has caused most of the warming in the last 100 years or so. 1)

According to the IPCC, the Sun has had almost no impact on changes in global temperature since at least 1950. In other words, IPCC believes the Sun’s radiation has varied very little in this period.

Scientists agree that the Sun’s radiation varies relatively little (around 0.1%) throughout one solar cycle (which is about 11 years). However, they disagree about how much the Sun’s radiation varies on longer timescales.

There are also theories proposing that variations in the radiation the Earth receives from the Sun can be amplified by indirect effects, meaning that parameters other than the amount of solar energy that reaches the Earth can have an impact on Earth’s temperature.

A theory promoted by Henrik Svensmark, among others, is that increased solar activity leads to a stronger magnetic field around the Sun, which causes fewer cosmic rays to reach the Earth’s atmosphere. According to theory, this then leads to less low level clouds and thus a warmer climate.

That increased solar activity and a stronger magnetic field around the Sun cause fewer cosmic rays to reach Earth’s atmosphere is uncontroversial. However, IPCC does not believe that this in turn causes less low clouds and higher temperatures. 2)

As a curiosity: If Svensmark’s theory is correct, then the solar system’s position in the Milky Way may actually affect Earth’s climate. If our solar system is in a region with a lot of cosmic rays (for example in one of the galaxy’s spiral arms), we get more low clouds and a colder climate. If we are in an area with less cosmic rays (for example outside of the spiral arms), we get less low clouds, and it gets warmer than it would otherwise be. The position of the solar system in our galaxy may thus affect the timing of ice ages on Earth (Shaviv 2002).

Uncertainty in solar irradiance measurements

To be able to accurately measure solar irradiance (the amount of solar energy that reaches Earth), we need to measure from above the atmosphere. This has only been possible since 1978, when the first satellites were launched. Unfortunately, we don’t have continuous high-quality measurements of solar irradiance for the entire period from 1978 until today due to the 1986 Space Shuttle Challenger disaster, which led to the postponement of new satellite launches. The first generation of satellites with accurate sensors for measuring solar irradiance was retired in 1989. The next generation was launched in 1991, more than two years later.

In the two years from 1989 to 1991, satellites with less accurate sensors provided measurements. Two different interpretations of these data (ACRIM and PMOD) give qualitatively different results: If PMOD is correct, then solar irradiance has decreased slightly in the entire period from about 1980 until today (each consecutive solar minimum is lower than the one before). On the other hand, if ACRIM is correct, then solar irradiance increased from 1980 to 2000 before starting to decrease:

Source: Willson 2014, Fig. 7

The IPCC believes the PMOD interpretation is correct and concludes that solar irradiance has decreased somewhat since about 1950, while global temperatures have increased. They thus conclude that greenhouse gases, in particular CO2, have caused most or all of the warming since 1950.

Connolly 2021 emphasizes that there’s disagreement among scientists as to whether ACRIM or PMOD is more appropriate and refers to research on both sides of the controversy.

(That the IPCC favors PMOD is maybe not surprising, considering that Judith Lean, who actually helped create the PMOD interpretation, was lead author of chapter 2.7.1 on Solar Variability in IPCC’s 4th Assessment Report.) 3)

ACRIM vs PMOD

Let’s briefly take a closer look at ACRIM and PMOD. ACRIM1, ACRIM2 and ACRIM3 are the ACRIM project’s three satellites. They’re considered to be very accurate, but new satellites must be calibrated against older ones to get comparable values. Unadjusted solar irradiance values for ACRIM1 were approximately 6 W/m2 higher than for ACRIM2:

Source: Scafetta et al 2019, Fig. 2

Two satellites, Nimbus7/ERB and ERBS/ERBE, 4) measured solar irradiance during the ACRIM-gap. Unfortunately, and as we can see in the figure above, the two satellites show different trends during the ACRIM-gap: According to ERB, solar irradiance trended up, but according to ERBE it was trending down.

Scafetta et al. 2019 argue that ERB is more reliable than ERBE since ERBE probably experienced excess degradation of its sensors during its “first exposure to the high UV radiation levels characteristic of solar activity maxima” during the ACRIM-gap. The ERB satellite had experienced this 11 years earlier, and a further strong degradation was considered unlikely. 5)

PMOD, on the contrary, argue that ERBE is more reliable. They have adjusted data for both ACRIM1, ERB and ACRIM2 to conform them to predictions of proxy models for solar irradiance. Fröhlich & Lean 1998 justify the adjustments based, among other things, on sensor degradation, but both Richard Willson and Douglas Hoyt, principal investigators for ACRIM and ERB, respectively, believe that the PMOD adjustments are unwarranted. Willson has even said the adjustments are incompatible with the scientific method:

The TSI [Total Solar Irradiance] proxy models, such as Lean’s, are not competitive in accuracy or precision with even the worst satellite TSI observations. To ‘adjust’ satellite data to agree with such models is incompatible with the scientific method.

But we don’t need to conclude here that the ACRIM interpretation is better than PMOD or vice versa. IPCC has chosen to rely on PMOD, and we can at least take note that there’s no scientific consensus on that being correct.

NASA

(I didn’t include this section about NASA in the Norwegian version of the article, but I thought it could be interesting to some of you.)

NASA has a web site, climate.nasa.gov, which is dedicated to climate change. The assumption seems to be that the IPCC is correct. Under Causes (of climate change) they have included an image showing solar variation and average surface temperature, where we can see that Earth’s temperature has increased while (total) solar irradiance (TSI) has decreased since around 1980.

NASA doesn’t discuss the ACRIM vs PMOD controversy, but they assume that PMOD is correct. This isn’t explicitly stated anymore, but previously, the TSI source was listed as “SATIRE-T2 + PMOD”. According to the Wayback Machine, the text ” + PMOD” was removed around November 2021 (old version | new version). The removal of PMOD from the TSI source is the only change in the image.

NASAs old version of the image, which includes “PMOD” in TSI source.

So, if you see someone referring to this image from NASA, you now know that there are scientist who believe that the TSI/solar irradiance graph is inaccurate. Actually, there’s also some uncertainty in the temperature graph. I’ll come back to that shortly.

SATIRE-T2, by the way, is a proxy model for solar variation.

High or low variability?

In order to be able to estimate solar irradiance prior to the satellite era and potentially very far back in time, the datasets for satellite-measured solar irradiance are used as a starting point. By comparing the measured solar irradiance with how the number and size of sunspots have changed, scientists actually find that the two agree well. It’s thus possible to translate from sunspots to solar irradiance a few hundred years back in time. There are also other methods for estimating solar irradiance, allowing scientists to estimate solar irradiance considerably further back in time.

Unfortunately, we’ve seen that there’s disagreement about which dataset is more reliable when it comes to satellite-measured solar irradiance. If PMOD is correct, solar irradiance seems to have varied relatively little over time. However, if ACRIM is correct, solar irradiance seems to have varied significantly more.

Solar irradiance datasets that go further back than the satellite era can be divided into two categories, high variability datasets and low variability datasets. Datasets based on the ACRIM interpretation are mainly high variability, while those based on PMOD are low variability.

The IPCC writes in their latest report from 2021 that the change in solar irradiance was between 0.7 and 2.7 W/m2 in the period from about 1680 (Maunder Minimum, 1645-1715) to about 1975 (second half of the 20th century). According to Judith Curry, this range (0.7-2.7 W/m2) includes both high and low variability datasets.

Despite IPCC having included datasets with high variability in this range, they still recommend that the climate models (which produce estimates of future global temperatures, among other things) should use two datasets with low variability (Matthes et al. 2017).

The effect of this is that the climate models predict higher temperatures in the future than they would have done if datasets with high variability had also been used. This is because low variability datasets indicate that the Sun’s had little impact on temperature changes in recent decades, and thus that greenhouse gases have had a bigger impact. If greenhouse gases have had a big impact in the past, they’ll likely also have a big impact in the future. So if low variability is correct, we can expect a relatively large temperature increase in the future due to more greenhouse gases in the atmosphere. If, on the other hand, high variability is correct, the effect of greenhouse gases has been smaller in recent decades than the IPCC believes and will probably be smaller in the future as well.

Uncertainty in Earth’s surface temperature trend

More and more areas are being developed by humans. This means that weather stations previously located in rural areas suddenly may find themselves in proximity to urban or semi-urban areas. Areas developed by humans are generally warmer than remote areas. This means that weather stations that were previously remote can start to show higher temperatures – not because of global warming, but simply because they’re now closer to urban areas. This effect is called the urban heat island effect.

In order to – as accurately as possible – determine Earth’s temperature changes over time, scientists attempt to correct for the urban heat island effect. The correction process is called statistical homogenization.

Connolly 2021 has compared temperature data for remote weather stations with temperature data for all stations. By exclusively considering stations that are still remote, they found that Earth’s temperature increase has been substantially lower than assumed by the IPCC.

So it is possible that the homogenization process does not correct enough for higher temperatures around previously remote weather stations.

According to Connolly 2021, the standard estimate is that there’s been a temperature increase of 0.86℃ per century over land in the period 1841-2018 in the Northern Hemisphere. But when considering temperature data from remote stations only, Connolly 2021 finds that the temperature increase has been slightly less than half of the standard estimate, 0.41℃ per 100 years.

How much has the Sun influenced Earth’s surface temperature? 80 different answers

Connolly 2021 considered 16 plausible datasets for solar irradiance from the 19th century (or earlier) until today. Eight of the datasets have low variability, and eight have high variability. They then combined each of the solar irradiance datasets with the following five temperature datasets for the Northern Hemisphere:

Source: Connolly 2021, Fig. 13

Assuming, among other things, that there’s a linear relationship (which might not necessarily be the case) between solar irradiance and the associated temperature changes on Earth, this gives a total of 5 x 16 = 80 possible answers to how much the Sun has influenced temperature changes in the Northern Hemisphere since the 19th century.

The results vary from 0 to 100%. On the one extreme, the Sun may have had no role in temperature changes since the 19th century (this is what the IPCC believes). On the other extreme, the Sun may have been the cause of almost all changes in average temperature since the 19th century:

Source: Connolly 2021, Fig. 15
Source: Connolly 2021, Fig. 16

The blue bars in the above two figures show measured temperature increase per 100 years, the yellow bars show the Sun’s contribution to this and the gray bars show the contribution from humans (primarily greenhouse gases). 6)

Connolly 2021 concludes that we cannot know how much of the temperature changes has been caused by the Sun and how much has been caused by humans, and that the debate is still ongoing:

In the title of this paper, we asked “How much has the Sun influenced Northern Hemisphere temperature trends?” However, it should now be apparent that, despite the confidence with which many studies claim to have answered this question, it has not yet been satisfactorily answered. Given the many valid dissenting scientific opinions that remain on these issues, we argue that recent attempts to force an apparent scientific consensus (including the IPCC reports) on these scientific debates are premature and ultimately unhelpful for scientific progress. We hope that the analysis in this paper will encourage and stimulate further analysis and discussion. In the meantime, the debate is ongoing.

Edit: An article criticizing Connolley 2021 has been published on RealClimate. I haven’t looked closely at the criticism.


Footnotes:

1) IPCC states in their 5th assessment report from 2013 that: “It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.” In their most recent assessment report from 2021 (Summary for Policymakers), the message is that human activity accounts for all net warming since the late 19th century:

The likely range of total human-caused global surface temperature increase from 1850–1900 to 2010–2019 is 0.8°C to 1.3°C, with a best estimate of 1.07°C. It is likely that well-mixed [greenhouse gases] contributed a warming of 1.0°C to 2.0°C, other human drivers (principally aerosols) contributed a cooling of 0.0°C to 0.8°C, natural drivers changed global surface temperature by –0.1°C to +0.1°C, and internal variability changed it by –0.2°C to +0.2°C.

2) The IPCC only includes four paragraphs in their most recent assessment report on how (galactic) cosmic rays (chapter 7.3.4.5) may affect surface temperatures on Earth. They’re skeptical of Svensmark’s theory and say there’s a high probability that cosmic rays have negligible effect:

There is high confidence that [Galactic Cosmic Rays] contribute a negligible [Effective Radiative Forcing] over the period 1750 to 2019.

Svensmark, on the other hand, has published (with three co-authors) a new study after the IPCC’s working group I published their report. In the study, they look at how explosions on the Sun (Forbush Decrease events) have affected, among other things, low clouds via changes in cosmic rays. They found a reduction in low clouds that was strongest 5-7 days after the explosion. The corresponding increase in solar irradiance reached about 2 W/m2 for the strongest explosions.

3) Judith Lean also said in 2003 that one reason she wanted to look at the solar irradiance data was that she didn’t want people to take the ACRIM data as an excuse to do nothing about greenhouse gas emissions:

The fact that some people could use Willson’s [ACRIM dataset] results as an excuse to do nothing about greenhouse gas emissions is one reason we felt we needed to look at the data ourselves.

Since so much is riding on whether current climate change is natural or human-driven, it’s important that people hear that many in the scientific community don’t believe there is any significant long-term increase in solar output during the last 20 years.

But it should be mentioned that Judith Lean, too, recognizes that there’s uncertainty in how much solar irradiance has varied on longer timescales. However, she believes that, in any case, the resulting temperature change has been very small:

It remains uncertain whether there are long-term changes in solar irradiance on multidecadal time scales other than due to the varying amplitude of the 11-year cycle. If so the magnitude of the additional change is expected to be comparable to that observed during the solar activity cycle. Were the Sun’s activity to become anomalously low, declining during the next century to levels of the Maunder Minimum (from 1645 to 1715), the expected global surface temperature cooling is less than a few tenths °C.

A change this small from a Maunder Minimum level assumes there are no solar indirect effects.

4) According to Willson 2014, the ERB and ERBE satellites have less accurate sensors than the ACRIM satellites:

The traceability of ERB and ERBE results are degraded, relative to the TSI monitors, by: (1) the absence of dedicated solar pointing, (2) brief and infrequent data acquisition opportunities; (3) inability to calibrate sensor degradation and (4) infrequent electrical self-calibration.

5) From Scafetta et al. 2019:

ACRIM contends that the Nimbus7/ERB TSI upward trend during the ACRIM-gap period is more likely correct than the ERBS/ERBE TSI downward trend because it agrees with the solar activity-TSI variability paradigm established by the ACRIM1 experiment [references]. In fact, the downward ERBE trend is at variance with the paradigm and was caused by well-documented degradation of its sensors that were experiencing their first exposure to the high UV radiation levels characteristic of solar activity maxima. High UV exposure is known to cause excess degradation of TSI sensors in other satellite results [references]. Nimbus7/ERB experienced a similar excess degradation 11 years earlier during the TSI maximum of solar cycle 21 [references] and would have already reached its asymptotic degradation limit during the ACRIM Gap. Clearly, the Nimbus7/ERB TSI record is the most reliable choice to relate the ACRIM1 and ACRIM2 records across the ACRIM-Gap.

6) The sum of the human and solar contribution doesn’t always add up to exactly 100% of the observed trend. Connolly 2021 states that if the sum is greater than 100%, this means that either the solar or the human contribution (or both) is exaggerated. If the sum is less than 100%, there may have been other contributing factors.

Why Everyone Should Be A Climate Skeptic

The Hockey Stick Illusion, Climategate, The Media’s Deliberate Deception And Other Reasons For Optimism

Many people believe the media is exaggerating the dangers of human CO2 emissions. After reading this post, you’ll have a greater understanding of why that is.

As we’ll see, there’s been a lot of dishonesty in the climate debate. This applies not only to the media, but also to some high-profile researchers.

Hockey stick temperature graph

I’m going to tell you a bit about the temperature graph above. This may sound boring, but it’s actually a pretty incredible story, and you might be a little shocked. The graph’s underlying studies were used as references to say that 1998 was the warmest year in the last 1000 years and that the 1990s was the warmest decade. The graph has been called the hockey stick graph due to its shape. It was included six times in the Third Assessment Report of the IPCC. (IPCC stands for the Intergovernmental Panel on Climate Change — it’s a part of the UN). The graph was also included in their 4th Assessment Report, from 2007. The main author of the graph’s underlying studies is Michael Mann. As we’ll see, the studies are weak and have important flaws.

In this post, I’m also going to quote some private emails sent to and/or from scientists at the Climatic Research Unit (CRU). CRU is a component of the University of East Anglia in Norwich, UK. With about 15 employees, it’s not a big institution. It is, however, considered to be one of the leading institutions concerned with climate change.

The reason I’ve been able to quote private emails is that on November 19, 2009, about 1000 emails were leaked from the CRU. This episode has been called Climategate. Here you can read parts of the most sensational emails from the first round of Climategate, along with comments to give context. In 2011, another 5,000 emails leaked from CRU were released to the public. 1)

I’ll use the words alarmist and skeptic to describe two common opinions held by scientists and the public about CO2 and policy measures. Although the words aren’t necessarily the most neutral, they are widely used and easy to understand. I define them as follows:

  • Alarmist: Thinks CO2 will lead to dangerous global warming and wants political action. (Please feel free to suggest a better word than alarmist.)
  • Skeptic: Believes alarmists exaggerate the dangers of CO2 and doesn’t want political action.

As you may have guessed, I am a skeptic. I may thus tend to rely too much on skeptical arguments and too little on arguments from alarmists. If you find factual errors, please let me know, so I can correct them.

As a contributor to global warming, my main focus will be on CO2. This is the media’s main focus as well, and the IPCC also believes CO2 is the most important factor:

Carbon dioxide (CO2) represents about 80 to 90% of the total anthropogenic forcing in all RCP scenarios through the 21st century.

Shortcuts to the different parts of this article:

Consensus?

We often hear that 97% of climate scientists believe that climate change is man-made. But what does that really mean? Does this mean that almost all climate scientists believe humans are:
a) the main cause of climate change? Or
b) a contributor to climate change, but not necessarily the main cause?

A couple of years ago, I gave a short presentation on the 97% consensus to some of my colleagues at work. Before I started the presentation, I asked this question. Everyone in the audience (about 25 people, all engineers) answered alternative a — that the scientists believe humans are the main cause of climate change. Maybe you’ve been under the same impression yourself?

Although the media usually doesn’t say this explicitly, it’s easy to get that impression. But it’s wrong. It is true that the IPCC believes that humans are the main cause. From the Summary for Policymakers in their most recent assessment report from 2013/2014:

It is extremely likely [95-100% certain] that human influence has been the dominant cause of the observed warming since the mid-20th century.

But there seems to be no good studies confirming that the agreement on this among climate scientists is as high as 97%. Usually, a 2013 meta-study by John Cook and others is used as a reference for the claim. The study is entitled “Quantifying the consensus on anthropogenic global warming in the scientific literature”, but is usually referred to as just “Cook et al. (2013) ”.

Cook et al. (2013) assessed the abstracts of almost 12,000 research papers. These papers were all published between 1991 and 2011 and contained the text “global warming” or “global climate change”. The articles were categorized based on the authors’ expressed views on whether, and to what extent, humans contribute to global warming or climate change — for example, whether humans contribute, don’t contribute, or contribute at least 50% (i.e., more than all other factors combined). There was also a category for articles that didn’t explicitly say that humans contribute, but where this was implied.

Cook et al. (2013) found that about two thirds of the papers didn’t express an opinion about the extent of the human contribution to climate change. However, of the remaining one third of papers, 97% of abstracts expressed the view that humans do contribute to global warming or climate change. But only 1.6% of these remaining papers clearly stated that humans were the main cause. The fact that so few paper abstracts explicitly stated that humans are the main cause of global warming or climate change was not mentioned in Cook et al. (2013). But the raw data can be found under Supplementary data, so verifying this is relatively easy. 2)
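In fact, the check is a small scripting exercise. Here’s a minimal sketch in Python of how one could do it. The file name and column name are placeholders I’ve assumed, but the real supplementary data rates each abstract with an endorsement level from 1 to 7 (roughly: 1 means the abstract explicitly says humans cause more than 50% of the warming, 2 and 3 are explicit and implicit endorsement without quantification, 4 means no position, and 5 to 7 are rejections):

```python
# Minimal sketch for checking the numbers in Cook et al. (2013)'s raw data.
# The file name and column name below are placeholders (assumptions).
import csv
from collections import Counter

counts = Counter()
with open("cook2013_ratings.csv", newline="") as f:    # placeholder file name
    for row in csv.DictReader(f):
        counts[int(row["endorsement_level"])] += 1     # placeholder column name

total = sum(counts.values())
no_position = counts[4]
with_position = total - no_position
endorsing = counts[1] + counts[2] + counts[3]
main_cause = counts[1]

print(f"abstracts rated:                   {total}")
print(f"no position on the cause:          {no_position/total:.0%}")
print(f"endorsing a human contribution:    {endorsing/with_position:.1%} of those with a position")
print(f"explicitly saying humans are >50%: {main_cause/with_position:.1%} of those with a position")
```

Run against the real data file, the last of these numbers is the roughly 1.6% figure mentioned above.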

In another 2013 article on which Cook was a co-author (Bedford and Cook 2013), the results of Cook’s meta-study were misrepresented. In the article, we find the following sentence:

Of the 4,014 abstracts that expressed a position on the issue of human-induced climate change, Cook et al. (2013) found that over 97 % endorsed the view that the Earth is warming up and human emissions of greenhouse gases are the main cause.

Notice that it says main cause, but this does not agree with the raw data from Cook et al. (2013).

Although agreement among researchers and climate scientists that human activity is the main cause of climate change isn’t as high as 97%, a clear majority still believes this. I’ll return to this topic in two later footnotes (footnotes 40 and 41).

Here’s a video which, in addition to criticizing Cook et al. (2013), also criticizes other consensus studies:

Background

Earlier this year, I received a Facebook message from Martha Ball, Tim Ball’s wife (probably after she’d seen a post I wrote about Cook’s consensus study). Tim Ball is a well-known climate skeptic, and has been sued by Michael Mann for defamation. Mann is, as mentioned in the introduction, the main author of the hockey stick graph studies. Mann’s studies have received a lot of criticism, and as we’ll see, some of the criticism is highly justified.

The reason for the lawsuit against Tim Ball was that Ball had said:

Michael Mann at Penn State should be in the state pen [penitentiary], not Penn State [Pennsylvania State University].

After more than eight years in the Canadian courts, Tim Ball won the lawsuit in 2019 on the grounds that Mann did not seem interested in moving the case forward. Mann was ordered to pay legal fees — $100,000, according to Martha Ball. Mann, on the other hand, claims he was not ordered to pay Ball’s legal fees, but the court transcript seems to make it clear that he was:

[19] MR. SCHERR [counsel for Tim Ball]:  I would, of course, ask for costs for the defendant, given the dismissal of the action.

[20] MR. MCCONCHIE [counsel for Michael Mann]:  Costs follow the event. I have no quarrel with that.

[21] THE COURT:  All right. I agree. The costs will follow the event, so the defendant will have his costs of the application and also the costs of the action, since the action is dismissed.

In 2018, the program series Folkeopplysningen on NRK (a Norwegian TV channel) had a climate episode where they showed something similar to Mann’s hockey stick graph. It showed how the global average temperature had changed since 20,000 years before our era, when it may have been more than 4 degrees colder than today. The graph showed no rapid changes until the 20th century, when the graph suddenly shot straight up. I didn’t quite understand how they could know the temperature so accurately so far back in time, and ever since I watched the program I’d wanted to know more about the topic, but it never reached the top of my priority list. But now that I’d received a message from Martha Ball, I thought it might be a good opportunity to finally learn about it. So I asked if she could point me to some good learning resources about the hockey stick graph. She recommended googling McIntyre and McKitrick. And so I did.

I then found the book The Hockey Stick Illusion — Climategate and the Corruption of Science, written by Andrew Montford. The book is a detailed, exciting and shocking story about how the Canadian mining consultant Stephen McIntyre tries to reproduce and verify Michael Mann’s hockey stick graph based on Mann’s published studies.

I’ll soon give a summary of the hockey stick graph story. The summary is based on the following interview from 2019 (which I’ll call the SoundCloud interview), where McIntyre tells his version of the story:

I definitely recommend reading or hearing more detailed retellings than mine. According to Ross McKitrick, McIntyre’s co-author, The Hockey Stick Illusion is the best place to start if you want to learn about the hockey stick. Matt Ridley, author of The Rational Optimist, has also recommended the book, calling it one of the best science books in years. I can also recommend McIntyre and McKitrick’s presentation to a National Academy of Sciences expert panel in 2006 (also recommended by McKitrick).

There was also a very good article, Kyoto Protocol Based on Flawed Statistics, in the Dutch magazine Natuurwetenschap & Techniek in 2005. The article covers many of the same topics as I do, but in more detail. The author is Marcel Crok.

Short intro to tree rings and terminology

Before I start, I’ll briefly explain some words that are important for the story. This should make it easier to follow along.

We only have direct temperature measurements with thermometers from about 1850. To estimate temperatures on Earth before then, we have to rely on one or more proxies — more indirect temperature indicators. The most common proxy in Mann’s hockey stick studies (and the one most relevant to this story) is tree rings. Under certain conditions, and for certain types of trees, a tree will grow more in a hot year than in a cold year. Based on the tree ring widths, it’s thus possible to estimate the temperature at the place where the tree grew — within some uncertainty range.

How the tree rings of a single tree have changed over time (from year to year) is called a tree ring series.

Temperature isn’t the only factor affecting the growth of trees. Samples are thus gathered from several trees at each geographical location. Preferably, 10 or more trees should be sampled, and usually many more are. The combined tree ring results for all these trees are called a tree ring chronology. The temperature can be estimated more precisely from a chronology than from a single tree.

Mann had used a relatively advanced statistical method to reconstruct the Earth’s temperature (for the Northern Hemisphere) from the proxies. His studies were the first temperature reconstruction studies to use this method (called principal component analysis, PC analysis, or just PCA). Principal component analysis itself wasn’t new, but its application in the context of temperature reconstructions was.

If you have a lot of proxy data (for example many tree ring chronologies) within a small geographical area, you can merge the chronologies in the area using principal component analysis. This results in a more even geographical distribution of proxies across the world (or hemisphere). 3)

An important early step in the calculation of principal components is centering all the tree ring widths in a chronology around 0. This is achieved by subtracting the average tree ring width from each individual tree ring width in the chronology. As we’ll see, Mann had used a slightly different method…

The hockey stick illusion

(Footnotes provide supplementary information. Click on the footnote number to navigate to the footnote, click the number again to navigate back.)

For McIntyre, it all started when he received a brochure in the mail (1:13 in the SoundCloud interview):

Well, I got interested in climate through sheer bad luck, I think. In 2002, the Canadian government was promoting the Kyoto treaty, and as part of their promotion they sent a brochure to every household in Canada, announcing that 1998 was the warmest year in a thousand years, and the 1990s was the warmest decade, and I wondered, in the most casual possible way, how they knew that.

McIntyre found that the claim that the 1990s were the warmest decade of the last 1,000 years came from two studies, published in 1998 and 1999.

The 1998 study had attempted to reconstruct the average temperature in the Northern Hemisphere for the last 600 years. The study from 1999 extended the time interval by 400 years. It thus showed the reconstructed temperature for the last 1000 years (also for the Northern Hemisphere).

The studies are often just referred to as MBH98 4) and MBH99 5), from the first letters of the authors’ last names and the year of publication. The authors of both studies are Michael Mann, Raymond Bradley and Malcolm Hughes. Mann is the main author. The story, as I present it here, is mostly about MBH98.

McIntyre sent an email to Mann asking where he could find the data that MBH98 was based on. Mann had forgotten where the data was, but replied that his colleague (Scott Rutherford) would locate it. Rutherford said the data wasn’t all in one place, but he would get it together for McIntyre. A couple of weeks later, McIntyre was given access to the data on an FTP website. McIntyre thought it was strange that no one seemed to have asked for the data before. Had no one audited such an influential study?

But now that they’d taken the trouble to find the data for him, McIntyre felt obligated to investigate further. In the beginning, he had no particular goal in verifying/auditing Mann’s studies — he saw it more as a kind of big crossword puzzle to be solved.

There were some things in the data McIntyre had received that didn’t make sense, 6) and it was difficult to reproduce Mann’s results since the exact procedure was not described in Mann’s studies. McIntyre therefore sent a new email to Mann asking whether the data was correct. Mann would not answer the question and made it clear that he didn’t want to be contacted again: “Owing to numerous demands on my time, I will not be able to respond to further inquiries.”

McIntyre then published his first article with Ross McKitrick and reported on the problems he had found. The article is entitled Corrections to the Mann et. al. (1998) Proxy Data Base and Northern Hemispheric Average Temperature Series, but is usually referred to as MM03. Again, the letters are the authors’ last names and the digits are the year of publication. In MM03‘s abstract, McIntyre and McKitrick summarize the most important errors they found in Mann’s study:

The data set of proxies of past climate used in [MBH98] for the estimation of temperatures from 1400 to 1980 contains collation errors, unjustifiable truncation or extrapolation of source data, obsolete data, geographical location errors, incorrect calculation of principal components and other quality control defects.

Without these errors, the graph’s hockey stick shape disappeared:

The particular “hockey stick” shape […] is primarily an artefact of poor data handling, obsolete data and incorrect calculation of principal components.

Ross McKitrick, McIntyre’s co-author, is (and was) an economics professor who had modeled a CO2 tax in his doctoral dissertation, and McIntyre had seen him talking about the Kyoto Protocol on TV. They also both lived near Toronto, Canada. So when McIntyre contacted McKitrick, it was both to have someone verify his results and to have someone to publish together with. For the latter, McKitrick’s experience with publishing scientific articles made him a good match.

Mann’s response to McIntyre and McKitrick’s first article was that they had used the wrong data set. According to Mann, the correct data was available on the FTP website McIntyre had been given access to. But it turned out the data was on another FTP website, which McIntyre had not had access to; he was now given access. Mann also said that McIntyre and McKitrick should have contacted him when they found problems with the study (which McIntyre had in fact done).

Mann further explained that Rutherford had made some mistakes when he put the data together. 7) And it is probably true that McIntyre had received the wrong data. From The Hockey Stick Illusion:

It looked very much as if the version of pcproxy.txt that Rutherford had sent [to McIntyre] had been originally prepared for Rutherford’s own paper. In preparing these figures, he seemed to have introduced errors into the database – the same errors that had alerted McIntyre to the possibility that there were serious problems with MBH98.

So some of the errors McIntyre found in the beginning may have been introduced by Rutherford (who was Mann’s assistant). Thus, MBH98 probably did not contain all the errors McIntyre pointed out in MM03. To be fair, McIntyre could not have known this: it was Rutherford who had made the mistake, and Mann hadn’t checked when McIntyre asked if the data was correct.

McIntyre and McKitrick now realized that to be able to reproduce Mann’s results, they’d need access to Mann’s computer code. McIntyre wrote another e-mail to Mann, requesting to see the code. Again, Mann was uncooperative:

To reiterate one last time, the original data that you requested before and now request again are all on the indicated FTP site, in the indicated directories, and have been there since at least 2002. I therefore trust you should have no problem acquiring the data you now seek.

In his reply, Mann once again explained that he wasn’t interested in responding to further emails from McIntyre.

After this, many new files and directories appeared on the new FTP site. McIntyre had previously clicked around so much on this site that Mann had practically accused McIntyre of breaking into the server 8). So it was clear that the new files and directories hadn’t been available before – at least there had been no easy way to find them. McIntyre’s theory was that a robots.txt file had prevented these files and directories from being indexed by search engines. And since there were no links to them, there was no way to find them unless one knew the exact web address.

Among the files that appeared was a file containing the computer code to calculate the principal components. It appeared Mann had used a slightly unconventional way of calculating principal components. This was in spite of the fact that MBH98 stated that “conventional principal components analysis” had been used. Instead of using a suitable programming language (such as R) or an existing code library for the calculation, Mann had written the algorithm himself (in Fortran).

In standard principal component analysis, data is centered before doing further calculations. Centering is done by subtracting the average of the whole series from each value. The average of the resulting series is thus zero. This can be illustrated as follows:

Unrealistic example of two tree ring series. Left: Original data. Right: Both series are centered, that is, moved down so they’re on average as much above 0 as below 0. The image is composed of two screenshots from The Hockey Stick Illusion. (The x axis is time, the y axis is tree ring width. The example is unrealistic because tree ring widths vary much more from year to year than this.)

But Mann had subtracted a different average. Instead of subtracting the average for the whole period, he had subtracted the average for the 20th century. This made it much more likely that the resulting temperature graph would get a sharp uptick in the 20th century: 9)

Unrealistic example of two tree ring series. One has an uptick in the 19th century. The other has an uptick in the 20th century. Left: Original data. Middle: Correctly centered. Right: Incorrectly centered with Mann’s algorithm. When the series are centered correctly, the flat parts of the curves are at the same level (close to zero). When centered with Mann’s algorithm, the flat part of the curve is too far down for curves with a 20th century uptick. Curves that deviate a lot from 0 (either above or below) are given greater weight in the final temperature reconstruction. Mann’s erroneous algorithm thus has a tendency to create temperature reconstructions with a hockey stick shape. This even happens with random data (random walk/red noise). The picture is composed of three screenshots from The Hockey Stick Illusion.
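To make the effect of this centering choice concrete, here’s a minimal sketch in Python. It is not Mann’s code, and the two series are just synthetic red noise, but it shows the mechanism: under short centering, a series with a 20th century uptick ends up much farther from zero than a flat series, and series far from zero get more weight in the leading principal component.

```python
# Minimal sketch (not Mann's code) of why centering on the 20th century alone
# inflates the weight of series with a modern uptick. The series are synthetic.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1400, 1981)
modern = years >= 1902                      # roughly "the 20th century" part

def red_noise(n, rho=0.9):
    """Simple AR(1) 'red noise' series."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.normal()
    return x

flat = red_noise(len(years))                         # no particular modern trend
uptick = red_noise(len(years))
uptick[modern] += np.linspace(0, 5, modern.sum())    # add a 20th century uptick

def full_centering(x):
    return x - x.mean()                     # standard PCA: subtract the whole-series mean

def short_centering(x):
    return x - x[modern].mean()             # MBH98-style: subtract only the modern mean

for name, series in [("flat series", flat), ("uptick series", uptick)]:
    ss_full = np.sum(full_centering(series) ** 2)
    ss_short = np.sum(short_centering(series) ** 2)
    print(f"{name:>13}: sum of squares with full centering {ss_full:8.0f}, "
          f"with short centering {ss_short:8.0f}")

# With short centering, the flat pre-1900 part of the uptick series sits far
# below zero, so that series dominates the variance the PCA tries to explain.
```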

As a result, the trees in an area of California with a 20th century uptick were given almost 400 times greater weight than the trees in another area of the United States without a similar uptick in the 20th century. In fact, removing the trees from this one area (and possibly one more 10)) from Mann’s data resulted in a temperature reconstruction where the 15th century had higher temperatures than those at the end of the 20th century — even when using Mann’s erroneous algorithm! 11) And then one could no longer claim that 1998 was the warmest year in a thousand years.

Dashed line: The hockey stick graph from MBH98. Solid line: How the graph would have looked with correct principal component analysis and corrected data (including removal of bristlecone pines and Gaspé cedar trees). Screenshot from Ross McKitrick’s article What is the ‘Hockey Stick’ Debate About? from 2005. McIntyre and McKitrick don’t argue that their graph gives a correct picture of the temperature over the last 600 years, only that Mann’s hockey stick graph is not correct based on the proxies MBH98 chose to use. There’s also a large uncertainty in the reconstructed temperature before 1750.

The wider tree rings of these trees in the 20th century were not only related to temperature. Mann acknowledged this as well in the second hockey stick study (MBH99):

A number of the highest elevation chronologies in the western U.S. do appear, however, to have exhibited long-term growth increases that are more dramatic than can be explained by instrumental temperature trends in these regions.

Donald Graybill had taken the tree ring samples. He and Sherwood Idso published an article (in 1993) in which they hypothesized that the large increase in tree ring widths for these trees could be a direct effect of the increased CO2 content in the atmosphere (CO2 fertilization). 12)

To determine how reliable the temperature reconstruction in MBH98 was, Mann had calculated verification statistics 13), and these were included in MBH98. The verification statistics they had used were RE 14) and R2 15), but R2 was only shown for the period after 1750, which McIntyre thought was strange.

R2 usually lies between 0 and 1, and the higher, the better. A value of 1 means the reconstruction tracks the verification temperatures perfectly, while a value near 0 means there is essentially no relationship between them. (Depending on how it’s defined, R2 can also be negative, which indicates performance worse than simply using the average.) In the SoundCloud interview, McIntyre says that 0.4 or 0.5 would be good. But it turned out that for periods prior to 1750, R2 ranged from 0.00 to 0.02 in Mann’s study. This means one can have very little confidence in the study’s results for periods before 1750. It wasn’t strange, then, that Mann didn’t wish to publish R2 for the earlier time periods, although, of course, he should have done so.
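If you haven’t seen the statistic before, here’s a small illustrative calculation (the numbers are made up, not MBH98 data). In this verification setting, R2 is simply the squared correlation between the reconstructed and the measured temperatures over the verification period:

```python
# Illustrative only: verification R2 as the squared correlation between a
# reconstruction and instrumental temperatures. The numbers are made up.
import numpy as np

instrumental = np.array([-0.31, -0.25, -0.30, -0.18, -0.22, -0.15, -0.09, -0.12])
reconstruction = np.array([-0.20, -0.28, -0.10, -0.22, -0.15, -0.25, -0.18, -0.11])

r = np.corrcoef(reconstruction, instrumental)[0, 1]
print(f"verification r = {r:.2f}, R2 = {r**2:.2f}")

# An R2 near 0, like the 0.00-0.02 reported for MBH98's early periods, means
# the reconstruction explains essentially none of the variation in the
# verification data.
```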

In 2005, McIntyre and McKitrick published a thorough critique of MBH98 in Energy & Environment (MM05). In it, they wrote that Mann probably had calculated R2 for the earlier time periods as well. 16) If so, these verification statistics should have been published in the study. And since they weren’t, the missing R2 values for the earlier time periods should at least have been caught by the peer review process. Unfortunately, this didn’t happen.

The article in Energy & Environment gave McIntyre a front-page article in The Wall Street Journal 17). This, in turn, led to the House Energy and Commerce Committee asking Mann if he had calculated R2 and what the result was. Mann failed to answer whether he had calculated R2, but said that R2 wasn’t a good verification statistic and that he hadn’t used it. The committee also asked Mann to publish the source code for the study, which he did. The source code showed that Mann had in fact calculated R2 for the earlier time periods, as McIntyre had assumed.

So it was quite obvious that Mann knew that his study wasn’t particularly good. That the study wasn’t good is further supported by one of the leaked Climategate emails. (Quotes with blue text are from the Climategate emails.) Tim Osborn, the current CRU director, had asked Mann for some intermediate results (residuals) that could be used to calculate R2 without going through all previous steps in the calculation. Mann sent this to Osborn, but warned that it would be unfortunate if it came out. Mann wrote:

p.s. I know I probably don’t need to mention this, but just to insure [absolute clarity] on this, I’m providing these for your own personal use, since you’re a trusted colleague. So please don’t pass this along to others without checking w/ me first. This is the sort of “dirty laundry” one doesn’t want to fall into the hands of those who might potentially try to distort things…

Tom Wigley, former CRU director, also knew Mann’s study was poor. In an email to Phil Jones, Wigley wrote:

I have just read the [McIntyre and McKitrick] stuff critcizing MBH. A lot of it seems valid to me. At the very least MBH is a very sloppy piece of work — an opinion I have held for some time.

In a comment on his own blog (ClimateAudit), McIntyre has written two examples of what Mann should have written as a disclaimer to his study:

Readers should be aware that the reconstruction contained herein badly fails many standard cross-validation tests, including the R2, CE, sign test and product mean test, some of which are 0. Accordingly, the apparent skill of the RE statistic may be spurious and the reconstruction herein may bear no more relationship to actual temperature than random numbers. Readers should also be aware that the confidence intervals associated with this reconstruction may be meaningless and that the true confidence interval may only be natural variability.

Readers should be aware that the reconstruction contained herein cannot be replicated without the use of bristlecone pines. Some specialists attribute 20th century bristlecone pine growth to nonclimatic factors such as carbon dioxide or other fertilization or to nontemperature climate factors or to a nonlinear response to temperature. If any of these factors prove to be correct, then all portions of the reconstruction prior to [the year] 1625 will be invalidated.


I’d also like to mention that McIntyre and McKitrick are not the only ones who’ve criticized MBH98/99. One other criticism relates to the extensive use of tree rings in the studies. Tree rings are better suited for finding year-to-year temperature variations than for finding temperature trends over longer periods of time:

[W]hile tree rings are excellent at capturing short frequency variability, they are not very good at capturing long-term variability.

– James Speer, Fundamentals of Tree-Ring Research (pdf)

Further recommendations

In my summary of the hockey stick story, I’ve focused on some of the most important flaws in Mann’s studies and the fact that Mann has been uncooperative. But this is only one part of the story. The story is also about how the IPCC broke its own rules in order to use the hockey stick graph in its fourth assessment report as well. 18) Andrew Montford, author of The Hockey Stick Illusion, wrote a blog post about this before he started writing the book.

The video below is also about the hockey stick graph. It explains how the hockey stick graph ended up in IPCC’s third assessment report in 2001, and how data was removed from a tree ring study that showed decreasing tree ring widths in the 20th century. Tree ring widths are, as we remember, used as a proxy for temperature. (Decreasing tree ring widths in a time of rising temperatures would cast doubt on the validity of tree rings as a proxy for temperature.) The story is backed up by Climategate e-mails, including Phil Jones’ famous 1999 email, where he writes:

I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) [and] from 1961 for Keith’s to hide the decline. 19)

Another video that may be of interest is the video below. It’s about the leaked Climategate emails and the subsequent British Climategate inquiries. The presenter is Andrew Montford, author of The Hockey Stick Illusion. (Despite the video’s German title, the talk is in English). Montford has also written a report on the inquiries.

The last video I’ll currently recommend is a talk by McIntyre himself. In the video, McIntyre discusses some problems with the tree ring chronologies used in Mann’s studies (and other similar studies): 20)

Peer review

Peer review is used by scientific journals to determine whether to publish a paper or study submitted to the journal. The study will be forwarded to a small number of experts in the study’s subject area, who’ll assess its quality. The experts usually aren’t paid extra for peer reviewing studies, and they typically spend around one working day on the review.

On October 27, 2009, Phil Jones sent an e-mail in which he wrote:

The two Canadians she refers to [McIntyre and McKitrick] have never developed a tree-ring chronology in their lives and McIntyre has stated several times on his blog site that he has no aim to write up his results for publication in the peer-review literature.
I’m sure you will be of the same opinion as me that science should be undertaken through the peer-review literature as it has been for over 300 years. The peer-review system is the safeguard science has developed to stop bad science being published.

It’s somewhat ironic that Phil Jones says McIntyre should publish in peer-reviewed journals, since Jones and Mann did everything they could to actually prevent McIntyre, McKitrick and other skeptics from publishing in peer-reviewed journals.

In 2003, a study critical of Mann’s temperature reconstruction was published in Climate Research. The authors were Willie Soon and Sallie Baliunas. Following the study’s publication, Mann wrote:

The Soon & Baliunas paper couldn’t have cleared a ‘legitimate’ peer review process anywhere. That leaves only one possibility–that the peer-review process at Climate Research has been hijacked by a few skeptics on the editorial board.
[…]
There have been several papers by Pat Michaels, as well as the Soon & Baliunas paper, that couldn’t get published in a reputable journal.
This was the danger of always criticising the skeptics for not publishing in the “peer-reviewed literature”. Obviously, they found a solution to that–take over a journal! [Emphasis added]

Mann and other climate scientists with connections to CRU had apparently been able to control a lot of what was published in peer-reviewed climate journals. They’d managed to keep most skeptic papers out of the peer-reviewed literature. It was thus easy to meet criticism — they could simply ask why the skeptics didn’t publish in the peer-reviewed literature. As an example of this, see Mann’s response to a journalist who had referred to a skeptic article:

Professional journalists I am used to dealing with do not rely upon unpeer-reviewed claims off internet sites for their sources of information. They rely instead on peer-reviewed scientific research, and mainstream, rather than fringe, scientific opinion.

How did they achieve this level of gatekeeping control? One explanation is that there was almost always at least one person from this group of climate scientists who was asked to peer review new climate studies in their field. When they recommended against publishing a study, their advice was usually taken.

It can also, to some extent, be explained in a simpler way: As Judith Curry writes, researchers — like most other people — tend to more easily trust conclusions that agree with what they themselves believe to be true. Review comments on papers with conclusions contrary to the reviewer’s own will thus naturally be more critical. If a large majority of scientists in a field have similar opinions, it then becomes more difficult for other opinions to get through the peer-review process.

Some researchers who failed to get their study published in climate journals simply gave up.

The climate scientists at CRU also wanted to discredit Soon and Baliunas. Tom Wigley wrote:

Might be interesting to see how frequently Soon and Baliunas, individually, are cited (as astronomers).
Are they any good in their own fields? Perhaps we could start referring to them as astrologers (excusable as … ‘oops, just a typo’).

Following is an example of how scientists on the alarmist side expected someone on their side to be asked to peer-review new studies: The day before McIntyre and McKitrick’s first paper (MM03) was published in Energy & Environment, and before Mann had seen the paper, Mann wrote in an email:

My suggested response is:
1) to dismiss this as [a] stunt, appearing in a so-called “journal” which is already known to have defied standard practices of peer-review. It is clear, for example, that nobody we know has been asked to “review” this so-called paper [Emphasis added]
2) to point out the claim is nonsense since the same basic result has been obtained by numerous other researchers, using different data, elementary compositing techniques, etc. Who knows what sleight of hand the authors of this thing have pulled. Of course, the usual suspects are going to try to peddle this crap. The important thing is to deny that this has any intellectual credibility whatsoever and, if contacted by any media, to dismiss this for the stunt that it is..

Here we also see Mann having a clear opinion about the study before actually reading it… Mann wrote the above after receiving an e-mail (from an unknown sender) stating that MM03 would soon be published. The unknown person wrote:

Personally, I’d offer that this was known by most people who understand Mann’s methodology: it can be quite sensitive to the input data in the early centuries. Anyway, there’s going to be a lot of noise on this one, and knowing Mann’s very thin skin I am afraid he will react strongly, unless he has learned (as I hope he has) from the past….”

If you’d like to read more about how Mann, Jones and others discussed peer review internally, check out this online book on Climategate and search for “email 1047388489”. (The emails quoted in the book are commented on by the author.) Following Climategate, The Guardian also wrote an article about how climate scientists, including at the CRU, tried to prevent skeptical papers from being published.

When Phil Jones writes “The peer-review system is the safeguard science has developed to stop bad science being published”, it’s ironic also because the peer review process failed to prevent Mann’s flawed hockey stick studies from being published.

As noted, Phil Jones wrote the email I quoted from above (first quote under the section on peer review) in late 2009 — just a few weeks before Climategate. The IPCC’s most recent assessment report was published in 2013 and 2014, not that many years later. As Phil Jones pointed out in an email, IPCC only considers studies that are peer-reviewed:

McIntyre has no interest in publishing his results in the peer-review literature. IPCC won’t be able to assess any of it unless he does.

So it’s perhaps not surprising that the IPCC concludes that most of the warming since 1950 is human-caused. And when the IPCC says it’s at least 95% certain about this, that may overstate how certain it should be, given the artificially low number of skeptical papers in the peer-reviewed literature. 21)

In general, peer review is no guarantee that a study is correct. And this is something scientists are aware of, wrote Charles Jennings, former editor of Nature, on Nature’s peer review blog in 2006:

[S]cientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.

As mentioned, the amount of time normally spent on peer review is limited. When Stephen McIntyre was asked to review an article for Climatic Change, he was prepared to do so diligently and asked the editor, Stephen Schneider, for access to the study’s underlying data. Schneider then said that in the 28 years he’d been editor of Climatic Change, no one had previously requested access to data. McIntyre told this story in the SoundCloud interview I embedded earlier (7:39), where he went on to say:

[Schneider] said, “If we required reviewers to look at data, I’d never be able to get anyone to review articles”. I think the important point is for people unfamiliar with the academic journal process to understand the level of due diligence, and what is a relevant level of due diligence. I’ve come to the opinion that it’s unreasonable to expect unpaid reviewers to […] do a full audit of an article — it’s too much work, and people aren’t going to do that.

Since peer review won’t uncover all the flaws of a study, it’s important to also have other processes that can help uncover flaws. And generally, such processes do exist. For example, the results of a study may not agree with a scientist’s own assumptions, and they may thus suspect errors. In that case, they may want to look more closely at the study and write a separate study or article criticizing the original one. The new paper then also needs to get through the peer review process to be published in a scientific journal.

In the field of climate science, there were many scientists interested in writing critical papers. But since the alarmists managed to keep so many critical papers out of scientific journals, normal quality assurance was lost. 22) Additionally, it was difficult for skeptics to get access to the data behind the alarmists’ studies. 23) These things help explain how Mann’s study, despite its many flaws, ended up in an IPCC assessment report. And not just as one study among many, but as the most prominent study!

Who can we trust?

In the climate debate, we have two sides with wildly opposing views. And it’s difficult for someone outside the field to know who’s right and who’s wrong. Either side probably has some good arguments. But when I learn that someone has deliberately deceived the public (for example, Michael Mann, by withholding important information, and John Cook 24)), at least I know I can’t trust them. Even if a lot of what they say is true, it’ll be difficult to tell truth from lie, so, to me, their words carry less weight.

From the Climategate emails, it’s clear that there are others on the alarmist side, too, whom one shouldn’t trust completely. (Of course, you shouldn’t trust everyone on the skeptic side either.) As an example, Tom Wigley, former CRU director, wrote in an email to Phil Jones, then director of CRU:

Here are some speculations on correcting [sea temperatures] to partly explain the 1940s warming blip.
If you look at the attached plot you will see that the land also shows the 1940s blip (as I’m sure you know). So, if we could reduce the ocean blip by, say, 0.15 [degrees Celsius], then this would be significant for the global mean — but we’d still have to explain the land blip.
I’ve chosen 0.15 here deliberately. This still leaves an ocean blip, and I think one needs to have some form of ocean blip to explain the land blip[.]
[…]
It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.

Here, the two CRU directors weren’t quite happy with the temperature data and discussed how they could change it.

Another example: In 1995, after Science had published a borehole study he’d written, David Deming received an e-mail from a person in the climate science community. Deming doesn’t say who the person is, but writes that it is a major person. Deming writes:

They thought I was one of them, someone who would pervert science in the service of social and political causes. So one of them let his guard down. A major person working in the area of climate change and global warming sent me an astonishing email that said “We have to get rid of the Medieval Warm Period.” [Emphasis added]

It’s been speculated that the person who sent the email was Jonathan Overpeck. Overpeck was Coordinating Lead Author for the chapter on paleoclimate (climate of the past) in IPCC’s 4th assessment report from 2007. In an email to Phil and Mike (probably Phil Jones and Michael Mann), Overpeck wrote that he couldn’t remember having sent such an e-mail to Deming. But he also didn’t entirely rule out the possibility that he had written something similar.

Deming mentioned the Medieval Warm Period (MWP), also called the Medieval Climate Anomaly. This refers to a period from about the year 1000 to 1300 when temperatures were assumed to have been higher than at the end of the 20th century. I don’t know whether temperatures really were higher back then, but for some it was clearly important to be able to say that Medieval temperatures were not higher than today’s.

As you may recall, the errors in MBH98 led to their temperature reconstruction showing a significantly lower temperature around the year 1400 than what their data actually indicated. 25) And the subsequent study, MBH99, which reconstructed temperatures back to the year 1000, showed no Medieval Warm Period for the Northern Hemisphere. This may have been exactly the result Mann wanted.

These examples seem to show that several scientists with connections to CRU wanted to convince politicians and others that human-caused global warming is happening and is dangerous. That the data didn’t always support their views was less important. The scientists may have had doubts, but those were rarely communicated to the public.

Are RealClimate and SkepticalScience unbiased websites?

Michael Mann is one of several people who started the website realclimate.org. In 2006, Mann sent an e-mail explaining that they wouldn’t approve just any comment — the skeptics shouldn’t be allowed to use the RealClimate comment section as a “megaphone”:

Anyway, I wanted you guys to know that you’re free to use [RealClimate] in any way you think would be helpful. Gavin [Schmidt] and I are going to be careful about what comments we screen through, and we’ll be very careful to answer any questions that come up to any extent we can. On the other hand, you might want to visit the thread and post replies yourself. We can hold comments up in the queue and contact you about whether or not you think they should be screened through or not, and if so, any comments you’d like us to include.

You’re also welcome to do a followup guest post, etc. think of [RealClimate] as a resource that is at your disposal to combat any disinformation put forward by the McIntyres of the world. Just let us know. We’ll use our best discretion to make sure the skeptics [don’t get] to use the [RealClimate] comments as a megaphone…

And in a 2009 e-mail:

Meanwhile, I suspect you’ve both seen the latest attack against [Briffa’s] Yamal work by McIntyre. 26) Gavin [Schmidt] and I (having consulted also w/ Malcolm [Hughes]) are wondering what to make of this, and what sort of response—if any—is necessary and appropriate. So far, we’ve simply deleted all of the attempts by McIntyre and his minions to draw attention to this at RealClimate.

John Cook (from the 97% consensus study) started the website skepticalscience.com. SkepticalScience is a website that intends to refute arguments from skeptics. 27)

SkepticalScience seems to be a popular website with very good search engine rankings (at least on Google). A few years ago, when I was looking into the consensus argument, SkepticalScience was one of the first sites I came across. According to them, several studies showed near 100% agreement that climate change is man-made. I was almost convinced, but have since learned that the studies don’t show what SkepticalScience wanted to convince me of.

Although a lot of what SkepticalScience and RealClimate write is correct, it’s also clear that they aren’t unbiased websites. But since their message agrees well with what we hear in the media, it’s only natural that many people think they’re not biased. It’s easier to be skeptical of information that contradicts what we regularly hear. And being skeptical is a good thing. It’s also important to be aware that SkepticalScience and RealClimate want their readers to be convinced that global warming is man-made and dangerous, just as much as most skeptics want to convince us otherwise.

So who can we trust? We can probably trust many people, but one person I trust is Stephen McIntyre. (That doesn’t mean everything he says is true.) Although he’s criticized Mann’s studies and other tree ring studies quite strongly, he hasn’t “chosen a side” in the climate debate. He consistently emphasizes that nothing he has found is proof that global warming isn’t taking place, and that determining the value of the climate sensitivity is the important scientific question. (I’ll get back to the topic of climate sensitivity shortly.) In the introduction to a talk he gave on the role of Climategate in the hockey stick graph story, he said (3:59):

[K]eep in mind that nothing that I say tonight proves or disproves global warming. Nor does climate science as a whole stand or fall on proxy reconstructions. If we do nothing about tree rings, we would still be obliged to assess the impact of doubled CO2. As a final preamble, there’s far too much angriness in my opinion on both sides of the debate. People are far too quick to yell “Fraud” at the other side. And I think such language is both self-indulgent and counterproductive. I don’t apply these labels myself, I don’t permit them at ClimateAudit, and don’t believe they serve any purpose. That doesn’t mean you can’t criticize authors — I do so all the time and will do so tonight, but any point you make should be able to be made on the facts rather than the adjectives.

And in his ending remarks (36:31):

I started my comments with caveats, and I’ll close with some more. The critical scientific issue, as it has been for the past 30 years, is climate sensitivity, and whether cloud and water cycle feedbacks are strongly positive or weakly negative or somewhere in between. This is the territory of Lindzen, Spencer, Kininmonth and Paltridge at this conference, and I urge you to listen to what they have to say. But also keep an open mind, because many serious scientists don’t agree with them and stand behind standard estimates of climate sensitivity of doubled CO2 in perfectly good faith.
[…]
If I were a politician, regardless of what I felt personally, I would also take scientific guidance from official institutions rather than what I might think personally, either as an occasional contributor to academic journals or as a blogger. Although, knowing what I know now, I would try as hard as I possibly could, to improve the performance and accountability of these institutions.

Here’s the full talk:

https://youtube.com/watch?v=SqzcA7SsqSA%3Fstart%3D105

Can we trust science?

We may think of scientists as selfless truth-seekers, but according to Terence Kealey, author of The Economic Laws of Scientific Research, it’s a myth that scientists will try to falsify their own theories. They want their theories published and don’t mind taking shortcuts in the use of statistical methods:

One problem is that scientists are much less scientific than is popularly supposed. John Ioannidis […] has shown […] that the poor application of statistics allows most published research findings to indeed be false[.]

There is a perverse reason that scientists use poor statistics: career progression. In a paper entitled “The Natural Selection of Bad Science,” Paul Smaldino and Richard McElreath of the University of California, Merced, and the Max Planck Institute, Leipzig, found that scientists select “methods of analysis … to further publication rather than discovery.” Smaldino and McElreath report how entire scientific disciplines — despite isolated protests from whistleblowers — have, for more than half a century, selected statistical methods precisely because they will yield publishable rather than true results. The popular view is that scientists are falsifiers, but in practice they are generally verifiers, and they will use statistics to extract data that support their hypotheses.

In light of this, the hockey stick scandal makes a little more sense.

So the answer to whether we can trust science is both yes and no. Individual studies aren’t necessarily correct or good even if they’re published in peer-reviewed journals. For a given scientific field to be trustworthy, a diversity of opinion within the field is advantageous — so that there’s always someone with an incentive to try to find flaws in new studies. One such incentive is simply disagreeing with a study’s conclusions. There are many scientists with this incentive within climate science. Instead of being ignored, they should be made use of in the process of determining what is good science and what is bad.

Climate sensitivity

As I quoted Stephen McIntyre saying earlier, the most important scientific question when it comes to CO2 is what the climate sensitivity is. How sensitive is the global average temperature to changing CO2 levels?

There seems to be a general agreement (also among many skeptics) that an exponential rise in CO2 level will lead to a linear rise in temperature. This means that every doubling of the atmosphere’s CO2 concentration will cause a constant temperature rise. So if a doubling of CO2 from today’s 420 parts per million (ppm) to 840 ppm leads to a 2°C temperature rise, a doubling from 840 ppm to 1680 ppm will also cause a 2°C temperature rise. The climate sensitivity is then said to be 2 degrees.
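This logarithmic relationship is simple enough to write out as a couple of lines of code. The sketch below just restates the example above; it’s CO2-only and ignores other greenhouse gases and natural variability:

```python
# Warming from CO2 alone, assuming it's proportional to the number of CO2
# doublings: delta_T = S * log2(C_end / C_start), with S the climate sensitivity.
import math

def co2_warming(c_start_ppm, c_end_ppm, sensitivity_per_doubling):
    doublings = math.log2(c_end_ppm / c_start_ppm)
    return sensitivity_per_doubling * doublings

S = 2.0  # example sensitivity of 2 degrees per doubling, as in the text
print(co2_warming(420, 840, S))    # one doubling        -> 2.0 degrees
print(co2_warming(840, 1680, S))   # one more doubling   -> another 2.0 degrees
print(co2_warming(420, 560, S))    # a 33% increase      -> about 0.8 degrees
```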

However, climate sensitivity may also vary depending on Earth’s state and temperature. This is due to changes in feedback effects as the Earth warms or cools. In practice, the climate sensitivity probably won’t change much this century. (I’ll explain what feedback effects are shortly.)

When climate scientists talk about the climate sensitivity, they usually talk about the temperature rise caused by a doubling of CO2 from pre-industrial levels, so from 280 to 560 ppm.

Climate sensitivity comes in a few different flavors, including:

  • Equilibrium Climate Sensitivity (ECS), which is the temperature rise that a doubling of CO2 will lead to in the long term — that is, once a new equilibrium between CO2 and temperature is finally reached. The full temperature rise won’t be realized immediately. The biggest temperature changes will come early, but reaching the new equilibrium could take more than 1000 years.
  • Transient Climate Response (TCR), which is the temperature rise that a doubling of CO2 will have caused at the time when the CO2 level has doubled. The assumption is that the CO2 level increases by 1% per year; at that rate, one doubling takes about 70 years. (TCR will always be lower than ECS.)

Feedback effects can cause the temperature to rise more or less than it would without them. Higher temperatures lead to more water vapor in the atmosphere, which leads to a further rise in temperature, since water vapor is a strong greenhouse gas. This is an example of a positive feedback effect. Higher temperatures could also lead to more low-level clouds, which reflect more sunlight. This is an example of a negative feedback effect. The equilibrium climate sensitivity, ECS, would be about 1°C without feedback effects. 28)
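A common textbook way to express this (a zero-dimensional sketch, not a climate model) is to scale the roughly 1°C no-feedback response by a net feedback factor f, so that ECS = ECS_no_feedback / (1 - f). Positive f amplifies the warming, negative f dampens it:

```python
# Textbook feedback-factor sketch: ECS = ECS_no_feedback / (1 - f).
def ecs_with_feedback(f, ecs_no_feedback=1.0):
    if f >= 1.0:
        raise ValueError("f >= 1 would mean a runaway response")
    return ecs_no_feedback / (1.0 - f)

print(ecs_with_feedback(0.0))     # no net feedback        -> 1.0 degree
print(ecs_with_feedback(0.5))     # net positive feedback  -> 2.0 degrees
print(ecs_with_feedback(0.67))    # stronger positive      -> about 3.0 degrees
print(ecs_with_feedback(-0.5))    # net negative feedback  -> about 0.7 degrees
```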

If we’d like to determine how much the temperature will rise as a result of more CO2 in the atmosphere towards the year 2100, it’s more relevant to look at TCR than ECS. 29) However, a 1% annual increase in CO2 concentration (which is assumed in the definition of TCR) is probably unrealistic, even without new policy measures being introduced. Over the last 10 years, atmospheric CO2 has increased by about 0.6% per year on average. From 2000 to 2010, it increased by about 0.5% per year.
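The doubling times implied by these growth rates are just compound-growth arithmetic:

```python
# Years needed for CO2 to double at a constant annual growth rate.
import math

for annual_growth in (0.01, 0.006, 0.005):      # 1%, 0.6% and 0.5% per year
    years_to_double = math.log(2) / math.log(1 + annual_growth)
    print(f"{annual_growth:.1%} per year -> doubling in about {years_to_double:.0f} years")

# 1.0% per year -> about 70 years (the TCR assumption)
# 0.6% per year -> about 116 years
# 0.5% per year -> about 139 years
```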

There’s a lot of uncertainty about the exact value of the climate sensitivity. This is due to uncertainty in feedback effects. In their previous assessment report, the IPCC stated that TCR likely has a value of between 1.0 and 2.5 degrees. 1.0 degrees is relatively unproblematic. 2.5 degrees, on the other hand, can be more challenging. For ECS, the IPCC stated a value of between 1.5 and 4.5 degrees. This is identical to the so-called Charney range dating all the way back to 1979 (3 degrees ± 1.5 degrees).

Climate models are important in IPCC’s calculation of climate sensitivity. 30) Climate models are advanced computer simulations of the Earth’s climate. They’re based on physical laws, other knowledge about the Earth’s climate processes and expected emissions of greenhouse gases in various scenarios. Since we don’t have perfect knowledge about all climate processes, some approximations are also used. These are called parameterizations. The start time for the simulation can, for example, be the year 1850 and the end time could be the year 2100. Climate models are run on supercomputers, and one run can take several months. According to IPCC’s previous assessment report, climate models calculated that the climate sensitivity, ECS, was in the range of 2.1 to 4.7 degrees. 31) This was slightly higher than the IPCC’s final estimate (1.5 to 4.5 degrees).

https://youtube.com/watch?v=rN7YHsokRV4%3Fstart%3D28

But since 2012, as explained in the video above, a number of studies have been published in which climate sensitivity is calculated almost entirely from data and instrumental observations. These studies have all concluded that ECS is well below 3 degrees — most of them estimate the most probable ECS value to be in the range of 1.5 to 2 degrees.

Further in the video, we’re told that a study from 2018 is particularly important since it uses IPCC’s own data sets to calculate climate sensitivity (8:48):

This 2018 paper by Nicholas Lewis and Judith Curry, published in the American Meteorological Society’s Journal of Climate, is particularly important because it applies the energy balance method to the IPCC’s own datasets, while taking account of all known criticisms of that method. And it yields one of the lowest ECS estimates to date, 1.5 degrees, right at the bottom of the Charney range.

According to the study, the most likely TCR value is 1.20 degrees. (More accurately, it’s the median TCR value that’s 1.20 degrees; the median for ECS is 1.50 degrees.) The study’s main author, Nic Lewis, has written a guest post about the study on ClimateAudit, McIntyre’s blog.

In another study, Knutti et al. (2017), the authors made an overview of various climate sensitivity studies and categorized them based on whether they’re observation-based (historical), based on climate models (climatology), or based on past temperature changes (palaeo). Knutti et al. (2017) confirms that observation-based studies find a lower value for the climate sensitivity than studies that use other methods.

Knutti et al. do not agree, however, that one should put more weight on the observation-based studies. 32)

But Nic Lewis definitely thinks so. He and Marcel Crok have written a report, Oversensitive — How the IPCC Hid the Good News on Global Warming (long version here). In it, they argue that one should trust the observation-based estimates for climate sensitivity more. Their conclusion:

[W]e think that of the three main approaches for estimating ECS available today (instrumental observations, palaeoclimate observations, [and climate model] simulations), instrumental estimates – in particular those based on warming over an extended period – are superior by far.

According to Lewis and Crok, there is a great deal of uncertainty associated with estimates of climate sensitivity based on past temperature changes. And climate models won’t necessarily simulate future climate correctly even if they’re able to simulate past temperature changes well. Lewis and Crok write:

There is no knob for climate sensitivity as such in global climate models, but there are many adjustable parameters affecting the treatment of processes (such as those involving clouds) that [climate models] do not calculate from basic physics.

Climate sensitivities exhibited by models that produce realistic simulated climates, and changes in climatic variables over the instrumental period, are assumed to be representative of real-world climate sensitivity. However, there is no scientific basis for this assumption. An experienced team of climate modellers has written that many combinations of model parameters can produce good simulations of the current climate but substantially different climate sensitivities. [Forest et al. (2008)] They also say that a good match between [climate model] simulations and observed twentieth century changes in global temperature – a very common test, cited approvingly in the [IPCC assessment report 4] as proving model skill – actually proves little.

Models with a climate sensitivity of 3°C can roughly match the historical record for the global temperature increase in the twentieth century, but only by using aerosol forcing values that are larger than observations indicate is the case, by underestimating positive forcings, by putting too much heat into the oceans and/or by having strong non-linearities or climate state dependency. [Paragraphs added to improve readability]

In the report, Lewis and Crok also point out errors in some of the observation-based studies that found a high climate sensitivity. 33)

So it’s not entirely certain that the climate sensitivity is low, and most climate scientists would probably not agree that it is. But Lewis and Crok’s argument indicates there’s at least a very good chance of it.

The video below is a presentation given by Nic Lewis, where he talks about climate sensitivity:

Business-as-usual CO2 emissions

As we’ve seen, to be able to calculate how much temperatures will rise as a result of human CO2 emissions, we need to know what the climate sensitivity is. In addition, we need to know how much CO2 and other greenhouse gases we’re going to emit.

In 2007, the IPCC asked the climate research community to create greenhouse gas emission scenarios for the remainder of this century. The emission scenarios are named “RCP” (Representative Concentration Pathway), followed by a number that roughly corresponds to the scenario’s radiative forcing, in watts per square meter, in the year 2100. The higher the number, the higher the emissions.

The four main scenarios are RCP2.6, RCP4.5, RCP6.0 and RCP8.5.

RCP8.5 is the most pessimistic of the four scenarios. It’s a scenario in which no new policy measures are introduced to limit emissions. It has often been used as a business-as-usual scenario, but according to Moss et al. (2010), it’s not intended as a prediction or forecast of the most likely CO2 emissions path. 34) Still, that’s how it’s been interpreted by many.

Climate scientist Zeke Hausfather writes about RCP8.5 in a comment in Nature:

RCP8.5 was intended to explore an unlikely high-risk future. But it has been widely used by some experts, policymakers and the media as something else entirely: as a likely ‘business as usual’ outcome. A sizeable portion of the literature on climate impacts refers to RCP8.5 as business as usual, implying that it is probable in the absence of stringent climate mitigation. The media then often amplifies this message, sometimes without communicating the nuances.

Hausfather has also, along with Justin Ritchie, looked at the International Energy Agency (IEA)’s 2019 World Energy Outlook report. Ritchie writes that the IEA’s forecasts for CO2 emissions up to the year 2040 are far below the emissions assumed by RCP8.5:

IEA scenarios are a more realistic projection of the global energy system’s current ‘baseline’ trajectory; showing we are far from RCP8.5 [and] RCP6.0. World currently tracking between RCP4.5 [and] lower climate change scenarios – consistent with 1.5˚ to 2.5˚C [warming this century].

– Twitter-thread (Here’s an article that goes into more detail, written by Hausfather and Ritchie)

So they write that the warming this century will be 1.5 to 2.5 degrees. This assumes a higher climate sensitivity than the observation-based studies suggest.

In 2014, Lewis and Crok calculated that if the TCR (transient climate response) is 1.35 degrees, it will lead to a temperature rise of 2.1 degrees between 2012 and about 2090 in the pessimistic RCP8.5 scenario. In a more realistic scenario than RCP8.5, the temperature would rise less:

On the RCP6.0 scenario and using the observational TCR-based method, total warming in 2081–2100 would still be around the international target of 2°C, with a rise of 1.2°C from 2012 rather than the 2°C rise projected by the [climate models].

As we’ve seen, Nic Lewis later adjusted the estimate for TCR down to 1.20 degrees, which would mean slightly less warming even than this.
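To illustrate how a TCR value translates into a warming estimate, here is a rough back-of-the-envelope sketch in Python using the common linear scaling ΔT ≈ TCR × ΔF / F2xCO2. The forcing increases are my own illustrative assumptions (roughly in line with the scenario definitions), not numbers taken from Lewis and Crok:

```python
# Rough warming estimate from a TCR value: dT ≈ TCR * dF / F_2xCO2.
# The forcing numbers below are illustrative assumptions, not values
# taken from Lewis and Crok.
F_2XCO2 = 3.7  # approximate forcing from a CO2 doubling, W/m^2

def warming(tcr, delta_f):
    """Approximate transient warming for a given TCR and forcing increase."""
    return tcr * delta_f / F_2XCO2

scenarios = {
    "RCP8.5 (assumed ~5.5 W/m^2 forcing increase, 2012 to ~2090)": 5.5,
    "RCP6.0 (assumed ~3.3 W/m^2 forcing increase, 2012 to ~2090)": 3.3,
}

for tcr in (1.35, 1.20):
    for name, delta_f in scenarios.items():
        print(f"TCR = {tcr}: {name}: ~{warming(tcr, delta_f):.1f} C")
```

With these assumed forcings, TCR = 1.35 gives roughly 2°C for RCP8.5 and roughly 1.2°C for RCP6.0, consistent with the figures above, while TCR = 1.20 gives correspondingly less.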

And according to the IEA, it seems we’ll stay well below the emissions in RCP6.0 as well, maybe even below RCP4.5.

Accelerating technological progress

Even though I haven’t always been as familiar with the science as I am now, I’ve never been worried about global warming. One reason for this is that technological progress is accelerating — improving faster every year.

About 40 years ago, Ray Kurzweil discovered that computing power was improving in a surprisingly predictable way.

The picture above is from Kurzweil’s 2005 book The Singularity Is Near, which I read in 2010. It shows the maximum amount of computing power (measured in operations per second, typically achieved with supercomputers) you could get for $1000 (inflation-adjusted) in various years between 1900 and 2000. It also shows how Kurzweil envisions this price performance changing going forward. His forecast is simply that the historical exponential trend will continue.

The y-axis is logarithmic. A straight line thus means exponential growth, where a doubling of the price performance will happen at regular intervals. But the graph bends upwards, so Kurzweil expects the growth in computing power to be even faster than this. In practice, this means each doubling in computing power will happen at shorter and shorter time intervals.
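As a toy illustration of what constant versus shrinking doubling times mean, here is a tiny calculation with made-up growth rates:

```python
import math

# Made-up growth rates, purely to illustrate the point about doubling times.
def doubling_time(annual_growth_rate):
    """Years needed for a quantity to double at a fixed annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

print(f"{doubling_time(0.35):.1f} years per doubling at 35% growth per year")
print(f"{doubling_time(0.60):.1f} years per doubling at 60% growth per year")
# On a log axis, a fixed growth rate gives a straight line with a constant
# doubling time. If the growth rate itself rises over time (the upward bend),
# each doubling takes less time than the one before, as the two numbers show.
```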

So far, Kurzweil’s prediction has proven quite accurate.

With the extreme computing power we’ll have access to later this century, our technological capabilities will also become extreme. Artificial intelligence (AI), too, is progressing extremely fast. AI will help us develop many other technologies faster than we would otherwise be able to.

https://youtube.com/watch?v=-1N3j8pBCNE

An important technology Kurzweil believes we’ll develop by 2030 is atomically precise manufacturing (APM). With APM we’ll be able to create physical objects where the positioning of each individual atom can be precisely controlled (31:52):

There are roadmaps to get to [APM]. The more conservative ones have that emerging by the end of the 2020s. I’m quite confident that [in the] 2030s, we’ll be able to create atomically precise structures which will enable us to create these medical nano-robots which will enable us to overcome essentially every disease and aging process — not instantly, but it’ll be a very powerful new tool to apply to health and medicine.

APM can help us cure disease and heal unwanted effects of aging. APM is also an enabling factor for 3D-printing of complex physical objects. With the help of APM, we’ll eventually be able to 3D-print e.g. clothing, computers and houses very quickly.

One can object that Kurzweil may be too optimistic. But even though many of these technologies won’t be developed until much later in the century, our technological capabilities will still be quite extreme by the year 2100. I therefore see no reason to worry about impacts of global warming that far into the future.

Kurzweil also believes we’ll soon change the way we produce food. This may be relevant for CO2 emissions (24:35):

We’re going to go from horizontal agriculture, which now takes up 40% of our usable land, to vertical agriculture, where we grow food with no chemicals, recycling all the nutrients; in vitro muscle tissue for meat; hydroponic plants for fruits and vegetables.

If he’s right, food production will require both fewer resources and much less land than today.

Will renewable energy take over?

This is a difficult question to answer. Many people are very optimistic about this, and I’m mostly an optimist myself. But there are also many who are pessimistic, especially among climate skeptics. 35) There are seemingly good arguments on both sides.

In 2011, I accepted a bet with a friend about solar energy. Based on Kurzweil’s predictions, I had spoken warmly about solar energy, and how it might soon surpass fossil energy usage in the world. My friend was sure this wouldn’t happen any time soon and suggested a bet: If we get more energy from solar than from fossil fuels within 20 years (from 2011, so by 2031), I win the bet. Otherwise he wins.

It’s primarily photovoltaic (PV) solar energy (solar cells) that’s expected to have a large increase in usage. According to Kurzweil, the reason for this is that solar PV is an information technology. It can thus benefit from computing power’s accelerating progress. It can also benefit from advancements in nano-technology and materials science.

The video below is from 2016 and shows how cheap “unsubsidized” 36) solar energy has become in many parts of the world and how quickly the price has fallen:

If you liked the video, I can also recommend this newer (from 2019) and longer video with Ramez Naam. In the newer video, Naam shows us that the price of solar has continued to fall after 2016:

Advancements in nanotechnology and materials science will make solar cells more efficient and cheaper. Solar cell technology has a great potential as a cheap energy source since future solar cells can be extremely thin and lightweight (and bendable). They can thus be integrated on many types of surfaces.

This type of solar cell has already been made and is probably not far from commercialization. The main ingredient is a material called perovskite. (Perovskite is named after the Russian mineralogist Lev Perovski.) Silicon, which is used in traditional solar cells, needs to be heated to very high temperatures. Perovskite, on the other hand, can be made at room temperature. This is one reason perovskite solar cells are expected to be cheap compared with traditional solar cells. According to Sam Stanks:

[A] $100 million perovskite factory could do the job of a $1 billion silicon factory.

Perovskite can be made into a type of ink that is easily printed onto various surfaces, for example plastic.

Although dark perovskite solar cells are the most efficient, they can be made in different colors and with varying degrees of transparency. Solar cells that only capture infrared light can be completely transparent. They can thus be used on the sides of buildings (including in windows), and the building can still look very good. (Solar cells on buildings don’t have the same ecological consequences that solar power plants can have.)

Let’s take a look at how my bet is faring. According to ourworldindata.org, the amount of energy we received from solar energy in 2010 was 33.68 TWh. This corresponded to 0.024% of the world’s energy production. 8 years later, in 2018, we received 584.63 TWh from solar energy. This was still only 0.4% of the world’s total energy production. But in those 8 years, there had been a 17-fold increase in the amount of energy we get from solar, or a 15-fold increase in the share of solar energy.
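A quick check of that arithmetic, using the generation figures quoted above:

```python
import math

# Solar generation figures quoted above (TWh), from ourworldindata.org
solar_2010 = 33.68
solar_2018 = 584.63

fold_increase = solar_2018 / solar_2010   # ~17.4x over 8 years
doublings = math.log2(fold_increase)      # ~4.1 doublings
years_per_doubling = 8 / doublings        # ~1.9 years per doubling

print(f"{fold_increase:.1f}-fold increase, {doublings:.1f} doublings, "
      f"~{years_per_doubling:.1f} years per doubling")
```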

When I wrote about my bet in 2011, I noted, as Kurzweil had predicted, that solar energy’s share of the world’s energy consumption would double every two years. A doubling every two years becomes four doublings in 8 years, which is a 16-fold increase. We ended up getting a little more than a 15-fold increase. So far so good.

However, solar’s growth rate has slowed in recent years. From 2011 to 2019, OurWorldInData shows there was only an 11-fold increase in solar energy generation worldwide. The growth might pick up again when perovskite solar cells become available.

Ray Kurzweil is still optimistic about solar and renewable energy soon overtaking fossil fuels. In 2019, he wrote:

[Renewable energy is] doubling every 4 years and I’m confident it will meet 100% of our energy needs by 2030.

Of course, renewable’s share won’t actually reach 100% by 2030, and Kurzweil surely doesn’t really believe so himself either. But if we can only get more energy from solar than from fossil fuels by 2031, then I’ll be happy… In any case, it’ll be very interesting to see how solar and renewable energy fares in the coming years.

Certain forms of renewable energy sources, particularly solar and wind, also have some problems associated with them:

  • They require large areas of land and may damage or displace wildlife (wind turbines, for example, kill many birds and bats).
  • A high share of renewables in the energy mix is associated with high electricity prices.
  • Solar and wind are intermittent energy sources. They generally don’t produce energy continuously throughout the day and need to be backed up by other forms of energy in periods when they can’t produce electricity. The backup power typically comes from fossil fuels.
  • Although solar and wind don’t have direct greenhouse gas emissions, production and disposal of wind turbines and solar panels isn’t necessarily all that clean. Production of turbines and panels is energy intensive and requires extensive mining for materials. There are also challenges in connection with disposing of panels and turbines after end-of-life.

I won’t go into detail about these challenges, but they seem like valid concerns.

There are very smart people on both sides of the renewables debate, so how does one determine what’s right? It’s certainly not easy.

I think many of those who are most optimistic are looking more towards the future than the past. They see how solar energy prices have fallen and expect prices to continue to fall. They see better technologies on the horizon. They see potential solutions to difficult challenges.

On the other hand, the pessimists are looking more at renewable energy’s past performance, which may not be that great. They may be more aware than the optimists of the challenges that need to be solved. They don’t necessarily believe they can’t be solved, but may consider the likely cost to be higher than the benefit.

I’m generally an optimist and believe the challenges can be overcome. I’m far from certain, though, and it would be interesting to see a discussion about renewables in the comments.

Ramez Naam is also an optimist and focuses on the potential solutions:

[D]on’t bet on the forecasters, but on the innovators, they’re the ones actually making it happen[.]

Naam said this after explaining how fast Tesla had improved battery technology. In 2013, the US Energy Information Administration predicted that batteries would be about a third cheaper by the year 2048. Instead, they got 4 times cheaper in just 5 years!

Before Naam talked about the improvements in battery technology, he also showed how the amount of energy we get from solar has increased and compared it with forecasts from the International Energy Agency (IEA). It’s a bit funny to see, because while the usage of solar energy has soared, the IEA’s forecasts have been that solar energy usage will hardly increase at all:

Source: https://www.carbonbrief.org/profound-shifts-underway-in-energy-system-says-iea-world-energy-outlook (based on IEA World Energy Outlook 2019).

Hausfather and Ritchie have also discussed the above graph. Maybe not surprisingly, they write that the IEA has been criticized for being too conservative when it comes to renewable energy.

A few weeks ago, however, IEA published this year’s World Energy Outlook report. And this time they’re actually optimistic about solar:

Solar becomes the new king of electricity…

Renewables grow rapidly in all our scenarios, with solar at the centre of this new constellation of electricity generation technologies. […] With sharp cost reductions over the past decade, solar PV is consistently cheaper than new coal- or gas-fired power plants in most countries, and solar projects now offer some of the lowest cost electricity ever seen.

They’re probably talking about the price as experienced by the power utility companies. A low price for solar means more solar plants are likely to be built. Due to subsidies, consumers won’t necessarily benefit in the form of lower prices, though. I’ll briefly return to this topic very soon.

Even if IEA is more optimistic about solar energy this year than they’ve been before, they’re still a lot less optimistic than Ray Kurzweil about solar, even in their lowest-emission scenario. Considering IEA’s history of underestimating solar, it wouldn’t be surprising if their projections are still on the low side.

Another exciting alternative to fossil fuels is nuclear power — fission and fusion.

Fission-based nuclear power has gotten a bad reputation due to a few serious incidents (including Chernobyl and Fukushima), but is today a safe way to produce electricity. CO2 emissions from nuclear fission are very low, but nuclear power is relatively expensive. This is partly due to strict regulations and the fact that building a fission power plant takes a long time. New technological breakthroughs, such as modular nuclear power plants or Thorium reactors, could make fission-based nuclear power more profitable and common.

Fusion has traditionally always been 20 or 30 years away, but there has been good progress in fusion research lately. In addition to the gigantic ITER project, which is a collaboration between several countries, there are now several smaller, private companies working towards the commercialization of fusion energy. If we’re lucky, maybe we’ll start getting electricity from fusion power plants around the middle of the 2030s?

Should renewable energy take over? What about the poor?

So, I don’t think it’s unlikely that renewable energy will take over, but I didn’t explicitly say it was a good thing. Like the question of whether renewable energy will take over, the question of whether renewable energy should take over is difficult to answer. It depends…

In the long term, I think it’s inevitable that fossil fuels will, to a large degree, be replaced by alternative energy sources. The question then becomes: Should we take political action to force more renewables into the energy mix today?

If we need to subsidize renewable energy considerably for it to be used, this means we end up paying more for electricity — one way or another. Either directly through electricity fees or renewables surcharges, or indirectly through higher taxes in general. Germany and Denmark have both invested heavily in renewables, and their electricity prices (for household consumers) are the highest in Europe. They also have the highest share of taxes and levies in the overall electricity price.

Higher electricity prices naturally affect the poor the most. This means that subsidizing renewable energy can make life worse for poor people.

It’s also the poorest — especially poor people in developing countries — who will be most affected by climate change. So shouldn’t we reduce our CO2 emissions for their sake?

The problem is that a large reduction in CO2 emissions will only have a small impact on the global temperature several decades from now. This means that by reducing the world’s CO2 emissions, we are not helping the world’s poorest today, but we may be helping them a little bit — far into the future.

A much better strategy for assisting the poor is to help them become richer right here and now. The richer people are, the easier it is for them to withstand climate change and extreme weather, since the risk of dying in a natural disaster decreases with increasing wealth. So one way to help is to allow poor countries to use the cheapest form of energy, even if the result is higher CO2 emissions in the short term.

But renewable energy — especially solar energy — is getting cheaper at a fast pace, and if subsidies are needed today, they may not be for long, at least not in fairly sunny places. Since the sun doesn’t shine at night and we don’t yet have good enough storage solutions, solar energy cannot presently cover 100% of a country’s energy needs – you need hydropower, fossil fuels and/or nuclear power in addition.

My conclusion is that renewable energy should take over if and when it becomes cheaper than the alternatives. Preferably, one shouldn’t subsidize the usage of renewable energy, but one can subsidize alternative energy research, with the goal of making it cheaper than fossil fuels. If this happens, then every country would switch to renewables (as Bjørn Lomborg often points out). And then, switching to renewables wouldn’t hurt other areas of the economy and people’s lives.

More CO2 has made the world greener

The concentration of CO2 in the atmosphere has increased from about 280 parts per million (ppm) in pre-industrial times to about 420 ppm today. (420 ppm is 0.042%.) Today’s CO2 level is still lower than what’s optimal for the growth of trees and plants. This is why it’s common to raise the CO2 concentration in greenhouses.

The increased atmospheric CO2 concentration has already had a positive effect on vegetation growth. Satellite measurements began in 1979, and by 2014, after only 35 years of measurements, the amount of green vegetation on the planet had increased by 14%, according to this study. NASA has written an article about the study, in which they state:

The greening represents an increase in leaves on plants and trees equivalent in area to two times the continental United States.

The whole increase can’t be attributed to the effect of CO2, but most of it can — around 70%, according to the study.

Global warming policy measures

Bjørn Lomborg is founder and director of Copenhagen Consensus Center (CCC). CCC is a think tank which invites leading economists to prioritize potential solutions to global issues. They do so using cost-benefit analysis.

Bjørn Lomborg emphasizes the importance of not wasting huge amounts of money and resources on measures that don’t work or do more harm than good.

Lomborg and CCC take the IPCC as the authority on global warming. They (or at least Bjørn Lomborg) assume that the pessimistic RCP8.5 scenario is the most likely business-as-usual scenario. In this scenario, the temperature will, according to the IPCC, be almost 4 degrees higher in 2100 than today (4.1 degrees above the 1986-2005 average).

Source: Chapter 12 (from Working Group I) of the IPCC’s previous assessment report (FAQ 12.1, Figure 1)

According to the IPCC, the cost of this higher temperature will be roughly a 3% reduction of global GDP in 2100 37) (see from 32:20 in the video above). This means that if the world’s GDP would have risen by 450% by the year 2100 absent the negative consequences of climate change, it will instead rise by only about 436%. The lower rise in GDP is due to climate-related damages. As Bjørn Lomborg says, this cost is a problem, not the end of the world.

In 2009, CCC assessed several potential solutions to global warming. The Paris Agreement and various taxes on CO2 were considered very bad solutions with high cost and little benefit.

Earlier in the article, I said it’s better to subsidize research on renewable energy than to subsidize its usage. Lomborg also recommends this, but in combination with a low, slowly increasing, tax on CO2. The goal of subsidizing renewable energy research is to make renewable energy so cheap that it outcompetes fossil fuels.

If the temperature, in the business-as-usual scenario, rises less than IPCC expects for RCP8.5 (as seems likely), then even the low CO2 tax that Lomborg recommends may be too high. It thus risks doing more harm than good. When we further take into account the positive effects of CO2 on plant growth, it isn’t obvious that human CO2 emissions will have a net negative effect in the next few decades.

And in the longer term, our technology will be so advanced that any temperature rise can, if needed, be reversed. CCC judged two such potential solutions as very good: Marine Cloud Whitening and Stratospheric Aerosol Insertion. 38) Both solutions cause the atmosphere to reflect more sunlight, thus reducing temperature.

Carbon Storage R&D and Air Capture R&D were also considered as very good and good solutions, respectively. Air Capture is about removing CO2 from the atmosphere. This, according to K. Eric Drexler (by some called “the father of nanotechnology”), was too expensive in 2013, but should be affordable in a few decades:

[T]o have the 21st century have a planet that resembles what we’ve had in the previous human history will require taking the CO2 levels down, and that is an enormous project. One can calculate the energy required – it’s huge, the area of photovoltaics required to generate that energy is enormous, the costs are out of range of what can be handled by the world today.

But the prospects with a better means of making things, more efficient, more capable, are to be able to do a project of that scale, at low cost, taking molecular devices, removing molecules from the atmosphere. Photovoltaics produced at low cost to power those machines can draw down CO2 and fix the greenhouse gas problem in a moderate length of time once we pass the threshold of having those technologies […] We now have in hand tools for beginning to build with atomic precision, and we can see pathways […] to a truly transformative technology.

The disadvantage of removing CO2 from the atmosphere is that we lose the positive effect CO2 has on plant growth. Getting the atmosphere to reflect more sunlight — especially over the hottest areas — may be a better solution. Lomborg seems to agree:

If [you] want to protect yourself against runaway global warming of some sorts, the only way is to focus on geoengineering, and […] we should not be doing this now, partly because global warming is just not nearly enough of a problem, and also because we need to investigate a lot more what could be the bad impacts of doing geoengineering.

But we know that white clouds reflect more sunlight and hence cool the planet slightly. One way of making white clouds is by having a little more sea salt over the oceans stirred up. Remember, most clouds over the oceans get produced by stirred-up sea salt — basically wave-action putting sea salt up in the lower atmosphere, and those very tiny salt crystals act as nuclei for the clouds to condense around. The more nuclei there are, the whiter the cloud becomes, and so what we could do is simply put out a lot of ships that would basically [stir] up a little bit of seawater — an entirely natural process — and build more white clouds.

Estimates show that the total cost of avoiding all global warming for the 21st century would be in the order of $10 billion. […] This is probably somewhere between 3 and 4 orders of magnitude cheaper — typically, we talk about $10 to $100 trillion of trying to fix global warming. This could fix it for one thousandth or one ten thousandth of that cost. So, surely we should be looking into it, if, for no other reason, because a billionaire at some point in the next couple of decades could just say, “Hey, I’m just going to do this for the world,” and conceivably actually do it. And then, of course, we’d like to know if there’s a really bad thing that would happen from doing that. But this is what could actually avoid any sort of catastrophic outcomes[.]

– Bjorn Lomborg on the Costs and Benefits of Attacking Climate Change (8:35)

Bad climate solutions can potentially have very bad consequences.

Earlier this year, Lomborg published a new book: False Alarm – How Climate Change Panic Costs Us Trillions, Hurts the Poor, and Fails to Fix the Planet. In his book, Lomborg points to two of IPCC’s new scenarios, SSP1 and SSP5. SSP1 has the tagline Sustainability – Taking the Green Road. SSP5 has the tagline Fossil-fueled Development – Taking the Highway. Both scenarios are relatively optimistic about economic growth.

According to Riahi et al. (2017), to which Lomborg refers, world per capita GDP will, by the year 2100, have increased by 600% for SSP1. For SSP5, it will have increased by as much as 1040%.

Screenshot from Lomborg’s False Alarm. GDP per capita increases to $182,000 for SSP5 and to $106,000 for SSP1.

Lomborg then subtracts the costs associated with climate change for the two scenarios. For SSP1, the cost is 2.5% of GDP. For SSP5, the cost is 5.7% of GDP:

Screenshot from Lomborg’s False Alarm. Even when subtracting the costs associated with global warming, people are still much richer in SSP5 than SSP1 in 2100.

Even after subtracting the costs of climate change in the two scenarios, the average GDP per capita is still significantly higher in SSP5 (the scenario with no restrictions on the usage of fossil fuels) than in SSP1. In SSP5, GDP per capita grows to $172,000, versus $103,000 in SSP1. The fossil-fueled scenario thus leads to much greater prosperity for most people. This is especially true for people in developing countries, who then won’t be nearly as vulnerable as today when natural disasters happen. (And natural disasters and extreme weather will, of course, continue to occur regardless of the global temperature.)
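A quick check of the arithmetic, applying the climate-cost percentages to the GDP-per-capita figures quoted above:

```python
# GDP per capita in 2100 before climate costs (figures quoted above)
gdp_before = {"SSP1": 106_000, "SSP5": 182_000}

# Climate costs as a share of GDP (Lomborg's figures quoted above)
climate_cost = {"SSP1": 0.025, "SSP5": 0.057}

for scenario, gdp in gdp_before.items():
    after = gdp * (1 - climate_cost[scenario])
    print(f"{scenario}: ${after:,.0f} per capita after climate costs")
```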

In other words, it’s extremely important that we don’t unnecessarily limit poor countries’ opportunities for economic growth. And this is much more important than limiting our CO2 emissions. The argument applies to an even greater extent if climate sensitivity is lower than assumed by Lomborg and IPCC.

Norwegian media’s white lies and deliberate deception

NRK is a tax-funded Norwegian TV broadcaster. They have a program series called Folkeopplysningen. Folkeopplysningen means something like enlightenment of the people or public education. In 2018, they showed an episode they called “Klimakrisa” (the climate crisis) where they took sides with the alarmists. I’ve enjoyed watching many of Folkeopplysningen’s episodes. Not this one, though.

They referred both to John Cook’s 97% consensus study (which I’ve mentioned earlier) and a hockey stick temperature graph. The graph wasn’t Mann’s, but there were similarities. The reason for showing the hockey stick graph was no doubt to convince the viewer that the current rate of warming is unprecedented in the last 22,000 years. Since they didn’t use Mann’s graph, I’ll say a little about the graph they did use.

Mann’s hockey stick graph from 1999 goes back 1000 years. Folkeopplysningen’s graph, on the other hand, shows the Earth’s average temperature for the last 22,000 years. And while Mann’s graph applies to the Northern Hemisphere, Folkeopplysningen‘s graph applies to both hemispheres.

Here’s an attempt to translate some of what Andreas Wahl, the host of the program, said:

Wahl, speaking to camera: And we start 20,000 years before the common era (BCE). At that time it was more than 4 degrees colder, and as you can see, Norway and Oslo are covered by a thick layer of ice. And the temperature; it is… stable. A tiny rise here, which causes the ice to begin to melt, releasing more CO2, and the temperature rise accelerates. And it goes gently, millennium after millennium.

Wahl, speaking in the background (with the on-screen Wahl muted): The temperature in the past has had several local and short-term fluctuations that don’t appear here. But this graph shows the global trend over time.

Wahl, speaking to camera: Only here do humans start farming. A gentle rise, which stabilizes just above the red line here. Continues… The Pyramids, Stonehenge, the world’s first virgin birth. The Middle Ages; slightly higher temperatures in Europe, but not high enough to affect the global average temperature, which remains stable through the Middle Ages, the Renaissance, the Enlightenment. Until we land here, in the industrial revolution, where we dig up coal, drill wells, invent planes, cars and oil platforms, and grow to seven billion people. Then this happens. Here we are today, and if we continue much as we do now, we envision this scenario. That scenario is demanding, but realistic, and then we don’t reach the 2-degree target. This, on the other hand, is our dream scenario, but it’s extremely demanding. Wherever this ends, to say that these are natural, and not man-made, changes is rather absurd.

Screenshot from Folkeopplysningen’s episode about “the climate crisis” (11:33).

Folkeopplysningen has copied the graph (including the three scenarios for the future) from this XKCD page. The sources Folkeopplysningen provides are the same as those provided by XKCD. In addition, Folkeopplysningen states XKCD as a source:

  • Shakun et al. (2012)
  • Marcott et al. (2013)
  • Annan and Hargreaves (2013)
  • Hadcrut4
  • IPCC

The IPCC is probably included as a source for the three future temperature scenarios.

HadCRUT4 is recent temperature data (from 1850) used by the IPCC. Met Office Hadley Center and CRU collaborate to maintain the dataset, hence the first six letters of the name. The T stands for “Temperature” and the number 4 is the version number (4 is the latest version).

Annan and Hargreaves (2013) estimated Earth’s average temperature at the point when the ice was at its thickest during the last ice age. According to the study, the temperature was 4.0 ± 0.8 degrees lower at that time (19-23,000 years ago) than in pre-industrial times (with 95% certainty). This, they write, is warmer than earlier studies had concluded. (The earlier studies had used a more limited data set.)

Shakun et al. (2012)‘s main conclusion is that atmospheric CO2 increased before temperature at the end of the last ice age. The study also presents a temperature reconstruction that goes further back in time than Marcott et al. (2013).

Marcott et al. (2013) has reconstructed global average temperature for the last 11,300 years. According to the study, the reconstruction has a time resolution of a few hundred years. This means it can’t reveal short-term large temperature fluctuations. In a FAQ published on RealClimate two weeks after the study itself was published (commented on by McIntyre here), the authors write:

We showed that no temperature variability is preserved in our reconstruction at cycles shorter than 300 years, 50% is preserved at 1000-year time scales, and nearly all is preserved at 2000-year periods and longer.

So if there have been large but short-term temperature fluctuations during the last 11,300 years, they won’t show up in the graph. Jeremy Shakun, lead author of Shakun et al. (2012), was also co-author of the Marcott study. Shakun was interviewed about Marcott et al. (2013) and was asked about potential short-term temperature fluctuations. His answer:

No, to be fair, I don’t think we can say for sure there isn’t a little 50-year warm blip in there that was much warmer than today. That could be hiding in the data out there. We don’t have the resolution for that, because we have an ocean core data point every 200 years, plus that mud’s all mixed around, so when you really get down to it, you might never see that blip.
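As a toy illustration (with made-up numbers, not the actual proxy data), here is how a short warm blip can all but disappear when a record only preserves multi-century variability:

```python
import numpy as np

# Synthetic annual temperature anomaly series (made-up numbers): flat,
# except for a 50-year warm "blip" of +1 C somewhere in the middle.
temps = np.zeros(11_000)
temps[5_000:5_050] = 1.0

# Crude stand-in for a low-resolution reconstruction: a 300-year moving average.
window = 300
smoothed = np.convolve(temps, np.ones(window) / window, mode="same")

print(f"Peak of the blip in the annual series: {temps.max():.2f} C")
print(f"Peak after 300-year smoothing:         {smoothed.max():.2f} C")
# The +1 C blip survives only as a ~0.17 C bump in the smoothed series.
```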

The title of the XKCD page is A Timeline of Earth’s Average Temperature Since the Last Ice Age Glaciation. The subtitle is When People Say “The Climate Has Changed Before” These are the Kinds of Changes They’re Talking About.

It’s quite obvious that Andreas Wahl and Folkeopplysningen want to convey the same message as XKCD — they are trying to make viewers believe that the rate of warming since 1950 is unprecedented and that similarly rapid temperature changes haven’t occurred previously in at least 20,000+ years. But the sources stated by Folkeopplysningen don’t justify such a conclusion.

Folkeopplysningen is not unique among Norwegian media in the way they communicate about the climate. Folkeopplysningen didn’t lie outright, but the way they presented the science was imprecise. This is true both in regards to the consensus question and the hockey stick graph that they showed. The science is presented in such a way that viewers are left with an incorrect impression. Only very observant viewers can be expected to notice Folkeopplysningen‘s imprecise use of language.

On the alleged consensus, Andreas Wahl said (translated from Norwegian):

In 2013, a study came out, that went through all published climate science to determine what share of the science confirms man-made climate change. And the result? 97%.

This, again, is imprecise — what does it mean to confirm man-made climate change? Does it mean that scientists believe humans contribute at least somewhat to climate change, or does it mean we’re the main cause? The study referred to by Folkeopplysningen (Cook et al. (2013)) only showed that among the approximately 12,000 research papers reviewed, about one third of the papers expressed an opinion about the role of humans, and among those ca 4,000 papers, 97% expressed the opinion that humans contribute to climate change or global warming — either a little or a lot. But when Andreas Wahl says what he says, it’s hard for a neutral viewer to understand that this is what he means.

In my view, Folkeopplysningen‘s special way of communicating scientific research can’t be interpreted as anything other than deliberate deception.

Faktisk.no (faktisk means actually or in fact) is a Norwegian fact-checker which is also an official fact-checker for Facebook. Faktisk is owned by some of the biggest media companies in Norway, including NRK.

Faktisk did a fact-check related to the 97% consensus claim. The article’s title (translated to English) is Yes, the vast majority of the world’s climate scientists agree that humans affect the climate. That humans affect the climate is not a very interesting claim. Even most skeptics would agree to that.

The fact-check reviews an article from resett.no, a Norwegian alternative media company. (Resett means reset.) The title of the reviewed article (again, translated to English) is How the 97% Myth Arose — and the World’s Most Resilient Fake News. There are some misrepresentations in the article, and Faktisk addressed one of those and rated the article as false. I think it’s a good thing Resett is trying to refute the 97% myth, but they need to be more accurate in their criticism. Else, they’re no better than the other side.

The claim that was rated as false was (translated from Norwegian): The vast majority of the world’s climate scientists, 66%, don’t conclude whether humans influence the climate.

The claim relates to Cook’s 97% consensus study. I agree with Faktisk that the statement is false. The 66% figure is the share of studies reviewed that were assessed as not expressing an opinion on the human contribution to climate change or global warming. 66% of the studies isn’t necessarily the same as 66% of the scientists, and most of the scientists probably had an opinion, although it wasn’t expressed in the study’s abstract 39) (which was the only part of the papers that Cook et al. assessed).

Although I agree with Faktisk that Resett’s claim is false, I think it would be more interesting if they did a fact-check on another claim — the claim that 97% of the world’s climate scientists agree that human activity is the main cause of global warming. Because that’s something many (most?) Norwegians actually believe.

Faktisk actually also mentions the author of The Hockey Stick Illusion, Andrew Montford, in their fact-check:

[Kjell Erik Eilertsen refers to] a report from the British organization Global Warming Policy Foundation (GWPF), authored by Andrew Montford. It states that the Cook report’s methodology is one-sided, and that it only describes agreement on the obvious. [Translated from Norwegian]

The report they refer to is entitled Fraud, Bias and Public Relations — The 97% ‘consensus’ and its critics. I can definitely recommend reading it — Cook’s study also has an exciting story behind it, with lies, leaked data and leaked internal communication.

Faktisk doesn’t attempt to address Montford’s criticism of Cook – instead they just write:

Montford is an accountant with a bachelor’s degree in chemistry. He hasn’t published a single peer-reviewed scientific article. On the other hand, he is a well-known climate skeptic, blogger and author of the book “The Hockey Stick Illusion” about “the corrupt science”.

GWPF was established in 2009 as a charitable foundation by the climate-skeptical British politician Nigel Lawson. In 2015, they were investigated by the British Authority for Charitable Foundations. They concluded that the GWPF didn’t publish independent information suitable for teaching, but agitated politically. The organization was later split up. [Translated from Norwegian]

According to Wikipedia, the Global Warming Policy Foundation (GWPF) was split into two parts: the existing Global Warming Policy Foundation, which was to remain a charitable foundation, and the Global Warming Policy Forum, which was allowed to engage in political lobbying. (Faktisk wrote that the investigation happened in 2015, but according to Wikipedia, it happened in 2014 and possibly 2013.) My impression is that the quality of the reports published by GWPF is very high.

It’s only natural that Faktisk doesn’t want to argue that the Cook report only describes agreement on obvious things. This is because, to a large extent, Faktisk does the same thing themselves — recall their fact check title: “Yes, the vast majority of the world’s climate scientists agree that humans affect the climate.” However, they do alternate between saying that humans affect the climate and that human activity is the main cause of climate change. 40) 41)

If Faktisk’s journalists have actually read Montford’s relatively crushing report, it’s hard to understand how they can still defend Cook’s study. One way Faktisk defends the study is by re-stating the study’s rationale for why it is okay to set aside the nearly 8,000 studies in which no opinion was expressed about the human contribution to global warming.

I agree that human activity contributes to climate change, and it’s quite possible that more than 97% of climate scientists think so, too. But I also agree with Andrew Montford that this is a matter of course and not very interesting. If one wants to determine how many scientists believe human activity is the main cause of global warming since 1950, Cook et al. (2013) cannot provide the answer to that, and I strongly recommend the media to stop referring to that study. (Instead, see the study discussed in footnote 41.)

Improvement in the IPCC and among climate scientists?

The Climategate emails revealed some serious problems within the field of climate science prior to 2010, but is it as bad today? Fortunately, there seem to have been some improvements:

1. In a 2019 interview, Ross McKitrick said:

Now that whole constituency that wants certainty and wants catastrophe and wants the big scary message, it’s beginning to detach itself from the IPCC, because the message in the IPCC reports just isn’t keeping up with where the exaggeration folks want to go. 42)

2. Valérie Masson-Delmotte, co-chair of IPCC’s Working Group I since 2015, agrees with McIntyre’s critique of Mann’s studies. She told this to McIntyre in 2006, but she wanted McIntyre to keep her name secret. Recently (probably in 2019) McIntyre asked if he could use her name, and she allowed it. McIntyre said this in the SoundCloud interview that I embedded earlier (1:00:40).

3. You may recall that I’ve quoted Tom Wigley, the former CRU director, a few times. Wigley has recommended Michael Shellenberger’s new book Apocalypse Never — Why Environmental Alarmism Hurts Us All. Shellenberger is a climate activist who’s no longer an alarmist. He argues, among other things, that climate change doesn’t lead to worse natural disasters and that humans are not causing a sixth mass extinction. (The article I linked to here was first published on forbes.com, but was soon removed from the Forbes website 43)). Shellenberger is pro nuclear power, but skeptical about solar and wind. This is largely due to solar and wind’s ecological consequences in that they require large areas of land and kill birds. In the book, he describes how producers of fossil fuels and renewable energy oppose nuclear power. (Personally, I think he’s too one-sidedly negative about renewable energy.)

In conversation with Shellenberger, Wigley said that climate change does not threaten our civilization, that it’s wrong to exaggerate to get people’s attention, and:

All these young people have been misinformed. And partly it’s Greta Thunberg’s fault. Not deliberately. But she’s wrong.

In his recommendation of the book, Wigley wrote that Apocalypse Never “may be the most important book on the environment ever written”.

4. Judith Curry is a climate scientist who became a climate skeptic after Climategate. In a 2014 blog post, she listed some positive changes that had resulted from Climategate and the hockey stick graph revelations, including:

Transparency has improved substantially. Journals and funding agencies now expect data to be made publicly available, along with metadata. The code for most climate models is now publicly available. As far as I know, there are no outstanding [Freedom of Information Act] requests for data (other than possibly some of Mann’s [Hockey Stick] data and documentation). Climategate shed a public light on the lack of transparency in climate science, which was deemed intolerable by pretty much everyone (except for some people who ‘owned’ climate data sets).

Understanding, documenting and communicating uncertainty has continued to grow in importance, and is the focus of much more scholarly attention. With regards to the IPCC, I feel that [Working Group 2] in [Assessment Report 5] did a substantially better job with uncertainty and confidence levels (I was not impressed with what [Working Group 1] did).

Life for a scientist that is skeptical of ‘consensus’ climate science or critical of the IPCC is definitely easier post-Climategate.

As a result of Climategate, there is little tolerance for the editorial gatekeeping ways of trying to keep skeptical papers from being published.

(In 2019, however, she wrote that it’s become even more difficult to get skeptical papers published in the largest and most influential journals. 44))

Despite the progress, she argues that the IPCC should be shut down.

Conclusion (Summary for Policymakers)

There’s little reason to worry about climate change — at least as a result of CO2 emissions from human activity. CO2 emissions are on a much lower path than the media leads us to believe. There’s also a good chance that the climate sensitivity is low. If so, Earth’s average temperature may not rise more than about 1°C this century (as a result of human greenhouse gas emissions).

More CO2 causes trees and plants to grow more, so it isn’t entirely obvious that our CO2 emissions will have a net negative impact in the next few decades. And in the longer term, our technology will be so advanced that the negative effects can be easily managed or reversed.

It’s important that we don’t waste large amounts of money and resources on policy measures that do more harm than good. Regarding renewables, it’s better to subsidize renewables research than its usage. If the goal is zero emissions, unsubsidized alternative energy needs to become cheap enough to out-compete fossil fuels.

For poor countries, economic growth is more important than reduced CO2 emissions. The richer you are, the greater the likelihood of surviving extreme weather and natural disasters. The effect of reduced CO2 emissions, on the other hand, won’t be measurable for decades.

There has been a lot of dishonesty among some climate scientists, but, fortunately, some things have also improved after Climategate. The media, though, isn’t one of those things. I wish for the media to become more honest when writing about the climate, and I wish they would stop with their climate scares. Less scary news could mean better mental health for many young people who are today afraid of the future.


Footnotes:

1) All emails from the second round of Climategate can (or could previously) be downloaded from here. If the link is down, Wayback Machine has them archived.

The online book I linked to in the main text has a slightly sarcastic tone. A book with a more neutral tone — which may also be better — is “Climategate: The CRUtape Letters”, but I haven’t found an online version of it.

2) To determine how many studies were categorized by Cook et al. (2013) as expressing the view that humans have caused at least 50% of the warming since about 1950, we can start with the raw data from Cook et al. (2013). We can then write a small Python program, as I have done here, where I’ve also copied the raw data into datafile.txt:

The code above is just an image (since I couldn’t avoid the code automatically receiving focus when it was embedded). Clicking the image takes you to a page where the code is (or can be) executed.

The point is to count up all the studies categorized as endorsement level 1. These are the studies that, according to Cook et al. (2013), expressed the opinion that humans are the main cause of global warming.

The endorsement level for a study is the last number on lines containing information about a study (in datafile.txt). Line 19 in datafile.txt, which describes the format of the lines below, informs us about this:

Year,Title,Journal,Authors,Category,Endorsement

When you navigate to the page with the code from the image above, the code will be run automatically. You will then see the result on the right (if your screen is wide enough — otherwise, press the play button). It should give the following result:

{'1': 64, '2': 922, '3': 2910, '4': 7970, '5': 54, '6': 15, '7': 9}

This means 64 studies were categorized as endorsement level 1. The endorsement categories can also be found in the raw data file:

Endorsement
1,Explicitly endorses and quantifies AGW as 50+%
2,Explicitly endorses but does not quantify or minimise
3,Implicitly endorses AGW without minimising it
4,No Position
5,Implicitly minimizes/rejects AGW
6,Explicitly minimizes/rejects AGW but does not quantify
7,Explicitly minimizes/rejects AGW as less than 50%

AGW is short for Anthropogenic (that is, man-made) Global Warming.
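Since the code itself is only shown as an image, here is a minimal sketch of the same counting idea. The file name datafile.txt comes from the description above; skipping everything up to and including line 19 is my assumption about where the study lines begin:

```python
from collections import Counter

# Count how many studies fall into each endorsement level in the raw data
# from Cook et al. (2013). The endorsement level is the last comma-separated
# field on each study line (see the format line quoted above).
counts = Counter()
with open("datafile.txt", encoding="utf-8") as f:
    lines = f.readlines()

for line in lines[19:]:  # assumption: study lines start after line 19
    line = line.strip()
    if not line:
        continue
    endorsement = line.rsplit(",", 1)[-1]
    counts[endorsement] += 1

print(dict(counts))
# Expected output (per the result quoted above):
# {'1': 64, '2': 922, '3': 2910, '4': 7970, '5': 54, '6': 15, '7': 9}
```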

3) Without the use of principal components, a possible way to reconstruct the temperature is to take a relatively simple average of all the chronologies. But you then risk missing important patterns in the temperature data, such as the temperature hypothetically increasing a lot in many places in the 20th century. In the simple average, this won’t necessarily be visible because the temperature may have decreased in other places. Principal component analysis can also help to elicit such underlying patterns.
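Here is a small illustration of that point with made-up series: half of them share a late rise, the other half decline, so a plain average hides the pattern while the first principal component picks it up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_series = 200, 20
years = np.arange(n_years)

# Made-up "proxy" series: a shared late-period signal that appears with a
# positive sign in half the series and a negative sign in the other half,
# plus noise. (Purely illustrative, not real proxy data.)
signal = np.where(years > 150, (years - 150) / 50.0, 0.0)
signs = np.array([1] * 10 + [-1] * 10)
data = signs[None, :] * signal[:, None] + 0.3 * rng.standard_normal((n_years, n_series))

simple_mean = data.mean(axis=1)

# First principal component of the (column-centered) series
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = u[:, 0] * s[0]

print(f"Late-period amplitude, simple mean: {abs(simple_mean[-20:]).mean():.2f}")
print(f"Late-period amplitude, PC1:         {abs(pc1[-20:]).mean():.2f}")
# The shared pattern is nearly invisible in the simple mean but clear in PC1.
```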

4) The full title of MBH98 is Global-scale temperature patterns and climate forcing over the past six centuries. It’s not especially easy to read. You’re hereby warned.

5) The full title of MBH99 is Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties, and Limitations.

6) Another thing that seemed strange was that a simple average of all the proxy series in MBH98 showed that the temperature only varied around a constant level — there was no particular trend — no sharp temperature rise in the 20th century. The temperature reconstruction in MBH98 nevertheless showed that there had been a temperature rise in the 20th century, and that these high temperatures were unique for the last 600 years:

Screenshot from a presentation McIntyre and McKitrick gave to an expert panel at the National Academy of Sciences (NAS) in 2006. Top: A simple average of all the proxy series in MBH98. Bottom: The final temperature reconstruction in MBH98.

There could, in theory, be valid and good reasons for the difference, but the result was suspicious and gave motivation for further investigations.

7) Mann’s response to McIntyre and McKitrick was published in the form of two short articles by freelance journalist David Appell on his website (archived by Wayback Machine here and here). Mann’s response didn’t quite make sense. In addition to saying that the wrong data had been used (which was probably correct), Mann criticized McIntyre and McKitrick for requesting the data in Excel format (which they hadn’t done) and for not using all 159 proxies — despite the fact that Mann’s study stated that there were 112 proxies, not 159. It was an easy task for McIntyre and McKitrick to counter Mann’s criticism, and you can see their response here.

A few days later, Mann published a formal response, in which he explained that McIntyre and McKitrick’s calculation of the principal components was incorrect: they hadn’t used the stepwise procedure that Mann had used. However, this procedure wasn’t described in his study. In McIntyre and McKitrick’s subsequent response to Mann, they document that it still wasn’t possible to recreate Mann’s stepwise procedure, even with access to the new FTP website.

8) Mann had written:

Here is an email I sent [to McIntyre] a few weeks ago in response to an inquiry. It appears, by the way, that he has been trying to break into our machine[.] Obviously, this character is looking for any little thing he can get ahold of.

McIntyre briefly commented on Mann’s email in the comment section of a blog post on ClimateAudit.

9) Or down, but Mann’s program turned such graphs upside down, so that curves pointing downwards in the 20th century were interpreted as pointing upwards. From The Hockey Stick Illusion: “Meanwhile, any [series] with twentieth century downticks were given large negative weightings, effectively flipping them over and lining them up with upticks.”

10) The other area is Gaspé in south east Canada. MBH98 had used a controversial proxy from cedar trees from this area:

  • Like for the trees in California, the Gaspé tree ring widths had a distinct hockey stick shape with an uptick in the 20th century. Also, like the trees in California, the sharp increase in tree ring width didn’t match measured temperatures in the area.
  • From the year 1404 to 1447, the Gaspé chronology consisted of only 1-2 trees.
  • Mann had extrapolated the chronology back to the year 1400, so that it could be used for the earliest period in MBH98. The Gaspé chronology was the only proxy series in the study that had been extrapolated in this way.
  • The Gaspé chronology was included twice in MBH98, once as a stand-alone chronology, and once as part of the first principal component (PC1) for North America (NOAMER).

(See McIntyre and McKitrick’s 2005 Energy & Environment article.)

11) Ross McKitrick has written:

If the flawed bristlecone pine series are removed, the hockey stick disappears regardless of how the [principal components] are calculated and regardless of how many are included. The hockey stick shape is not global, it is a local phenomenon associated with eccentric proxies. Mann discovered this long ago and never reported it.

The reason McKitrick could say that Mann knew about it is that there was a folder with a very suspicious name on the FTP website where the data was located: BACKTO_1400-CENSORED. In that folder, Mann had made calculations without the trees from California. The trees in question are called strip-bark bristlecone pines.

12) McIntyre had a different theory, though. The cross-section of the trunks of these trees wasn’t a perfect circle — there were large variations in tree ring width depending on which side the tree rings were measured from. McIntyre believed the asymmetry could be due to branches that had broken in the 19th century, possibly due to heavy snow. The asymmetry may also have made it more challenging to measure the tree rings.

Linah Ababneh re-sampled the trees in 2006 and did not find the same large increase in tree ring widths in the 20th century as was found in the earlier sampling. McIntyre talks about this in a video here. Several theories attempting to explain the seemingly drastic increase in tree ring widths were discussed in McIntyre and McKitrick’s 2005 article in Energy & Environment.

13) The verification statistics tell us something about how much we can trust the temperature reconstruction. As I understand it, and more specifically for MBH98, the verification statistics say how well the reconstructed temperature graph matches thermometer data in the study’s verification period and earlier periods. The verification period was 1856-1901, and 1902-1980 was the calibration period.

The calibration period determines the relationship between ring widths and temperature. You then want to see how well this relationship holds up in a different period where you also have both temperature and proxy data. This other period is the verification period. For MBH98, the correlation between temperature and proxy data in the verification period was R2 = 0.2, which is a rather poor correlation.

I found it difficult to understand what R2 means for earlier time periods, but I think McIntyre explained it in a somewhat understandable way in a comment to a blog post about the meaning of R2:

[T]he 1400 step used a subset of proxies available in the later steps. It produced a time series for the period 1400-1980 that was scaled (calibrated) against instrumental (an early HadCRU) for 1901-1980. Statistics comparing the 1856-1901 segment of the reconstruction time series to 1856-1901 observations are the verification statistics.

My interpretation of this: The proxies that go all the way back to the period that you want to calculate R2 for are used to create a temperature graph that goes all the way from that time period until today. Proxies that don’t go back that far are ignored in the calculation. R2 for the desired time period is how well the curve matches temperature measurements in the verification period (1856-1901).

14) RE is short for Reduction of Error and was a widely used verification statistic among climate scientists. It wasn’t used much at all in other scientific fields. In MBH98, the term β was used instead of RE, but it’s the same thing.

In MBH98, Mann wrote that their results were statistically significant if RE was greater than 0.0. But using so-called Monte Carlo analysis where McIntyre and McKitrick tested Mann’s algorithm with random data series (red noise), they found that the threshold value for RE was 0.59. Mann’s graph didn’t satisfy this for the earliest time period (starting in 1400). Their results thus weren’t statistically significant for that period. See A Brief Retrospective on the Hockey Stick by Ross McKitrick, Section 2 — Our Critique of the Method.

(R2 was low for the entire period before 1750, indicating a lack of statistical significance for the entire period from 1400 to 1750.)

Mann had also used Monte Carlo analysis to find the threshold value for RE. However, due to the error in his principal components algorithm, the threshold value that MBH98 found was incorrect. This made it look as though their temperature reconstruction was better than it actually was.

15) R2 or r2 is much more commonly used than RE among scientists in general, and it was also commonly used by climate scientists. An advantage of R2 over RE is that you don’t need to do Monte Carlo analysis to be able to interpret the R2 value. See A Brief Retrospective on the Hockey Stick by Ross McKitrick, Section 2 — Our Critique of the Method.
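For the curious, here is a minimal sketch of how these two verification statistics are commonly computed, given a reconstruction and observations over the verification period. The RE benchmark here is the calibration-period mean of the observations, and the example numbers are placeholders, not MBH98 data:

```python
import numpy as np

def verification_stats(obs_verif, recon_verif, obs_calib_mean):
    """RE and R^2 for a reconstruction over a verification period.

    RE compares the reconstruction's errors with those of a naive
    'prediction' equal to the calibration-period mean; R^2 is the squared
    correlation between reconstruction and observations.
    """
    obs_verif = np.asarray(obs_verif, dtype=float)
    recon_verif = np.asarray(recon_verif, dtype=float)

    ss_res = np.sum((obs_verif - recon_verif) ** 2)
    ss_ref = np.sum((obs_verif - obs_calib_mean) ** 2)
    re = 1.0 - ss_res / ss_ref

    r = np.corrcoef(obs_verif, recon_verif)[0, 1]
    return re, r ** 2

# Placeholder example with made-up numbers (not MBH98 data):
obs = [0.1, -0.2, 0.0, 0.3, -0.1]
recon = [0.0, -0.1, 0.1, 0.2, -0.2]
print(verification_stats(obs, recon, obs_calib_mean=0.05))
```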

16) From the Energy & Environment article:

For steps prior to 1820, MBH98 did not report verification statistics other than the RE statistic. Unlike the above case, we cannot prove on the present record that Mann et al. had calculated these other statistics, but we consider it quite likely that these statistics were calculated and not reported. (In this case, we believe that diligent referees, even under the limited scope and mandate of journal peer review, should have requested the reporting of this information.)

17) If behind a paywall, read it here instead. The title is Global Warring In Climate Debate, The ‘Hockey Stick’ Leads to a Face-Off.

18) The temperature reconstruction from MBH99 is shown in a “spaghetti diagram” together with temperature reconstructions from several other studies in Chapter 6 (from Working Group I) on “palaeoclimate”.

19) Jon Stewart joked about the “hide the decline” email on The Daily Show. I couldn’t find the video, but here you can see Stephen McIntyre retell Stewart’s joke.

20) The video doesn’t show all of McIntyre’s slides, but the full set of slides is available here.

21) In 2013, McIntyre wrote that until then there had still not been much progress in terms of what was published in the peer-reviewed literature:

The IPCC assessment has also been compromised by gatekeeping by fellow-traveler journal editors, who have routinely rejected skeptic articles on the discrepancy between models and observations or pointing out the weaknesses of articles now relied upon by IPCC. Despite exposure of these practices in Climategate, little has changed. Had the skeptic articles been published (as they ought to have been), the resulting debate would have been more robust and IPCC would have had more to draw on [in] its present assessment dilemma.

22) Judith Curry has put it like this:

Simply, scientists are human and subject to biases. Further, they have personal and professional stakes in the outcomes of research – their professional reputation and funding is on the line. Assuming that individual scientists have a diversity of perspectives and different biases, then the checks and balances in the scientific process including peer review will eventually see through the biases of individual scientists. However, when biases become entrenched in the institutions that support science – the professional societies, scientific journals, universities and funding agencies – then that subfield of science may be led astray for decades and make little progress.

23) We saw that it was difficult for Stephen McIntyre to access the underlying data and methods from Mann’s studies. The same was true of many other studies. Skeptics eventually requested data under the Freedom of Information Act, but this, too, proved difficult. The book Climategate: The CRUtape Letters covers this in more detail, particularly in the context of skeptics requesting data for the weather stations used in the calculation of global average temperature, so that they could verify the calculations.

24) Brandon Shollenberger (not to be confused with Michael Shellenberger) has written several blog posts documenting that John Cook (and SkepticalScience) have been dishonest:

25) But the verification statistics for MBH98 were so poor that MBH98 couldn’t really say much about what the temperature was so far back in time.

26) McIntyre criticized climate scientists for using a tree ring chronology from the Yamal Peninsula in northern Russia. The chronology had a hockey stick shape, but was based on very little data (few trees) in the later years. McIntyre suggested merging data from Yamal with data from nearby areas (including Polar Urals). In 2013, CRU (and Briffa) began using a combined Yamal chronology that was very similar to McIntyre’s proposal. The new chronology didn’t have a hockey stick shape. (See also Andrew Montford’s post on The Yamal deception.)

27) SkepticalScience writes:

Scientific skepticism is healthy. Scientists should always challenge themselves to improve their understanding. Yet this isn’t what happens with climate change denial. Skeptics vigorously criticise any evidence that supports man-made global warming and yet embrace any argument, op-ed, blog or study that purports to refute global warming. This website gets skeptical about global warming skepticism. Do their arguments have any scientific basis? What does the peer reviewed scientific literature say?

They do have a point that many skeptics are too unskeptical of arguments from skeptics. So I recommend being a little skeptical of arguments from both sides — both from alarmists and from skeptics.

28) The IPCC believes that it’s very likely that the feedback effects of water vapor and albedo (how much sunlight is reflected from the Earth) are positive, i.e. that these effects contribute to a higher climate sensitivity. The feedback effect from clouds is more uncertain, but probably positive, according to the IPCC:

The water vapour/lapse rate, albedo and cloud feedbacks are the principal determinants of equilibrium climate sensitivity. All of these feedbacks are assessed to be positive, but with different levels of likelihood assigned ranging from likely to extremely likely. Therefore, there is high confidence that the net feedback is positive and the black body response of the climate to a forcing will therefore be amplified. Cloud feedbacks continue to be the largest uncertainty. The net feedback from water vapour and lapse rate changes together is extremely likely positive and approximately doubles the black body response [meaning that the climate sensitivity, ECS, doubles from about 1°C to about 2°C].
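
The bracketed note follows the standard feedback relation; as a rough illustration with the round numbers used in the note (a black-body response of about 1°C and a net feedback factor of about 0.5), not the IPCC's own calculation:

```latex
\Delta T = \frac{\Delta T_0}{1 - f},
\qquad \Delta T_0 \approx 1\,^{\circ}\mathrm{C},\; f \approx 0.5
\;\Rightarrow\; \Delta T \approx \frac{1\,^{\circ}\mathrm{C}}{1 - 0.5} = 2\,^{\circ}\mathrm{C}.
```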

29) IPCC agrees and has written:

For scenarios of increasing [radiative forcing], TCR is a more informative indicator of future climate change than ECS.

30) Zeke Hausfather, a climate scientist who works with climate models (among other things), explains:

“Sensitivity” is something that emerges from the physical and biogeochemical simulations within climate models; it is not something that is explicitly set by modellers.

31) The IPCC writes:

The Coupled Model Intercomparison Project Phase 5 (CMIP5) model spread in equilibrium climate sensitivity ranges from 2.1°C to 4.7°C[.]

32) Knutti et al. write:

Our overall assessment of ECS and TCR is broadly consistent with the IPCC’s, but concerns arise about estimates of ECS from the historical period that assume constant feedbacks, raising serious questions to what extent ECS values less than 2 °C are consistent with current physical understanding of climate feedbacks.

Many other climate scientists also disagree with Lewis that we should trust the observation-based studies more. An article on CarbonBrief comments on a new study (Sherwood et al. (2020)), which concludes that a low climate sensitivity is unlikely.

33) See the Appendix of the long version of Lewis and Crok’s climate sensitivity report.

34) In the paper they write:

The RCPs provide a starting point for new and wide-ranging research. However, it is important to recognize their uses and limits. They are neither forecasts nor policy recommendations, but were chosen to map a broad range of climate outcomes. The RCPs cannot be treated as a set with consistent internal logic. For example, RCP8.5 cannot be used as a no-climate-policy reference scenario for the other RCPs because RCP8.5’s socioeconomic, technology and biophysical assumptions differ from those of the other RCPs.

And Chapter 12 (from Working Group I) of IPCC’s previous assessment report states:

It has not, in general, been possible to assign likelihoods to individual forcing scenarios.

35) According to Judith Curry:

Skeptics generally support nuclear energy and natural gas, but are dubious of rapid expansion of wind and solar and biofuels.

36) It’s been argued that solar energy (and wind) receives unfair advantages in competition with other energy sources, and that these advantages amount to hidden subsidies for solar energy, which raise the price consumers have to pay for electricity.

37) “5-95% percentile range 0.5-8.2%”

That the IPCC expects climate change to have relatively low costs can also be seen in Chapter 10 (from Working Group II) of their previous assessment report:

For most economic sectors, the impact of climate change will be small relative to the impacts of other drivers (medium evidence, high agreement).

38) The question that the economists were tasked with: “If the global community wants to spend up to, say $250 billion per year over the next 10 years to diminish the adverse effects of climate change, and to do most good for the world, which solutions would yield the greatest net benefits?”

39) As part of Cook et al. (2013), a survey was also e-mailed to the authors of the reviewed studies. The survey asked the authors to categorize (or rate) their own study (or studies). Rated by the authors themselves, a higher proportion (64.5%) now expressed an opinion about the role of humans. And, as when rated by Cook et al., 97% of the studies that expressed an opinion agreed that humans contribute to climate change.

Unfortunately, Cook et al. (2013) didn’t reveal how many of the authors rated their own study as agreeing that human activity was the main cause of global warming. But Dana Nuccitelli, co-author of Cook et al. (2013) and contributor at SkepticalScience, gave the answer in the comments section of a blog post. Out of 2143 studies, 228 (about 11%) were rated by their authors as Category 1, i.e. that humans are the main cause:

The self-rating column looks like it has 228 Category 1 results. There were a further 18 where 1 author rated a paper as a 1, but a second author rated it as a 2.

40) In some places, Faktisk writes that scientists think humans merely contribute to climate change. In other places, they write that scientists think humans are the main cause. This can be confusing for the reader. In support of the “main cause” claim, Faktisk quoted NASA (though they only provided a Norwegian translation of NASA’s text):

Multiple studies published in peer-reviewed scientific journals show that 97 percent or more of actively publishing climate scientists agree: Climate-warming trends over the past century are extremely likely due to human activities.

The source provided by NASA is a meta-study from 2016 entitled Consensus on consensus: a synthesis of consensus estimates on human-caused global warming. The lead author is John Cook, so I’ll call it Cook et al. (2016). Cook, along with several of his co-authors, also authored other consensus studies that are discussed in this study. These other authors include Naomi Oreskes, Peter Doran and William Anderegg.

NASA’s claim is an exaggeration of what the meta-study says. Cook et al. (2016) concludes:

We have shown that the scientific consensus on [Anthropogenic Global Warming] is robust, with a range of 90%–100% depending on the exact question, timing and sampling methodology.

This is also not entirely accurate. The study defines the consensus as follows:

The consensus position is articulated by the Intergovernmental Panel on Climate Change (IPCC) statement that ‘human influence has been the dominant cause of the observed warming since the mid-20th century'[.]

Cook et al. (2016) also provided the definition of consensus for the various studies. And for several of the studies the consensus definition does not align well with IPCC’s definition. We have already seen that Cook et al. (2013) doesn’t use the IPCC definition. For Doran and Zimmerman (2009), the consensus definition is “Human activity is a significant contributing factor in changing mean global temperatures”. For Stenhouse et al. (2014) the consensus definition is “Humans are a contributing cause of global warming over the past 150 years”. For Carlton et al. (2015) the question was “Do you think human activity is a significant contributing factor in changing mean global temperatures?”

For each study (where possible), Cook et al. (2016) provided numbers both for all respondents/authors and for a sub-set: publishing climate scientists. Their analysis shows that among scientists with many climate science publications, the share of scientists who agree with the consensus definition is higher than for scientists with fewer climate science publications:

This may well — at least to some extent — be due to the difficulty for skeptics to get their papers published in scientific climate journals.

In the above image, “C13” is Cook et al. (2013), “DZ1”, “DZ2” and “DZ3” are Doran and Zimmerman (2009), “S141”, “S142” and “S143” are Stenhouse et al. (2014), and “C151” and “C152” are Carlton et al. (2015). These studies all had a weaker consensus definition than the IPCC’s. “A10200” is a subset of Anderegg et al. (2010) — the 200 authors with the greatest number of published climate-related papers (there were 1372 respondents in total). 66% of all 1372 respondents agreed with Anderegg’s consensus definition, but this result hasn’t been plotted in the image. Other studies have also been left out of the plot.

41) Faktisk writes (translated from Norwegian):

A 2016 survey among 1868 scientists showed that, among those having published at least ten peer-reviewed scientific papers, 90 percent agreed that human greenhouse gas emissions are the most important driver of climate change.

They link to a study by Verheggen and others, where John Cook is one of the co-authors. The study’s title is Scientists’ Views about Attribution of Global Warming. The study has a publication date of 22 July 2014, and according to the study, the survey was conducted in 2012 (not 2016).

If the goal is to determine the share of scientists who think humans are the main cause of global warming, then this is a much better study than Cook et al. (2013).

Verheggen et al. (2014) e-mailed a 35-question survey to 7555 scientists. 1868 responded. Question 1 was “What fraction of global warming since the mid-20th century can be attributed to human-induced increases in atmospheric [greenhouse gas] concentrations?” The answers were distributed as follows (blue dots show the percentages for all 1868 respondents):

66% of the 1868 respondents thought that humans contribute more than 50% to global warming through greenhouse gas emissions. But there were some who thought the question was difficult to answer. Of those who gave a quantitative answer, the result was 84%.

The first part of question 3 was similar to question 1, but the options were more qualitative than quantitative: “How would you characterize the contribution of the following factors to the reported global warming of ~0.8 °C since preindustrial times: [greenhouse gases], aerosols, land use, sun, internal variability, spurious warming?” So the first part of question 3 was about greenhouse gases, and the answers were distributed as follows (unfortunately the figure doesn’t show the percentages for all 1868 respondents, but since the four categories are roughly equal in size, a simple average will give a good approximation):

There were significantly fewer who were undecided on this question than on question 1.

Those who answered “Strong warming” are naturally considered as supporting the consensus definition that human greenhouse gas emissions have contributed to most of the temperature increase since 1950. According to the study, some of those who answered “Moderate warming” should also be considered as supporting the consensus definition — those who had not chosen “Strong warming” on any of the other sub-questions in question 3. This doesn’t seem unreasonable, but at the same time, it isn’t quite obvious that it’s okay, either.

In any event, in this way, they found that 83% of the 1868 respondents to the survey supported the consensus definition, or 86% of those who were not undetermined.

The number used by faktisk.no is 90%. Here’s how it appears in Verheggen et al.:

Excluding undetermined answers, 90% of respondents, with more than 10 self-declared climate-related peer-reviewed publications, agreed with dominant anthropogenic causation for recent global warming. This amounts to just under half of all respondents.

Again, I’m a little skeptical that there’s so much focus on those scientists who have published the greatest number of peer-reviewed studies. This is because of how difficult it was for skeptics to get published in the peer-reviewed literature. A positive thing about this study is that they’ve included some skeptics who have only published things outside of peer-reviewed journals (“gray literature”).

Although Verheggen et al. (2014) in many ways appears to be a good study, they’ve misrepresented the results of other studies. They write:

However, Oreskes, Anderegg et al., and Cook et al. reported a 97% agreement about human-induced warming, from the peer-reviewed literature and their sample of actively publishing climate scientists […]. Literature surveys, generally, find a stronger consensus than opinion surveys. This is related to the stronger consensus among often-published — and arguably the most expert — climate scientists.

However, as we’ve seen, Cook et al. (2013) has a completely different consensus definition than Verheggen et al. (2014). While the consensus definition in Verheggen et al. (2014) focused on human activity as the main cause, the consensus definition was only that humans contribute to global warming in Cook et al. (2013).

Later, though, they write:

Different surveys typically use slightly different criteria to determine their survey sample and to define the consensus position, hampering a direct comparison. It is possible that our definition of “agreement” sets a higher standard than, for example […] Doran and Kendall-Zimmermann’s survey question about whether human activity is “a significant contributing factor”.

It isn’t just possible, it’s absolutely certain.

42) However, IPCC’s Summaries for Policymakers are often more alarming than the reports they summarize. Richard Tol, a Dutch economist who’s contributed extensively to IPCC’s assessment reports, said the following about the chapter he was convening lead author for in IPCC’s previous assessment report:

That [many of the more dramatic impacts of climate change are really symptoms of mismanagement and poverty and can be controlled if we had better governance and more development] was actually the key message of the first draft of the Summary for Policymakers. Later drafts … and the problem of course is that the IPCC is partly a scientific organisation and partly a political organisation and as a political organisation, its job is to justify greenhouse gas emission reduction. And this message does not justify greenhouse gas emission reduction. So that message was shifted from what I think is a relatively accurate assessment of recent developments in literature, to the traditional doom and gloom, the Four Horsemen of the Apocalypse and they were all there in the headlines; Pestilence, Death, Famine and War were all there. And the IPCC shifted its Summary for Policymakers towards the traditional doom and gloom [message].

43) Climate Feedback, a fact-checker for Facebook, has criticized Shellenberger’s article. Shellenberger has responded, and Mallen Baker has commented on the conflict on YouTube.

44) Curry writes:

The gate-keeping by elite journals has gotten worse [in my opinion], although the profusion of new journals makes it possible for anyone to get pretty much anything published somewhere.

She also mentions two other things that have gotten worse in recent years:

– Politically correct and ‘woke’ universities have become hostile places for climate scientists that are not sufficiently ‘politically correct’

– Professional societies have damaged their integrity by publishing policy statements advocating emissions reductions and marginalizing research that is not consistent with the ‘party line’

The Hockey Stick Illusion, Climategate, the media's deliberate deception, and why there is no reason for climate concern

– There’s also an English version of this article. Read it here.

After reading this post, you will hopefully have a better understanding of why many people believe the media exaggerate the dangers of human CO2 emissions.

As you will soon see, there has been a lot of dishonesty in the climate debate, and not just in the media, but also among some high-profile scientists.

I'm going to tell you a bit about the temperature graph above. It may sound boring, but it's a rather incredible story, and I think you'll be a little shocked. The studies the graph is based on were used as references for saying that 1998 was probably the warmest year of the past 1000 years and that the 1990s were the warmest decade. The graph has been called the hockey stick graph because of its shape, and it was used six times in the Intergovernmental Panel on Climate Change's (IPCC, part of the UN) third assessment report in 2001 and was also included in the fourth assessment report from 2007. As we shall see, the studies the graph is based on, with Michael Mann as lead author, are very weak studies with many errors and shortcomings.

In this post I will also quote a number of private emails sent to and/or from employees at the Climatic Research Unit (CRU) at the University of East Anglia in Norwich, UK. With only about 15 employees it is not a large institution, but it is considered one of the leading institutions working on climate change.

The reason I've been able to quote these private emails is that on November 19, 2009, about 1000 emails from CRU were leaked. The episode has been called Climategate. The most sensational emails from the first round of Climategate can be read here, along with comments that put them into an understandable context. In 2011, another 5000 leaked CRU emails were published. 1)

I will use the words alarmist and skeptic to describe two common views that some scientists and others hold about CO2 and climate policy. Although the words aren't necessarily the most neutral ones, they are widely used and relatively easy to understand intuitively. I define them as follows:

  • Alarmist: Believes CO2 will lead to dangerous warming of the Earth and wants political action. (Feel free to suggest a better word than alarmist.)
  • Skeptic: Believes alarmists exaggerate the dangers of CO2 and does not want political action.

As you've probably realized, I consider myself a skeptic. I may therefore have a tendency to trust skeptical arguments too much and arguments from alarmists too little. If you find factual errors, let me know so I can correct them.

I focus mostly on CO2 as a contributor to global warming because CO2 gets the most attention in the media, and because the IPCC considers CO2 to be by far the most important factor:

Carbon dioxide (CO2) represents about 80 to 90% of the total anthropogenic forcing in all RCP scenarios through the 21st century.


Consensus?

We constantly hear that 97% of climate scientists believe climate change is man-made. But what does that actually mean? Does it mean that almost all climate scientists believe humans are:
a) the main cause of climate change? Or
b) a contributor to climate change, but not necessarily the main cause?

I gave a talk about the 97% consensus at work, and before I began, I asked exactly this question. All of the listeners (about 25 people, all engineers) answered option a, that climate scientists believe humans are the main cause of climate change. Perhaps you have thought the same yourself?

Although the media usually don't say this explicitly, it's easy to get that impression. But it's wrong. It is true that the IPCC believes humans are the main cause. In their most recent assessment report (from 2013), they write in the Summary for Policymakers:

It is extremely likely [95-100% certain] that human influence has been the dominant cause of the observed warming since the mid-20th century.

But there don't seem to be any good studies confirming that the agreement on this among climate scientists is as high as 97%. Most often, a meta-study from 2013, written by John Cook and others, is used as the reference. The study is titled “Quantifying the consensus on anthropogenic global warming in the scientific literature”, but is usually referred to simply as “Cook et al. (2013)”.

In Cook's meta-study, the abstracts of nearly 12,000 research papers were reviewed. These papers were all published between 1991 and 2011 and contained the text “global warming” or “global climate change”. The papers were categorized according to the authors' expressed view on whether, and to what extent, humans contribute to global warming or climate change, for example whether humans contribute, don't contribute, or contribute at least 50% (that is, more than all other factors combined). There was also a category for papers that didn't explicitly say that humans contribute, but where this was implicit.

Cook et al. (2013) found that about two-thirds of the papers expressed no opinion on the human contribution to climate change, but of the remaining third, about 97% held that humans contribute to global warming or climate change. However, only 1.6% of this third clearly expressed that humans were the main cause. The fact that so few expressed that humans were the main cause was admittedly not mentioned in Cook et al. (2013), but the raw data is included as an attachment under Supplementary data, so it's relatively easy to check. 2)

Cook misrepresented the results of his own study in another paper from the same year, on which he was a co-author (Bedford and Cook 2013). The paper states:

Of the 4,014 abstracts that expressed a position on the issue of human-induced climate change, Cook et al. (2013) found that over 97 % endorsed the view that the Earth is warming up and human emissions of greenhouse gases are the main cause.

Note that it says main cause. But this does not agree with the raw data from Cook et al. (2013).

Even though the agreement among scientists or climate scientists that human activity is the main cause of climate change isn't as high as 97%, a clear majority probably does believe this, something I'll return to in two later footnotes (footnotes 40 and 41).

Here's a video that, in addition to criticizing Cook et al. (2013), also criticizes other consensus studies:

Background

Earlier this year I received a Facebook message from Martha Ball, Tim Ball's wife (probably after she had seen a post I've written about Cook's consensus study). Tim Ball is a well-known climate skeptic and has been sued by Michael Mann for defamation. Mann is, as mentioned in the introduction, the lead author of the hockey stick graph studies. Mann has received a lot of criticism for the studies, and as we shall soon see, it is highly justified.

The background for the lawsuit against Tim Ball was that Ball had said

Michael Mann at Penn State should be in the state pen [prison], not Penn State [Pennsylvania State University].

After more than eight years in the Canadian court system, Tim Ball won the lawsuit in 2019, on the grounds that Mann didn't seem interested in moving it forward. Mann was ordered to pay costs. According to Martha Ball, the amount is $100,000. Mann has, admittedly, written that he was not ordered to pay costs, but the transcript from the court hearing suggests that he does have to pay:

[19] MR. SCHERR [counsel for Tim Ball]: I would, of course, ask for costs for the defendant, given the dismissal of the action.

[20] MR. MCCONCHIE [counsel for Michael Mann]: Costs follow the event. I have no quarrel with that.

[21] THE COURT: All right. I agree. The costs will follow the event, so the defendant will have his costs of the application and also the costs of the action, since the action is dismissed.

In 2018, the Norwegian broadcaster NRK's program Folkeopplysningen had a climate episode in which they showed something resembling Mann's hockey stick graph. It showed how global temperature had changed since 20,000 years BC, when it may have been more than 4 degrees colder than today. The graph showed no rapid changes until the end of the last century, when it suddenly went straight up. I didn't quite understand how they could know the temperature so precisely so far back in time, and after watching the program I had wanted to learn more about the topic, but it had never made it high enough on my list of priorities. Now that I had received a message from Martha Ball, I thought it could be a good opportunity to learn a bit about it. So I asked whether she knew of good resources for learning about the hockey stick graph. She recommended that I Google McIntyre and McKitrick. So I did.

Among other things, I found the book The Hockey Stick Illusion – Climategate and the Corruption of Science (pdf), written by Andrew Montford. The book is a detailed, exciting and shocking account of how the Canadian mining consultant Stephen McIntyre tries to reproduce and verify Michael Mann's hockey stick graph from Mann's published studies.

I will soon give a summary of the hockey stick story, based on the following interview from 2019 (I will call it the SoundCloud interview), in which McIntyre tells his version of the hockey stick story:

I definitely recommend reading or listening to more detailed retellings than mine. According to Ross McKitrick, McIntyre's co-author, The Hockey Stick Illusion is the best place to start to learn about the hockey stick graph and its history. Matt Ridley, who wrote The Rational Optimist, has also recommended the book and called it “one of the best science books in years“. I can also recommend McIntyre and McKitrick's presentation to a National Academy of Sciences expert panel in 2006 (also recommended by McKitrick).

There was also a very good article, Kyoto Protocol Based on Flawed Statistics, in the Dutch magazine Natuurwetenschap & Techniek in 2005. The article covers much of the same ground as I do here, but in more detail. The author is Marcel Crok.

Briefly on tree rings and terminology

Before I begin the story, I will briefly explain some terms that are important to it, so that it will hopefully be easier to follow.

We only have direct temperature measurements with thermometers from about 1850. To figure out roughly how warm it was on Earth before that, one therefore has to rely on one or more proxies, that is, more indirect temperature indicators. The most used proxy in Mann's hockey stick studies (and the one most relevant to this story) is tree rings, or annual growth rings. Under certain conditions, and for certain types of trees, a tree will grow more in a warm year than in a cold year. From the size of the tree rings it is therefore possible to say something about the temperature at the place where the tree grew.

How the tree rings of a single tree have changed over time (from year to year) is called a tree ring series.

Since things other than temperature can affect how much a tree grows in a given year, one doesn't just sample one tree at each geographical site, but quite a few, preferably at least 10, usually many more. The tree ring results for all of these trees together are called a chronology.

Mann had used a relatively advanced statistical method to reconstruct the Earth's temperature (for the Northern Hemisphere) from the proxies, and his studies were the first temperature reconstruction studies to use this method, which is called principal components analysis (or PC analysis, or just PCA). Principal components analysis in itself wasn't new, only its use in this context.

If you have a lot of proxy data (for example many tree ring chronologies) within a relatively small geographical area, you can merge the proxy data in the area using principal components analysis, in order (among other things) to get a more even geographical distribution of proxies. 3)

An important early step in the calculation of principal components is to center all the tree ring widths in a chronology around the value 0. This can be done by subtracting, from each individual width value, the mean of all the tree ring widths in the chronology. As we shall see, Mann hadn't done it exactly this way...
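
As a rough illustration of those two steps (full-period centering, then extracting the leading principal component), here is a minimal sketch with a synthetic proxy matrix; it is not Mann's data or code, just conventional PCA via SVD:

```python
# Minimal sketch of conventional PCA on a proxy matrix (synthetic numbers):
# center each chronology on its full-period mean, then take the leading
# principal component via SVD.
import numpy as np

rng = np.random.default_rng(3)
n_years, n_chronologies = 581, 20                 # e.g. 1400-1980, 20 chronologies
X = rng.normal(size=(n_years, n_chronologies))    # stand-in proxy matrix (rows = years)

X_centered = X - X.mean(axis=0)                   # subtract each chronology's full-period mean
U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)
pc1 = U[:, 0] * s[0]      # leading principal component: one value per year
weights = Vt[0]           # how strongly each chronology contributes to PC1
print(pc1.shape, weights.shape)
```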

The story of the hockey stick graph

(I have put some more detailed information for especially interested readers in footnotes. Click on the footnote number to navigate to the footnote, and click the number again to navigate back.)

For McIntyre, it all began when he received a brochure in the mail (1:13 in the SoundCloud interview):

Well, I got interested in climate through sheer bad luck, I think. In 2002, the Canadian government was promoting the Kyoto treaty, and as part of their promotion they sent a brochure to every household in Canada, announcing that 1998 was the warmest year in a thousand years, and the 1990s was the warmest decade, and I wondered, in the most casual possible way, how they knew that.

McIntyre found out that the claim that the 1990s were the warmest decade of the past 1000 years came from two studies, published in 1998 and 1999.

The 1998 study had attempted to reconstruct the average temperature of the Northern Hemisphere for the past 600 years. The 1999 study extended the time interval by 400 years and thus showed the calculated average temperature for the past 1000 years (also for the Northern Hemisphere).

The studies are often just called MBH98 4) and MBH99 5), after the first letters of the authors' last names and the year the study was published. The authors of both studies are Michael Mann, Raymond Bradley and Malcolm Hughes. Mann is the lead author. The story, as I present it here, is mostly about MBH98.

McIntyre sent an email to Mann asking where he could find the data MBH98 was based on. Mann had forgotten where the data was, but replied that his assistant (Scott Rutherford) would find it. According to Rutherford, the data apparently wasn't all in one place, but Rutherford would put it together for McIntyre, and a couple of weeks later McIntyre got access to the data on an FTP site. McIntyre thought it was strange that no one had asked for the data before. Had no one checked such an influential study?

But now that they had gone to the trouble of finding the data for his sake, McIntyre felt obliged to look into the study properly. He didn't have any particular goal in verifying/auditing Mann's studies at the start; he saw it more as a kind of big crossword puzzle to be solved.

There were several things in the data McIntyre had received that didn't make sense, 6) and it was difficult to reproduce Mann's results, since the exact procedure wasn't described in Mann's studies. McIntyre therefore sent a new email to Mann and asked whether it was the correct data. Mann wouldn't answer that and made it clear that he didn't want to be contacted again: “Owing to numerous demands on my time, I will not be able to respond to further inquiries.”

McIntyre then published his first paper together with Ross McKitrick, reporting the problems he had found. The paper is titled Corrections to the Mann et. al. (1998) Proxy Data Base and Northern Hemispheric Average Temperature Series, but is usually referred to simply as MM03, again after the first letters of the authors' last names and the year of publication. McIntyre and McKitrick summarize the most important errors they found in Mann's study in the abstract of MM03:

The data set of proxies of past climate used in [MBH98] for the estimation of temperatures from 1400 to 1980 contains collation errors, unjustifiable truncation or extrapolation of source data, obsolete data, geographical location errors, incorrect calculation of principal components and other quality control defects.

Without these errors, the graph's hockey stick shape disappeared:

The particular “hockey stick” shape […] is primarily an artefact of poor data handling, obsolete data and incorrect calculation of principal components.

Ross McKitrick, McIntyre's co-author, is (and was) a professor of economics; he had modeled CO2 taxes in his doctoral thesis, and McIntyre had seen him talk about the Kyoto Protocol on TV. In addition, both lived near Toronto, Canada. So McIntyre contacted McKitrick, both to have someone who could check what he had done and to have someone to publish with, since McKitrick also had experience publishing scientific papers.

Mann's response to McIntyre and McKitrick's first paper was that they had used the wrong dataset, and that the correct data had been available the whole time on the website McIntyre had been directed to. But it turned out that the data was on a different FTP site, one McIntyre hadn't been given access to yet, but now he was.

Mann also said that McIntyre and McKitrick should have contacted him when they found problems with the study (which McIntyre had in fact done), and that Rutherford had made some mistakes when he put the data together. 7)

It's probably true that McIntyre had been given the wrong data. From the book The Hockey Stick Illusion:

It looked very much as if the version of pcproxy.txt that Rutherford had sent [to McIntyre] had been originally prepared for Rutherford’s own paper. In preparing these figures, he seemed to have introduced errors into the database – the same errors that had alerted McIntyre to the possibility that there were serious problems with MBH98.

So some of the errors McIntyre found at the start may have been introduced by Rutherford (who, again, was Mann's assistant), which means that MBH98 probably didn't contain all the errors McIntyre pointed out in MM03. McIntyre couldn't have known this, though: it was Rutherford who had made the mistake, and Mann hadn't checked when McIntyre asked him whether the data was correct.

McIntyre and McKitrick now realized that if they were going to reproduce Mann's results, they would need access to the computer code Mann had used. McIntyre wrote a new email to Mann in which he, among other things, asked for access to the code. Mann was once again uncooperative:

To reiterate one last time, the original data that you requested before and now request again are all on the indicated FTP site, in the indicated directories, and have been there since at least 2002. I therefore trust you should have no problem acquiring the data you now seek.

In a follow-up email, Mann wrote for the second time that he wasn't interested in answering any more emails from McIntyre.

After this, many new directories and files appeared on the new FTP site. McIntyre had previously clicked around this site so much that Mann, in an email to Phil Jones, then head of CRU (recall that CRU is the Climatic Research Unit at the University of East Anglia in Norwich), had all but accused McIntyre of hacking 8), so it was obvious that the new directories and files hadn't been available before; at least, it hadn't been possible to find them in any simple way. McIntyre's theory was that a robots.txt file had kept these directories and files from being indexed by search engines, and since there were no links to them, there was no way to find them unless you already knew the exact address.

Among the files that appeared was a file containing code for the principal components calculation. It then turned out that Mann had used a somewhat unconventional way of calculating principal components, despite MBH98 stating that they had used “conventional principal components analysis“. Instead of using a suitable programming language (for example R) or an existing code library for the calculation, Mann had written the algorithm himself (in Fortran).

In standard principal components analysis, you center the data (as mentioned earlier) before starting the rest of the calculation. This is done by subtracting the mean of the whole series from each value, so that the mean of the new series becomes zero. This can be illustrated as follows:

Unrealistic example with two tree ring series. Left: original data. Right: both series are centered, i.e. shifted down, so that they are as much above 0 as below 0. The image is composed of two screenshots from the book The Hockey Stick Illusion. (The x-axis is time, the y-axis is tree ring width. The example is unrealistic because tree ring width varies much more from year to year than this.)

But Mann had subtracted a different mean. Instead of subtracting the mean for the whole period, he had only subtracted the mean for the 20th century. This made it very easy to end up with a temperature curve that rose sharply in the 20th century: 9)

Unrealistic example with two tree ring series. One has an “uptick” in the 1800s, the other in the 1900s. Left: original data. Middle: correctly centered. Right: incorrectly centered with Mann's algorithm. When the series are correctly centered, the flat part of each curve is at the same level (close to zero), whereas when they are centered with Mann's algorithm, the flat part ends up too far down for curves with an uptick in the 1900s. Curves that deviate a lot from 0 (either above or below) get a large weight (importance) in the final temperature reconstruction, so Mann's flawed algorithm very easily gives the result a hockey stick shape, even with random data (random walk/red noise). The images are three screenshots from the book The Hockey Stick Illusion.

The result was that the trees in one area of California, which had a tree ring curve that rose sharply in the 20th century, were given almost 400 times more weight than the trees in another area of the US that didn't show the same change in ring width. In fact, removing the trees from this one area (and possibly one more 10)) from Mann's data resulted in a temperature reconstruction in which the 1400s had higher temperatures than the late 1900s, even when using Mann's flawed procedure! 11) And with that, there would no longer have been any basis for saying that 1998 was the warmest year in 1000 years.
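
A minimal numerical sketch of why this happens, using two synthetic series (not MBH98's data or code): centering on the mean of the last sub-period only leaves a series with a late uptick far from zero over most of its length, which greatly inflates its weight relative to a trendless series.

```python
# Minimal sketch with synthetic series: compare full-period centering with
# centering on the last 80 years only ("20th-century" centering). A series'
# sum of squares after centering roughly determines how strongly it
# influences the leading principal component.
import numpy as np

rng = np.random.default_rng(2)
n_years = 581                                     # e.g. 1400-1980
flat = rng.normal(1.0, 0.05, n_years)             # trendless series
uptick = flat + np.concatenate([np.zeros(n_years - 80), np.linspace(0, 0.5, 80)])

def centered(series, mean_slice=slice(None)):
    return series - series[mean_slice].mean()     # subtract mean of the chosen sub-period

for name, sl in [("full-period centering", slice(None)),
                 ("last-80-years-only centering", slice(-80, None))]:
    f, u = centered(flat, sl), centered(uptick, sl)
    print(f"{name}: sum of squares flat = {np.sum(f**2):.1f}, uptick = {np.sum(u**2):.1f}")
```

Under the short centering, the trendless series stays close to zero while the uptick series does not, so the uptick series dominates the leading principal component; this is the effect that, as described above, can produce hockey stick shapes even from red noise.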

Dashed line: the hockey stick graph from MBH98. Solid line: how the graph would have looked with correctly implemented principal components analysis and corrected data (including removal of bristlecone pines and Gaspé cedars). Screenshot from Ross McKitrick's 2005 article “What is the ‘Hockey Stick’ Debate About?”. McIntyre and McKitrick do not argue that their graph gives a correct picture of the temperature over the past 600 years, only that Mann's hockey stick graph does not follow from the proxy data MBH98 chose to use. Moreover, the uncertainty in temperature before 1750 is very large.

It also turned out that the reason these trees had thicker rings in the 20th century wasn't just a matter of temperature, something Mann also acknowledges in the second hockey stick study (MBH99):

A number of the highest elevation chronologies in the western U.S. do appear, however, to have exhibited long-term growth increases that are more dramatic than can be explained by instrumental temperature trends in these regions.

Donald Graybill, who had taken the tree ring samples, published a paper together with Sherwood Idso (in 1993) in which they suggested that the large increase in ring width for these trees could be a direct effect of increased CO2 content in the atmosphere (CO2 fertilization). 12)

To indicate how much one could trust the temperature reconstruction in MBH98, Mann had calculated verification statistics 13), and these were reported in MBH98. The verification statistics they had used were RE 14) and R2 15), but R2 was only reported for the period after 1750, which McIntyre thought was odd.

R2 is usually between 0 and 1 (but can also be negative), and the higher, the better. An R2 value of 1 means a perfect relationship, 0 means no relationship, and a negative value means a negative relationship. McIntyre says in the SoundCloud interview that 0.4 or 0.5 would have been good, while it turned out that R2 for the time periods before 1750 ranged from 0.00 to 0.02 in Mann's study, which means that one can have very little confidence in the results for the years before 1750. It was therefore no surprise that Mann didn't want to publish R2 for the earlier time periods, even though he of course should have done so.

McIntyre and McKitrick published a thorough critique of MBH98 in Energy & Environment in 2005, in which they wrote that Mann had probably calculated R2 for the earlier time periods as well, and if so, this number should have been published in the study. But given that this unfortunately hadn't happened, the peer review should at least have caught the fact that R2 was missing for the earlier time periods.

The Energy & Environment article earned McIntyre a front-page article in The Wall Street Journal 16), which in turn led The House Energy and Commerce Committee to ask Mann whether he had calculated R2 and what the result was. Mann avoided answering whether he had calculated R2, but said that R2 wasn't a good verification statistic and that he hadn't used it. At the same time, the committee asked Mann to publish the source code for the study, which Mann reluctantly did. The source code showed that Mann had in fact calculated R2 for the earlier time periods, as McIntyre had assumed.

So it was quite obvious that Mann knew that his study wasn't particularly good. That the study wasn't good is further supported by one of the leaked Climategate emails. (Quotes from Climategate emails are the ones shown in blue text.) Tim Osborn, the current head of CRU, had asked Mann for some intermediate results that could be used to calculate R2 without going through all the earlier steps of the calculation. Mann sent this to Osborn, but warned that it would be unfortunate if it got out. Mann wrote:

p.s. I know I probably don’t need to mention this, but just to insure [absolute clarity] on this, I’m providing these for your own personal use, since you’re a trusted colleague. So please don’t pass this along to others without checking w/ me first. This is the sort of “dirty laundry” one doesn’t want to fall into the hands of those who might potentially try to distort things…

Tom Wigley, a former head of CRU, also knew that Mann's study was poor. In an email to Phil Jones he wrote:

I have just read the [McIntyre and McKitrick] stuff critcizing MBH. A lot of it seems valid to me. At the very least MBH is a very sloppy piece of work — an opinion I have held for some time.

In a comment on his own blog (ClimateAudit), McIntyre has written two examples of what Mann should have written as a disclaimer for his study:

Readers should be aware that the reconstruction contained herein badly fails many standard cross-validation tests, including the R2, CE, sign test and product mean test, some of which are 0. Accordingly, the apparent skill of the RE statistic may be spurious and the reconstruction herein may bear no more relationship to actual temperature than random numbers. Readers should also be aware that the confidence intervals associated with this reconstruction may be meaningless and that the true confidence interval may only be natural variability.

Readers should be aware that the reconstruction contained herein cannot be replicated without the use of bristlecone pines. Some specialists attribute 20th century bristlecone pine growth to nonclimatic factors such as carbon dioxide or other fertilization or to nontemperature climate factors or to a nonlinear response to temperature. If any of these factors prove to be correct, then all portions of the reconstruction prior to [the year] 1625 will be invalidated.


I will briefly mention that MBH98/99 has also been criticized by others besides McIntyre and McKitrick. Among other things, the extensive use of tree rings has been criticized, since tree rings are better suited to capturing temperature variations from year to year than to capturing temperature trends over longer time periods:

[W]hile tree rings are excellent at capturing short frequency variability, they are not very good at capturing long-term variability.

– James Speer, Fundamentals of Tree-Ring Research (pdf)

Further recommendations

In my summary of the hockey stick story, I have focused mostly on some of the most important things that were wrong in Mann's studies and on Mann's lack of cooperation. But this is only one part of the story. The story also includes how the IPCC broke its own rules in order to use the hockey stick graph in its fourth assessment report as well. 17) Andrew Montford, the author of The Hockey Stick Illusion, wrote a blog post about this before he started writing the book.

The video below is also about the hockey stick graph. It focuses, among other things, on how the hockey stick graph ended up in the IPCC's third assessment report in 2001, and on how data was removed from a tree ring study that showed a decline in ring width in the 20th century. As we recall, tree ring width is used as a proxy for temperature. (A decline in ring width during a period when temperatures are rising would cast doubt on the validity of tree rings as a temperature proxy.) The story is supported by Climategate emails, including the email that has perhaps received the most attention, in which Phil Jones in November 1999 writes, among other things:

I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) [and] from 1961 for Keith’s to hide the decline. 18)

Another video that may be of interest is the one below, which is about the leaked Climategate emails and the subsequent British inquiries. The presenter is Andrew Montford, the author of The Hockey Stick Illusion (despite the video's German title, the talk is in English). Montford has also written a report on the inquiries.

The last video I'll recommend in connection with the hockey stick story for now is a talk by McIntyre himself, in which he, among other things, discusses the problems with the tree ring chronologies used in Mann's studies (and other similar studies): 19)

Peer review

Peer review is used by scientific journals to decide whether or not to publish a submitted paper/study. A small group of experts in the study's field is sent the study and asked to assess whether it is good enough. The experts are usually not paid extra and typically spend about one working day on the review.

On October 27, 2009, Phil Jones sent an email in which he wrote, among other things:

The two Canadians she refers to [McIntyre and McKitrick] have never developed a tree-ring chronology in their lives and McIntyre has stated several times on his blog site that he has no aim to write up his results for publication in the peer-review literature.
I’m sure you will be of the same opinion as me that science should be undertaken through the peer-review literature as it has been for over 300 years. The peer-review system is the safeguard science has developed to stop bad science being published.

It is a bit ironic that Phil Jones says McIntyre should publish in peer-reviewed journals. Jones and Mann in fact did everything they could to prevent McIntyre, McKitrick and other skeptics from getting published in peer-reviewed journals.

After a study (by Willie Soon and Sallie Baliunas) that was critical of Mann's temperature reconstruction had been published in Climate Research in 2003, Mann wrote, among other things, the following in an email on which Phil Jones was one of the recipients:

The Soon & Baliunas paper couldn’t have cleared a ‘legitimate’ peer review process anywhere. That leaves only one possibility–that the peer-review process at Climate Research has been hijacked by a few skeptics on the editorial board.
[…]
There have been several papers by Pat Michaels, as well as the Soon & Baliunas paper, that couldn’t get published in a reputable journal.
This was the danger of always criticising the skeptics for not publishing in the “peer-reviewed literature”. Obviously, they found a solution to that–take over a journal! [My emphasis]

Mann and the climate scientists associated with CRU had obviously had a good deal of control over what was published in peer-reviewed climate journals. They had largely managed to keep skeptical articles/studies out of the peer-reviewed literature, which gave them an easy way to respond to criticism: asking why the skeptics didn't publish in the peer-reviewed literature. For example, Mann wrote the following in a reply to a journalist who had referred to a skeptical article:

Professional journalists I am used to dealing with do not rely upon unpeer-reviewed claims off internet sites for their sources of information. They rely instead on peer-reviewed scientific research, and mainstream, rather than fringe, scientific opinion.

The reason they had been able to maintain such control was that there was almost always at least one person from this circle of climate scientists who was asked to peer review new climate studies within their field, and when they recommended that a study shouldn't be published, it usually wasn't.

To some extent this can also be explained in an even simpler way: as Judith Curry explains, scientists, like most other people, tend to trust conclusions that agree with what they themselves believe to be true. They will therefore naturally scrutinize studies that reach different conclusions more strictly. If a large majority holds similar views, it thus becomes harder for other views to make it into the peer-reviewed literature. For some, it became so difficult to publish in climate journals that they gave up.

The climate scientists at CRU also wanted to discredit Soon and Baliunas. Tom Wigley writes:

Might be interesting to see how frequently Soon and Baliunas, individually, are cited (as astronomers).
Are they any good in their own fields? Perhaps we could start referring to them as astrologers (excusable as … ‘oops, just a typo’).

An example of how the scientists on the alarmist side expected someone on their side to be asked to peer review new studies: the day before McIntyre and McKitrick's first study (MM03) was published in Energy & Environment, and before Mann had seen the study, Mann writes in an email:

My suggested response is:
1) to dismiss this as [a] stunt, appearing in a so-called “journal” which is already known to have defied standard practices of peer-review. It is clear, for example, that nobody we know has been asked to “review” this so-called paper [My emphasis]
2) to point out the claim is nonsense since the same basic result has been obtained by numerous other researchers, using different data, elementary compositing techniques, etc. Who knows what sleight of hand the authors of this thing have pulled. Of course, the usual suspects are going to try to peddle this crap. The important thing is to deny that this has any intellectual credibility whatsoever and, if contacted by any media, to dismiss this for the stunt that it is..

Here we also see that Mann has a clear opinion about the study before he has actually read it... Mann wrote this after receiving an email (from an unknown sender) saying that MM03 would soon be published, which also said, among other things:

Personally, I’d offer that this was known by most people who understand Mann’s methodology: it can be quite sensitive to the input data in the early centuries. Anyway, there’s going to be a lot of noise on this one, and knowing Mann’s very thin skin I am afraid he will react strongly, unless he has learned (as I hope he has) from the past….”

If you want to read more about how Mann, Jones and others discussed peer review internally, check this online book about Climategate and search down to “email 1047388489”. (The quoted emails are also commented on by the author.) In the aftermath of Climategate, The Guardian also wrote an article about how the climate scientists, including those at CRU, tried to prevent skeptical studies from being published.

When Phil Jones writes “The peer-review system is the safeguard science has developed to stop bad science being published”, it is ironic also because peer review obviously failed to prevent Mann's poor hockey stick studies from being published.

Phil Jones wrote the email I quoted from above (the first quote in the section on peer review) in late 2009, just a few weeks before Climategate. The IPCC's most recent assessment report came out in 2013 and 2014, that is, not many years later. As Phil Jones pointed out in an email, the IPCC has to rely on the peer-reviewed literature:

McIntyre has no interest in publishing his results in the peer-review literature. IPCC won’t be able to assess any of it unless he does.

So perhaps it is not surprising that the IPCC concludes that most of the warming since 1950 is due to human activity. When the IPCC says they are at least 95% certain of this, it may be an exaggeration of how certain they ought to be, given the skewed distribution within peer-reviewed climate studies. 20)

Peer review is generally no guarantee of correct science. And this is something scientists are generally aware of, wrote Charles Jennings, a former editor at Nature, on Nature's peer review blog in 2006:

[S]cientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.

As mentioned, the amount of time spent on peer review is limited, and when Stephen McIntyre was asked to review a paper for Climatic Change, he was prepared to do it thoroughly and asked the editor, Stephen Schneider, for access to the study's underlying data. Schneider then said that in the 28 years he had been editor of Climatic Change, no one had ever asked for access to the data before. McIntyre told this story in the SoundCloud interview from earlier in the post (7:39), where he goes on to say:

[Schneider] said, “If we required reviewers to look at data, I’d never be able to get anyone to review articles”. I think the important point is for people unfamiliar with the academic journal process to understand the level of due diligence, and what is a relevant level of due diligence. I’ve come to the opinion that it’s unreasonable to expect unpaid reviewers to […] do a full audit of an article — it’s too much work, and people aren’t going to do that.

Since peer review thus cannot uncover every error and shortcoming in a study, it is important to also have other processes that can do that job. And normally there are. If someone suspects that something may be wrong with a study — for example because its result conflicts with their own assumptions — it may be interesting to take a closer look and write a study or article of one's own criticizing the original study. The critical article must then also make it through the peer review process to be published in a scientific journal.

In the climate field there were plenty of people interested in doing the work of writing critical articles, but since the alarmist side managed to keep many of the critical articles out of the scientific journals, much of the quality control that research normally gets was lost. 21) In addition, it was difficult for the skeptics to get access to the data behind the alarmists' studies. 22) These things may help explain how Mann's study, despite its many errors, could be included in the IPCC's assessment report — even as its most prominently featured study!

Who can you trust?

In the climate debate, the two sides stand sharply opposed to each other, and as an outsider it is fundamentally difficult for me to know who is right and who is wrong — probably both sides are partly right. But when I find out that someone has deliberately tried to mislead the public (for example Michael Mann and John Cook 23)), I at least know that I cannot trust them. Even if they do not lie all the time, it becomes difficult to tell when they are lying and when they are telling the truth, so I put less weight on what they say.

From the Climategate emails it is also clear that there are several people on the alarmist side who cannot be fully trusted (the same naturally goes for many on the skeptic side). In September 2009, for example, Tom Wigley, former director of CRU, wrote the following in an email to Phil Jones, then director of CRU:

Here are some speculations on correcting [sea temperatures] to partly explain the 1940s warming blip.
If you look at the attached plot you will see that the land also shows the 1940s blip (as I’m sure you know). So, if we could reduce the ocean blip by, say, 0.15 [degrees Celsius], then this would be significant for the global mean — but we’d still have to explain the land blip.
I’ve chosen 0.15 here deliberately. This still leaves an ocean blip, and I think one needs to have some form of ocean blip to explain the land blip[.]
[…]
It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.

Here the two CRU directors were unhappy with what the temperature data showed and discussed what they could do to make the data look a little different.

Another example: After publishing a borehole study in Science in 1995, the researcher David Deming received an email from someone in the climate science community. Deming does not say who the person is, but writes that it is a "major" person. In Deming's own words:

They thought I was one of them, someone who would pervert science in the service of social and political causes. So one of them let his guard down. A major person working in the area of climate change and global warming sent me an astonishing email that said “We have to get rid of the Medieval Warm Period.” [My emphasis]

It has been speculated that the person who sent the email may have been Jonathan Overpeck. Overpeck was Coordinating Lead Author (together with Norway's Eystein Jansen) of the chapter on palaeoclimate (past climates) in the IPCC's 4th assessment report from 2007. In an email (to, among others, Phil and Mike (presumably Phil Jones and Michael Mann)), Overpeck wrote that he could not recall having sent such an email to Deming, but did not entirely rule out that he might have written something similar.

The Medieval Warm Period (MWP), also called the Medieval Climate Anomaly, which Deming mentioned, refers to a period from roughly the year 1000 to 1300 that was assumed to have been warmer than the late 20th century. Whether the Middle Ages really had global temperatures higher than the late 20th century I will not take a position on, but for some it was clearly important to be able to say that medieval temperatures were not higher than today's. As we recall, the errors in MBH98 caused their temperature reconstruction to show a substantially lower temperature around the year 1400 than their data actually indicated, 24) and the follow-up study, MBH99, which went back to the year 1000, showed no Medieval Warm Period in the Northern Hemisphere — which appears to be the result Mann wanted.

These examples show that there were several researchers in the community around CRU who wanted to convince politicians and others that man-made global warming is happening and is dangerous. They did not always care that much about what the data actually showed. The researchers may have been uncertain, but that uncertainty was rarely communicated to the public.

Are RealClimate and SkepticalScience objective websites?

Michael Mann is one of several people who founded the website realclimate.org. In 2006, Mann sent an email in which he writes that they will not approve just any comment — the skeptics would not be allowed to use RealClimate's comment section as a "megaphone":

Anyway, I wanted you guys to know that you’re free to use [RealClimate] in any way you think would be helpful. Gavin [Schmidt] and I are going to be careful about what comments we screen through, and we’ll be very careful to answer any questions that come up to any extent we can. On the other hand, you might want to visit the thread and post replies yourself. We can hold comments up in the queue and contact you about whether or not you think they should be screened through or not, and if so, any comments you’d like us to include.

You’re also welcome to do a followup guest post, etc. think of [RealClimate] as a resource that is at your disposal to combat any disinformation put forward by the McIntyres of the world. Just let us know. We’ll use our best discretion to make sure the skeptics [don’t get] to use the [RealClimate] comments as a megaphone…

And in an email from 2009:

Meanwhile, I suspect you’ve both seen the latest attack against [Briffa’s] Yamal work by McIntyre. 25) Gavin [Schmidt] and I (having consulted also w/ Malcolm [Hughes]) are wondering what to make of this, and what sort of response—if any—is necessary and appropriate. So far, we’ve simply deleted all of the attempts by McIntyre and his minions to draw attention to this at RealClimate.

John Cook (whom we remember from the 97% consensus study) is the creator of the website skepticalscience.com. SkepticalScience is a website that tries to rebut arguments from skeptics. 26)

SkepticalScience appears to be a popular website that ranks high in Google's search results, and when I wanted to find out a few years ago whether there was anything to the consensus claim, SkepticalScience was one of the first pages I came across. According to them, many studies showed near-100% agreement that climate change is man-made. I was almost convinced, but have since found out that the studies do not say what SkepticalScience tried to make me believe.

Even though much of what SkepticalScience and RealClimate write is correct, it is clear that they are not objective websites. But since what they write fits well with what we hear in the media, it is natural that many people assume they are objective. It is easier to be skeptical of those who present information that conflicts with what we regularly hear. And being skeptical is good. It is also important to be aware that SkepticalScience and RealClimate want their readers to be convinced that global warming is man-made and dangerous, just as much as most skeptics want to convince us of the opposite.

So who can you trust? There are surely many, but one person I do trust is Stephen McIntyre (which is not to say he is right about everything). Even though he has criticized Mann's studies and other tree-ring studies quite harshly, he has not "chosen a side" on the climate question. He keeps emphasizing that what he found is in no way proof that global warming is not happening, and that the important scientific question is how high the climate sensitivity is (I will return shortly to what climate sensitivity is). In the introduction to a talk he gave on Climategate's role in the hockey stick graph story, he said, among other things (3:59):

[K]eep in mind that nothing that I say tonight proves or disproves global warming. Nor does climate science as a whole stand or fall on proxy reconstructions. If we do nothing about tree rings, we would still be obliged to assess the impact of doubled CO2. As a final preamble, there’s far too much angriness in my opinion on both sides of the debate. People are far too quick to yell “Fraud” at the other side. And I think such language is both self-indulgent and counterproductive. I don’t apply these labels myself, I don’t permit them at ClimateAudit, and don’t believe they serve any purpose. That doesn’t mean you can’t criticize authors — I do so all the time and will do so tonight, but any point you make should be able to be made on the facts rather than the adjectives.

And in closing (36:31):

I started my comments with caveats, and I’ll close with some more. The critical scientific issue, as it has been for the past 30 years, is climate sensitivity, and whether cloud and water cycle feedbacks are strongly positive or weakly negative or somewhere in between. This is the territory of Lindzen, Spencer, Kininmonth and Paltridge at this conference, and I urge you to listen to what they have to say. But also keep an open mind, because many serious scientists don’t agree with them and stand behind standard estimates of climate sensitivity of doubled CO2 in perfectly good faith.
[…]
If I were a politician, regardless of what I felt personally, I would also take scientific guidance from official institutions rather than what I might think personally, either as an occasional contributor to academic journals or as a blogger. Although, knowing what I know now, I would try as hard as I possibly could, to improve the performance and accountability of these institutions.

Here is the full talk:

Can research be trusted?

We may like to think of scientists as selfless truth-seekers, but according to Terence Kealey, author of the book The Economic Laws of Scientific Research, it is a myth that scientists will try to falsify their own theories. They want to get their theories published and are happy to take shortcuts in their use of statistical methods:

One problem is that scientists are much less scientific than is popularly supposed. John Ioannidis […] has shown […] that the poor application of statistics allows most published research findings to indeed be false[.]

There is a perverse reason that scientists use poor statistics: career progression. In a paper entitled “The Natural Selection of Bad Science,” Paul Smaldino and Richard McElreath of the University of California, Merced, and the Max Planck Institute, Leipzig, found that scientists select “methods of analysis … to further publication rather than discovery.” Smaldino and McElreath report how entire scientific disciplines — despite isolated protests from whistleblowers — have, for more than half a century, selected statistical methods precisely because they will yield publishable rather than true results. The popular view is that scientists are falsifiers, but in practice they are generally verifiers, and they will use statistics to extract data that support their hypotheses.

Seen in this light, the hockey stick scandal makes a little more sense.

So the answer to whether research can be trusted is both yes and no. Individual studies are not necessarily correct or good just because they are published in peer-reviewed journals. For the research coming out of a given field to be trustworthy, it is a great advantage if there are people who want to find errors in what gets published. In the climate field there are plenty of such people — one just has to not ignore them, but rather make use of them in the process of working out what is good and what is bad research.

Climate sensitivity

As I quoted Stephen McIntyre on earlier, what the so-called climate sensitivity is, is the most important scientific question when it comes to CO2. How sensitive is the global average temperature to changes in the CO2 level?

There seems to be broad agreement (also among many skeptics) that each doubling of the CO2 level in the atmosphere will lead to an approximately constant temperature increase. That is, if a doubling of the CO2 level from today's roughly 0.04% to 0.08% leads to a temperature increase of 2 degrees, then a doubling from 0.08% to 0.16% will also lead to a temperature increase of 2 degrees. The climate sensitivity is then said to be 2 degrees.
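This "same increase per doubling" corresponds to a logarithmic relationship between CO2 concentration and temperature. Here is a minimal sketch of that relationship; the 2-degree sensitivity is just the example value used above, not a claim about the true value:

```python
from math import log2

# Warming from a change in CO2 concentration, assuming a fixed temperature
# increase per doubling (the 2 °C example sensitivity from the text above).
def warming(c_new, c_old, sensitivity_per_doubling=2.0):
    return sensitivity_per_doubling * log2(c_new / c_old)

print(warming(0.08, 0.04))  # 0.04 % -> 0.08 %: 2.0 °C
print(warming(0.16, 0.08))  # 0.08 % -> 0.16 %: 2.0 °C
print(warming(0.16, 0.04))  # two doublings:    4.0 °C
```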

The climate sensitivity can, admittedly, also vary depending on the state and temperature of the Earth. This is due to changes in feedback effects as the Earth gets warmer or colder. In practice, the climate sensitivity will probably not change much this century. (I will explain what feedback effects are in a moment.)

When climate scientists talk about the climate sensitivity, they usually mean the temperature increase resulting from a doubling of the CO2 level relative to pre-industrial times, that is, from 280 to 560 ppm (0.028 to 0.056 percent).

Several different kinds of climate sensitivity are used, including:

  • Equilibrium Climate Sensitivity (ECS), which is how many degrees of temperature increase a doubling of the atmosphere's CO2 content will eventually lead to once CO2 level and temperature are again in equilibrium. Not all of the temperature increase from a rise in CO2 content comes immediately — the largest temperature changes come early, but it can take more than 1000 years before the temperature has risen enough for equilibrium to be restored.
  • Transient Climate Response (TCR), which is how many degrees of temperature increase a doubling of the atmosphere's CO2 content will lead to at the time the CO2 content has doubled, if the CO2 content increases by 1% per year. At 1% per year, one doubling takes 70 years. (TCR will always be lower than ECS.)

Positive feedback (for example, higher temperatures leading to more water vapour in the atmosphere, which raises the temperature further since water vapour is a strong greenhouse gas) or negative feedback (for example, higher temperatures leading to more low-lying clouds, which reflect more sunlight) means that the climate sensitivity ends up higher or lower, respectively, than it would have been without feedback effects. The climate sensitivity, ECS, would have been about 1°C without feedback effects. 27)
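A common simplified way to express this amplification is with a feedback-factor relation, S = S0 / (1 − f). The sketch below is only an illustration: the ~1°C no-feedback value is the one quoted above, while the feedback fractions are made-up examples, not estimates:

```python
# Minimal sketch: how a net feedback fraction f changes the no-feedback
# sensitivity S0 via S = S0 / (1 - f). The f values are illustrative only.
S0 = 1.0  # approximate no-feedback sensitivity for doubled CO2, in °C

for f in (0.5, 0.0, -0.5):  # net positive, zero, and net negative feedback
    S = S0 / (1 - f)
    print(f"feedback fraction {f:+.1f} -> sensitivity {S:.2f} °C")
# +0.5 -> 2.00 °C, 0.0 -> 1.00 °C, -0.5 -> 0.67 °C
```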

If we want to know how much the temperature will rise towards the year 2100 as a result of increased CO2 content in the atmosphere, it is more relevant to look at TCR than ECS. 28) That said, a 1% annual increase in CO2 content (which the definition of TCR assumes) may be more than is realistic (even without policy measures). Over the last 10 years the increase has been around 0.6% per year, while the increase from 2000 to 2010 was about 0.5% per year.
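As a quick sanity check of these growth rates, here is a small sketch that only uses the figures quoted above:

```python
# Time for atmospheric CO2 to double at a constant annual growth rate r:
# doubling time = ln(2) / ln(1 + r)
from math import log

for r in (0.01, 0.006, 0.005):  # 1 %/yr (TCR definition), ~0.6 %/yr, ~0.5 %/yr
    years = log(2) / log(1 + r)
    print(f"{r * 100:.1f} % per year -> doubling in about {years:.0f} years")
# 1.0 % -> ~70 years, 0.6 % -> ~116 years, 0.5 % -> ~139 years
```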

There is considerable uncertainty within the IPCC about exactly how many degrees of temperature increase a doubling of the atmosphere's CO2 content will lead to. In its previous assessment report, the IPCC states that TCR is almost certainly between 1.0 and 2.5 degrees. 1.0 degrees would be relatively unproblematic, while 2.5 degrees could be more challenging. Since the video below mostly discusses ECS, I should also mention that for ECS the IPCC gives a value of between 1.5 and 4.5 degrees. This is actually identical to the so-called Charney range all the way back from 1979 (3 degrees ± 1.5 degrees).

Climate models are important in the IPCC's estimation of the climate sensitivity. 29) Climate models are advanced computer simulations of the Earth's climate. They are based on physical laws, other knowledge about the world's climate processes, and expected greenhouse gas emissions under various scenarios. Since we do not have perfect knowledge of all climate processes, some approximations, called parameterizations, must also be used. A simulation may, for example, start in the year 1850 and end in the year 2100. The climate models are run on supercomputers, and a single run can take several months. According to the IPCC's previous assessment report, the climate models put the climate sensitivity, ECS, at between 2.1 and 4.7 degrees, 30) somewhat higher than the IPCC's final estimate (1.5 to 4.5 degrees).

But since 2012, as explained in the video above, a number of studies have appeared in which the climate sensitivity estimate relies less on models and more on observations, and these studies have all concluded that ECS is well below 3 degrees — most of them point to a likely ECS value of between 1.5 and 2 degrees.

Later in the video, they explain that a 2018 study is particularly important because it used the IPCC's own data sets to estimate the climate sensitivity (8:48):

This 2018 paper by Nicholas Lewis and Judith Curry, published in the American Meteorological Society’s Journal of Climate, is particularly important because it applies the energy balance method to the IPCC’s own datasets, while taking account of all known criticisms of that method. And it yields one of the lowest ECS estimates to date, 1.5 degrees, right at the bottom of the Charney range.

According to the study, the most likely TCR value is 1.20 degrees. (More precisely, the median value for TCR is 1.20 degrees; the median value for ECS is 1.50 degrees.) The study's lead author, Nic Lewis, has written a post about the study on ClimateAudit, McIntyre's blog.

Knutti et al. (2017) have compiled an overview of various climate sensitivity studies and categorized them according to whether they are observation-based (historical), model-based (climatology) or based on temperature changes in the distant past (palaeo). Knutti et al. (2017) confirm that studies relying more on observations arrive at a lower climate sensitivity than studies using other methods:

Knutti et al. do not, however, agree that these more observation-based studies should be trusted the most. 31)

But Nic Lewis believes they should be. In the report Oversensitive — How the IPCC Hid the Good News on Global Warming (long version here), he and Marcel Crok argue that the observation-based estimates of the climate sensitivity are the ones to trust most. Their conclusion:

[W]e think that of the three main approaches for estimating ECS available today (instrumental observations, palaeoclimate observations, [and climate model] simulations), instrumental estimates – in particular those based on warming over an extended period – are superior by far.

According to Lewis and Crok, estimates of the climate sensitivity based on temperature changes in the distant past come with very large uncertainties. And climate models will not necessarily simulate the future climate correctly even if they manage to simulate the past climate up to today. Lewis and Crok write:

There is no knob for climate sensitivity as such in global climate models, but there are many adjustable parameters affecting the treatment of processes (such as those involving clouds) that [climate models] do not calculate from basic physics.

Climate sensitivities exhibited by models that produce realistic simulated climates, and changes in climatic variables over the instrumental period, are assumed to be representative of real-world climate sensitivity. However, there is no scientific basis for this assumption. An experienced team of climate modellers has written that many combinations of model parameters can produce good simulations of the current climate but substantially different climate sensitivities. [Forest et al. (2008)] They also say that a good match between [climate model] simulations and observed twentieth century changes in global temperature – a very common test, cited approvingly in the [IPCC assessment report 4] as proving model skill – actually proves little.

Models with a climate sensitivity of 3°C can roughly match the historical record for the global temperature increase in the twentieth century, but only by using aerosol forcing values that are larger than observations indicate is the case, by underestimating positive forcings, by putting too much heat into the oceans and/or by having strong non-linearities or climate state dependency. [Broken into paragraphs for readability]

In the report, Lewis and Crok also point out errors in some of the observation-based studies that found relatively high climate sensitivity. 33)

So even though it is not certain that the climate sensitivity is low, there is actually a pretty good chance that it is.

The video below is a talk by Nic Lewis in which he discusses climate sensitivity (I found it very interesting):

Business-as-usual CO2 emissions

To estimate how much the temperature will rise as a result of human CO2 emissions, we need to know, in addition to the climate sensitivity, how much CO2 we are going to emit. The IPCC has had the climate research community develop scenarios for how CO2 emissions, among other things, might evolve towards the year 2100.

In 2007, the IPCC asked the climate research community to develop greenhouse gas emission scenarios for the rest of this century. The emission scenarios have names starting with "RCP" followed by a number. The higher the number, the higher the emissions.

The four main scenarios are RCP2.6, RCP4.5, RCP6.0 and RCP8.5.

RCP8.5 is the most pessimistic scenario in the IPCC's previous assessment report. It is a scenario in which no policy measures are introduced to limit CO2 emissions. It is often used as a business-as-usual scenario, but according to Moss et al. (2010) it is not intended as a prediction or forecast of the most likely emissions pathway. 34) Nevertheless, that is probably how many people perceive it.

The climate scientist Zeke Hausfather writes about RCP8.5 in a Comment in Nature:

RCP8.5 was intended to explore an unlikely high-risk future. But it has been widely used by some experts, policymakers and the media as something else entirely: as a likely ‘business as usual’ outcome. A sizeable portion of the literature on climate impacts refers to RCP8.5 as business as usual, implying that it is probable in the absence of stringent climate mitigation. The media then often amplifies this message, sometimes without communicating the nuances.

Hausfather has also, together with Justin Ritchie, looked at the International Energy Agency's (IEA) 2019 World Energy Outlook report. Ritchie writes that the IEA's projections for CO2 emissions up to 2040 are far below the emissions assumed in the RCP8.5 scenario:

IEA scenarios are a more realistic projection of the global energy system’s current ‘baseline’ trajectory; showing we are far from RCP8.5 [and] RCP6.0. World currently tracking between RCP4.5 [and] lower climate change scenarios – consistent with 1.5˚ to 2.5˚C [warming this century].

Twitter thread (more detailed article here, written by Hausfather and Ritchie)

They write that we will get 1.5 to 2.5 degrees of warming, but that assumes a relatively high climate sensitivity.

In 2014, Lewis and Crok calculated that if the climate sensitivity TCR is 1.35 degrees, it would lead to a temperature increase of 2.1 degrees from 2012 to about 2090 in the pessimistic RCP8.5 scenario. And in a more realistic scenario than RCP8.5, the temperature will rise less:

On the RCP6.0 scenario and using the observational TCR-based method, total warming in 2081–2100 would still be around the international target of 2°C, with a rise of 1.2°C from 2012 rather than the 2°C rise projected by the [climate models].

Nic Lewis has, as we have seen, since revised the TCR estimate down to 1.20 degrees, which, if correct, means slightly less warming still.
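To get a rough feel for how much less, one can simply rescale Lewis and Crok's 2014 numbers by the ratio of the TCR estimates. This is a crude linear scaling that ignores differences in forcing pathways and any non-linearities, so treat it as an illustration only:

```python
# Crude rescaling of Lewis & Crok's 2014 warming estimates by the ratio of
# TCR values (1.20 vs 1.35). Illustrative only; not how the studies compute it.
tcr_old, tcr_new = 1.35, 1.20

for scenario, warming_2012_to_2090 in {"RCP8.5": 2.1, "RCP6.0": 1.2}.items():
    rescaled = warming_2012_to_2090 * tcr_new / tcr_old
    print(f"{scenario}: {warming_2012_to_2090} °C -> about {rescaled:.1f} °C")
# RCP8.5: 2.1 °C -> about 1.9 °C, RCP6.0: 1.2 °C -> about 1.1 °C
```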

And according to the IEA, it looks like we will stay well below the emissions in RCP6.0 as well.

Accelerating technological development

One reason I have not been worried about global warming, even before I knew as much about the research as I do now, is that technological development moves faster and faster every year.

Almost 40 years ago, Ray Kurzweil discovered that the computing power of computers (really, computing power in general) improved in a surprisingly predictable way.

The image above is from Kurzweil's 2005 book The Singularity Is Near, which I read in 2010. It shows how much computing power (measured in operations per second) one could at most get for $1000 (inflation-adjusted) — typically with supercomputers — in various years between 1900 and 2000, and how Kurzweil expected the trend to continue towards the year 2100 (which corresponds to the historical trend continuing).

The y-axis is logarithmic, which means that a straight line corresponds to exponential growth — that is, doubling at regular intervals. Since the curve bends upwards, Kurzweil expects the growth in computing power per dollar to be even faster than that. In practice, this means that each doubling of computing power per dollar happens at shorter and shorter intervals.

So far, Kurzweil's prediction seems to have held up well.

With the extreme computing power we will have access to later this century, our technological capabilities will also become extreme. Artificial intelligence is also developing extremely fast, and it can help us develop many other technologies faster than we otherwise could.

One important technology Kurzweil believes we will develop by 2030 is atomically precise manufacturing (APM), that is, the ability to make physical objects while controlling where every single atom is placed (31:52):

There are roadmaps to get to [APM]. The more conservative ones have that emerging by the end of the 2020s. I’m quite confident that [in the] 2030s, we’ll be able to create atomically precise structures which will enable us to create these medical nanorobots which will enable us to overcome essentially every disease and aging process — not instantly, but it’ll be a very powerful new tool to apply to health and medicine.

In addition to curing disease and healing unwanted effects of aging, APM will let us 3D-print large physical objects — including clothes, computers and houses — very quickly.

One might object that Kurzweil is perhaps too optimistic. But even if many of these technologies are not developed until much later in the century, we will still have fairly extreme technological capabilities by the year 2100. I therefore see no reason to worry about what global warming might lead to that far into the future — as long as our economic system does not collapse, but in that case we will have far more pressing problems to deal with than global warming.

Kurzweil also believes we will soon change the way we produce food, which may be relevant for CO2 emissions (24:35):

We’re going to go from horizontal agriculture, which now takes up 40 % of our usable land, to vertical agriculture, where we grow food with no chemicals, recycling all the nutrients; in vitro muscle tissue for meat; hydroponic plants for fruits and vegetables.

Food production would then require both fewer resources and much less land than today.

Will renewable energy take over?

This is a difficult question to answer. Many people are very optimistic, and so am I by default, but many are also pessimistic, especially skeptics. 35) There are seemingly good arguments on both sides.

In 2011 I accepted a bet with a friend about solar energy. Based on Ray Kurzweil's predictions, I had talked enthusiastically about how solar energy would before long overtake fossil energy use in the world. My friend was sure that would not happen and suggested a bet: if we get more energy from solar than from fossil sources within 20 years (from 2011, i.e. by 2031), I win. Otherwise he wins.

It is first and foremost photovoltaic solar energy that is expected to see a large increase in use. The reason is that solar cell technology is an information technology and can therefore benefit from the technological development in computing power (and nanotechnology), which, as noted, is accelerating (exponential).

The video below, from 2016, shows, among other things, how cheap "unsubsidized" 36) solar energy has become in many places and how quickly the price has fallen:

And if you liked that video, I can also recommend this newer (from 2019) and longer video with Ramez Naam, in which we see, among other things, that the price of solar energy has fallen further since 2016:

Advances in nanotechnology and materials science will make solar cells more efficient and cheaper. Solar cell technology has great potential as a cheap energy source, since the solar cells of the future can be extremely thin, light and flexible. They can therefore be integrated into many kinds of surfaces.

Solar cells of this kind have already been made and are probably not far from commercialization. The main ingredient is a material called perovskite (named after a Russian, Lev Perovski). While silicon, which is used in traditional solar cells, has to be heated to very high temperatures, perovskite can be made at room temperature. That is one reason it is cheap to make. According to Sam Stranks:

[A] $100 million perovskite factory could do the job of a $1 billion silicon factory.

Perovskite is a kind of ink that can easily be printed onto various surfaces, for example plastic.

Although dark perovskite solar cells are the most efficient, they can be made in different colours and with varying degrees of transparency — even fully transparent for cells that only capture infrared light. They can therefore be used on the outside of buildings (including windows), and the building can still look very good. (Solar cells on buildings do not have the same ecological consequences that solar power plants can have.)

So let's see how my bet is doing. According to ourworldindata.org, the amount of energy we got from solar in 2010 was 33.68 TWh. That corresponded to 0.024% of the world's energy production. Eight years later, in 2018, the most recent year Our World In Data has figures for, we got 584.63 TWh from solar — still only 0.4% of the world's total energy production, but that is a 17-fold increase in the energy we get from solar in just 8 years, or a 15-fold increase in its share.

When I wrote about my bet in 2011, I wrote, as Kurzweil had predicted, that solar energy's share of world energy use would double every two years. A doubling every two years means 4 doublings in 8 years, which means a 16-fold increase. We ended up with slightly more than a 15-fold increase, so, so far, the bet actually doesn't look too bad for me.
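The arithmetic behind those numbers, using only the figures quoted above, looks like this (a small sketch):

```python
from math import log2

# Solar generation figures quoted above (ourworldindata.org):
solar_2010_twh = 33.68
solar_2018_twh = 584.63

growth = solar_2018_twh / solar_2010_twh   # ~17.4x in 8 years
doublings = log2(growth)                   # ~4.1 doublings
print(f"{growth:.1f}x growth = {doublings:.1f} doublings in 8 years,")
print(f"i.e. one doubling roughly every {8 / doublings:.1f} years")
# For comparison: a doubling every 2 years gives 2**4 = 16x over 8 years.
```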

The growth in solar energy has, admittedly, slowed in recent years. From 2011 to 2019, OurWorldInData shows only an 11-fold increase in world solar energy use. The growth may pick up again when perovskite solar cells reach the market.

In 2019, Ray Kurzweil still had great faith that solar and renewable energy will before long take over from fossil energy:

[Renewable energy is] doubling every 4 years and I’m confident it will meet 100% of our energy needs by 2030.

The share will of course not reach a full 100%, and Kurzweil probably doesn't mean that either, but if we just get more energy from solar than from fossil sources in 2031, I'll be satisfied… Either way, it will be interesting to see how things develop!

Certain types of renewable energy, especially solar and wind, also come with some problems:

  • They require large areas of land and can harm or displace wildlife (for example, wind turbines kill many birds and bats).
  • A high share of renewable energy can lead to high electricity prices.
  • Solar and wind do not usually produce energy evenly around the clock and need to be backed up by other types of energy in periods when they cannot produce electricity themselves. The backup power is usually fossil fuel.
  • Even though solar and wind have no greenhouse gas emissions while producing energy, manufacturing and waste handling are not as clean. Producing turbines and panels requires a lot of energy and extensive mining.

I will not go deeper into these issues here, but they appear to be challenges that must be taken seriously.

There are very smart people on both sides of the renewable energy debate, so how do you figure out which side is right? It is not easy at all.

It seems to me that many on the optimistic side look more towards the future than the past. They see how quickly the price of solar energy has fallen and expect prices to keep falling rapidly. They expect better technologies in the future. They see potential solutions to tough challenges.

The pessimists, on the other hand, look more at how renewable energy has performed in the past, which is perhaps not that impressive. They may be more focused than the optimists on the challenges that must be solved. They do not necessarily believe those challenges cannot be solved, but they may well judge the likely cost to be higher than the benefit.

I am generally optimistic and believe the challenges can be solved. I am far from certain, though, and it would be interesting to see a discussion about renewable energy in the comments.

Ramez Naam is also optimistic and focuses on possible solutions:

[D]on’t bet on the forecasters, but on the innovators, they’re the ones actually making it happen[.]

Naam said this in connection with how quickly Tesla has improved battery technology. In 2013, the US Energy Information Administration predicted that batteries would become about a third cheaper by 2048. Instead, they became 4 times cheaper in just 5 years!

Before Naam talked about the improvements in battery technology, he also showed how the amount of energy we get from solar has grown, and compared it with the International Energy Agency's (IEA) projections. It is somewhat amusing to watch, because while the use of solar energy has grown sharply, the IEA's projections have always been (and still are) that solar energy use will barely increase:

Source: https://www.carbonbrief.org/profound-shifts-underway-in-energy-system-says-iea-world-energy-outlook (based on IEA World Energy Outlook 2019).

Hausfather and Ritchie have also discussed this graph. They write, perhaps not surprisingly, that the IEA has been criticized for being too conservative about renewable energy.

But a couple of months after I published the first version of this article, the IEA released its World Energy Outlook report for 2020. And this time they are actually positive about solar energy:

Solar becomes the new king of electricity…

Renewables grow rapidly in all our scenarios, with solar at the centre of this new constellation of electricity generation technologies. […] With sharp cost reductions over the past decade, solar PV is consistently cheaper than new coal- or gasfired power plants in most countries, and solar projects now offer some of the lowest cost electricity ever seen.

They are presumably talking about the price as seen by power producers. A low price for solar energy means we can expect more solar power plants to be built. Because of subsidies, the price consumers pay will not necessarily be correspondingly low. I will say a bit more about this shortly.

Even though the IEA is more positive about solar energy this year than before, they are still considerably less optimistic than Ray Kurzweil, even in the scenario with the lowest greenhouse gas emissions. Given the IEA's history of underestimating solar energy, it would not be entirely surprising if their projections are still too low.

Another exciting alternative to fossil energy is nuclear power — fission and fusion.


Fission-based nuclear power has received a bad reputation because of a few serious accidents (including Chernobyl and Fukushima), but is today a safe way to produce electricity. Nuclear power has very low CO2 emissions, but is relatively expensive, partly because of strict regulations and because it takes a long time to build a power plant. New technological breakthroughs, such as modular nuclear power plants or thorium reactors, may make fission-based nuclear power more profitable.

Fusion has traditionally always been 20 or 30 years away, but there has been good progress in fusion research lately, and in addition to the gigantic ITER project, which is a collaboration between several countries, there are now many smaller, private companies working towards commercializing fusion energy. Perhaps we can start getting electricity from fusion power plants as early as the mid-2030s?

Should renewable energy take over? What about the poor?

So I don't think it is unlikely that renewable energy will take over, but I did not explicitly write that it would be a good thing if it does. Like the question of whether renewable energy will take over, the question of whether it should take over is difficult to answer. It depends…

In the long run it is probably inevitable that fossil energy will largely be replaced by alternative energy sources. The question then becomes: should we introduce policy measures to force more use of renewable energy today?

If we have to subsidize renewable energy considerably for it to be adopted, the price of electricity will increase — one way or another. Either directly through an electricity tax or a renewables surcharge, or indirectly through generally higher taxes. Germany and Denmark have both invested heavily in renewable energy, and their electricity prices (for household consumers) are the highest in Europe. They also have the highest share of taxes and levies in the electricity price.

Higher electricity prices naturally hit the poorest hardest. This means that subsidizing renewable energy can end up making poor people's lives worse.

It is also the poorest — especially poor people in developing countries — who will be hit hardest by climate change. Shouldn't we therefore reduce our CO2 emissions for their sake?

The problem with that is that a large reduction in CO2 emissions will have only a small effect on the global temperature towards the end of this century. This means that by reducing the world's CO2 emissions we are not helping the poorest today; we are perhaps helping them a tiny bit — far into the future.

A much better strategy for helping the poor is therefore to help them become richer now. The richer you are, the better equipped you are to handle climate change and extreme weather — and the chance of dying in a natural disaster falls as you get richer. So one way to help is to allow poor countries to use the cheapest energy, even if that leads to higher CO2 emissions in the short term.

But renewable energy — especially solar — is getting cheaper at a furious pace, and even if subsidies are needed today, it may not be long before the price is low enough that subsidies are unnecessary. Since the sun does not shine around the clock and we do not yet have good enough solutions for storing energy, solar cannot, admittedly, cover 100% of a country's energy needs — hydropower, fossil energy and/or nuclear power are needed as well.

My conclusion is that renewable energy should take over if and when it becomes cheaper than the alternatives. Preferably, the use of renewable energy should not be subsidized, but research on it can be, with the goal of making it cheaper than fossil energy. If that happens, every country will switch to renewable energy (as Bjørn Lomborg often points out). And in that case, a transition to renewable energy will not harm other parts of the economy or people's lives.

More CO2 has made the world greener

The CO2 level in the atmosphere has risen from about 280 ppm in pre-industrial times to about 420 ppm today. Today's CO2 level is still lower than what is optimal for trees and plants to grow as much as possible, which is why it is common to raise the CO2 concentration in greenhouses.

And the increased CO2 level in the atmosphere already appears to have had a positive effect on how much vegetation grows on Earth. In just the 35 years from 1979 (when satellite measurements began) to 2014, we got about 14% more green vegetation, according to, among others, this study, which NASA covers in an article that states:

The greening represents an increase in leaves on plants and trees equivalent in area to two times the continental United States.

Not all of the increase can be attributed to the effect of CO2, but most of it can — about 70%, according to the study.

Measures against global warming

Bjørn Lomborg is the founder and president of the Copenhagen Consensus Center, a think tank that brings in leading economists to do cost-benefit analyses of potentially good measures and projects, to find out how much good they can do for the world. He stresses the importance of not wasting enormous amounts of money and resources on measures that don't work or that do more harm than good.

Bjørn Lomborg and the Copenhagen Consensus Center assume that the IPCC is right about how much the temperature will rise in the absence of policy measures, and (at least Bjørn Lomborg) has taken the pessimistic RCP8.5 scenario as the starting point, in which the temperature rises by nearly 4 degrees from today (to 4.1 degrees above the 1986-2005 average).

Source: Chapter 12 (from Working Group I) of the IPCC's previous assessment report (FAQ 12.1, Figure 1).

According to the IPCC, the cost of this higher temperature will be a global gross domestic product (GDP) in 2100 that is about 3% lower than it otherwise would have been 37) (see from 32:20 in the video above). This means that if, absent the negative effects of climate change, world GDP would have risen to 450% of today's GDP by 2100, it will instead (because we have to spend resources adapting to, among other things, higher sea levels and stronger storms) rise to only 436% of today's GDP. As Bjørn Lomborg says, that is a problem, but not the end of the world.
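The arithmetic is straightforward (a quick check of the figures above):

```python
# World GDP in 2100 relative to today, with and without the ~3 % climate cost
# cited above (450 % of today's GDP as the no-damage baseline):
gdp_2100_without_damage = 4.50   # 450 % of today's GDP
climate_cost_fraction = 0.03     # ~3 % of GDP

gdp_2100_with_damage = gdp_2100_without_damage * (1 - climate_cost_fraction)
print(f"{gdp_2100_with_damage:.1%}")  # 436.5 %, i.e. roughly 436 % of today's GDP
```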

Based on the IPCC's projections, the Copenhagen Consensus Center has assessed how good various climate measures are. The Paris Agreement and many forms of CO2 tax have been rated as very poor measures with high cost and little effect. I wrote earlier that it is better to subsidize research on renewable energy than to subsidize its use. And that is exactly what Bjørn Lomborg recommends as well, together with a low, but rising, CO2 tax. The goal of subsidizing research on renewable energy is to make it so cheap that it outcompetes fossil energy.

If the IPCC's projections for future temperature rise are too high, as it appears they may be, then even the low CO2 tax Lomborg recommends will be too high and risks doing more harm than good. When we also take into account the positive effects CO2 has on how much trees and plants grow, it is actually not obvious that the effect of increased CO2 in the atmosphere will be negative over the next few decades.

And in the longer term we will have such advanced technology that the increased temperature can, if necessary, be reversed without reducing the atmosphere's CO2 level. Already in 2009, the Copenhagen Consensus Center actually rated two such measures as very good: Marine Cloud Whitening and Stratospheric Aerosol Insertion. 38) Both would cause more sunlight to be reflected by the atmosphere.

Two other measures, rated as very good and good respectively, were Carbon Storage R&D and Air Capture R&D. Air Capture means removing CO2 from the atmosphere, something that according to K. Eric Drexler (called by some "the father of nanotechnology") was far too expensive in 2013, but will probably become affordable within a few decades:

[T]o have the the 21st century have a planet that resembles what we’ve had in the previous human history will require taking the CO2 levels down, and that is an enormous project. One can calculate the energy required – it’s huge, the area of photovoltaics required to generate that energy is enormous, the costs are out of range of what can be handled by the world today.

But the prospects with a better means of making things, more efficient, more capable, are to be able to do a project of that scale, at low cost, taking molecular devices, removing molecules from the atmosphere. Photovoltaics produced at low cost to power those machines can draw down CO2 and fix the greenhouse gas problem in a moderate length of time once we pass the threshold of having those technologies […] We now have in hand tools for beginning to build with atomic precision, and we can see pathways […] to a truly transformative technology.

The downside of removing CO2 from the atmosphere is that you lose the positive effect CO2 has on plant growth. That is why my initial view is that making the atmosphere reflect more sunlight — especially over the warmest areas — would be a better solution. Bjørn Lomborg is also positive about making the atmosphere reflect more sunlight:

If [you] want to protect yourself against runaway global warming of some sorts, the only way is to focus on geoengineering, and […] we should not be doing this now, partly because global warming is just not nearly enough of a problem, and also because we need to investigate a lot more what could be the bad impacts of doing geoengineering.

But we know that white clouds reflect more sunlight and hence cool the planet slightly. One way of making white clouds is by having a little more sea salt over the oceans stirred up. Remember, most clouds over the oceans get produced by stirred-up sea salt — basically wave-action putting sea salt up in the lower atmosphere, and those very tiny salt crystals act as nuclei for the clouds to condense around. The more nuclei there are, the whiter the cloud becomes, and so what we could do is simply put out a lot of ships that would basically [stir] up a little bit of seawater — an entirely natural process — and build more white clouds.

Estimates show that the total cost of avoiding all global warming for the 21st century would be in the order of $10 billion. […] This is probably somewhere between 3 and 4 orders of magnitude cheaper — typically, we talk about $10 to $100 trillion of trying to fix global warming. This could fix it for one thousandth or one ten thousandth of that cost. So, surely we should be looking into it, if, for no other reason, because a billionaire at some point in the next couple of decades could just say, “Hey, I’m just going to do this for the world,” and conceivably actually do it. And then, of course, we’d like to know if there’s a really bad thing that would happen from doing that. But this is what could actually avoid any sort of catastrophic outcomes[.]

Bjorn Lomborg on the Costs and Benefits of Attacking Climate Change (8:35)

Bad climate policies can potentially have very unfortunate consequences. In his new book, False Alarm — How Climate Change Panic Costs Us Trillions, Hurts the Poor, and Fails to Fix the Planet, Bjørn Lomborg has looked at two of the IPCC's new future scenarios, called SSP1 (Sustainability – Taking the Green Road) and SSP5 (Fossil-fueled Development – Taking the Highway). Both are relatively optimistic about economic growth.

According to Riahi et al. (2017), which Lomborg refers to, world GDP per capita will on average increase by 600% by the year 2100 under SSP1, while under SSP5 it will increase by as much as 1040%.

Screenshot from Lomborg's False Alarm. GDP per capita rises to $182,000 under SSP5 and to $106,000 under SSP1.

Lomborg has then subtracted the costs associated with global warming and climate change for the two scenarios. For SSP1 the cost is 2.5% of GDP, while for SSP5 the cost is 5.7% of GDP:

Screenshot from Lomborg's False Alarm. Even after subtracting the costs associated with global warming, people are much richer under SSP5 than SSP1 in the year 2100.

Even after subtracting the costs associated with climate change in the two scenarios, average GDP per capita is still substantially higher in the scenario where no restrictions are placed on the use of fossil energy — $172,000 versus $103,000 — which means much greater prosperity for most people, and especially for people in today's developing countries, who will then be nowhere near as vulnerable when natural disasters strike as they are today. (And natural disasters and extreme weather will of course continue to occur regardless of what the global average temperature is.)
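The numbers line up as follows (a quick check using only the figures quoted from the book above):

```python
# GDP per capita in 2100 before climate costs, and the climate-cost shares
# quoted above from Lomborg's False Alarm:
scenarios = {
    "SSP1": (106_000, 0.025),  # $106,000 and a 2.5 % climate cost
    "SSP5": (182_000, 0.057),  # $182,000 and a 5.7 % climate cost
}

for name, (gdp_per_capita, cost) in scenarios.items():
    print(f"{name}: ${gdp_per_capita * (1 - cost):,.0f}")
# SSP1: $103,350 (~$103,000), SSP5: $171,626 (~$172,000)
```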

In other words, it is extremely important that we do not needlessly restrict poor countries' opportunities for economic growth. And this is far more important than limiting our CO2 emissions — all the more so if the climate sensitivity is lower than the IPCC and Bjørn Lomborg assume.

Folkeopplysningen's hockey stick graph

In 2018, the programme Folkeopplysningen on the Norwegian broadcaster NRK aired an episode called "Klimakrisa" ("The Climate Crisis") in which they took the alarmists' side. They referred both to John Cook's 97% consensus study (which I have mentioned earlier in the post) and to something resembling Michael Mann's hockey stick graph. The latter was used to support the claim that the warming we have seen since 1950 is unique. Since they did not use Mann's graph, only something resembling it, I will now write a bit about the graph they actually used.

While Mann's hockey stick graph from 1999 goes 1000 years back in time, Folkeopplysningen's graph shows the Earth's average temperature over the last 22,000 years, and while Mann's graph covers the Northern Hemisphere, Folkeopplysningen's graph covers the whole Earth. The host, Andreas Wahl, says:

– And we start 20,000 years before our era. It was then more than 4 degrees colder, and as you can see, Norway and Oslo are covered by a thick layer of ice. And the temperature is… stable. A tiny increase here, which causes the ice to start melting, releasing more CO2, and the temperature increase accelerates. And it proceeds slowly, millennium after millennium.
– Temperatures in the past have had several local and short-lived fluctuations that do not show up here. But this graph shows the global trend over time.
– Only here do we humans start farming. A gentle rise, stabilizing just above the red line here. It continues… The pyramids, Stonehenge, the world's first virgin birth. The Middle Ages: somewhat high temperatures in Europe, but not high enough to affect the global average temperature. It stays stable through the Middle Ages, the Renaissance, the Enlightenment, until we land here, in the Industrial Revolution, where we dig up coal and drill wells, where we invent planes, cars and oil platforms and grow to seven billion people. Then this happens. Here we are today, and if we continue roughly as now, this is the scenario we envision. This scenario is demanding, but realistic, and then we will not meet the 2-degree target. While this one here is our dream scenario, but it is extremely demanding. Wherever it ends up, saying that this is natural, and not man-made, change is pretty absurd.

Screenshot from Folkeopplysningen's episode "Klimakrisa" (11:33).

Folkeopplysningen copied the graph (including the three different scenarios for the future) from this XKCD page, and the sources Folkeopplysningen lists are the same as there (in addition, Folkeopplysningen also lists XKCD as a source):

  • Shakun et al. (2012)
  • Marcott et al. (2013)
  • Annan and Hargreaves (2013)
  • Hadcrut4
  • IPCC

The IPCC is presumably used as the source for the three future temperature scenarios.

HadCRUT4 is temperature data for recent times (from 1850), used by the IPCC. The Met Office Hadley Centre and CRU collaborate on maintaining the data set, hence the first six letters of the name. The T stands for "Temperature" and the 4 is the version number (4 being the latest version).

Annan and Hargreaves (2013) is a study in which the authors arrived at an estimate of the Earth's average temperature when the ice was at its thickest during the last ice age. They found that at that time, 19,000-23,000 years ago, it was 4.0±0.8 degrees colder (with 95% confidence) than in pre-industrial times. According to the study, this is warmer than earlier studies (which had used a more limited data set) had found.

Shakun et al. (2012) is a study in which the authors primarily conclude that the CO2 level in the atmosphere largely rose before the global average temperature rose at the end of the last ice age. The study also uses a temperature reconstruction that goes further back in time than Marcott et al. (2013).

Marcott et al. (2013) is a study that estimated the global average temperature for the last 11,300 years. According to the study, the resulting temperature graph has a resolution of a few hundred years — meaning that large but short-lived temperature fluctuations will not show up. In a FAQ published on RealClimate two weeks after the study was published (commented on by McIntyre here), the authors write:

We showed that no temperature variability is preserved in our reconstruction at cycles shorter than 300 years, 50% is preserved at 1000-year time scales, and nearly all is preserved at 2000-year periods and longer.

This means that if there have been large but relatively short-lived temperature swings during the last 11,300 years, they will not show up in the graph. Jeremy Shakun, lead author of Shakun et al. (2012), was also a co-author of the Marcott study. Shakun was interviewed about Marcott et al. (2013) and was asked precisely about such large, short-lived temperature swings. He answered:

No, to be fair, I don’t think we can say for sure isn’t a little 50-year warm blip in there that was much warmer than today? That could be hiding in the data out there. We don’t have the resolution for that, because we have an ocean core data point every 200 years, plus that mud’s all mixed around, so when you really get down to it, you might never see that blip.

The title of the XKCD page is A Timeline of Earth's Average Temperature Since the Last Ice Age Glaciation, with the subtitle When People Say "The Climate Has Changed Before" These are the Kinds of Changes They're Talking About.

It is clear that Andreas Wahl and Folkeopplysningen want to convey the same message: they are trying to make viewers believe that the warming since 1950 is unique and that similarly rapid temperature increases have not happened before during the last 20,000+ years. But based on the sources Folkeopplysningen lists, that conclusion cannot be drawn.

Norwegian media's white lies and deliberate misdirection

Folkeopplysningen is not unique among Norwegian media in the way it communicates about climate. What Folkeopplysningen did, both on the consensus question and with the hockey stick graph they showed, is not to lie outright, but to present the research imprecisely. Still, it is said in such a way that viewers are left with a false impression, and only very attentive viewers can be expected to notice that Folkeopplysningen is being imprecise.

I have just gone through what Andreas Wahl said about the temperature graph Folkeopplysningen showed, where Folkeopplysningen's goal was evidently for viewers to come away with the impression that scientists are certain there have been no temperature increases as rapid as today's in over 20,000 years.

As for the claimed consensus, Andreas Wahl said:

In 2013, a study was published that went through all published climate research to find out how much of the research confirmed man-made climate change. And the result? 97 percent.

This is again imprecise, because what does it actually mean to confirm man-made climate change? Does it mean that the researchers believe humans contribute at least a little to climate change, or does it mean that we are the main cause? What the study Folkeopplysningen referred to showed was merely that, among the roughly 12,000 research papers that were reviewed, 97% of the one third that expressed an opinion on the human contribution considered that humans contribute to climate change or global warming — either a little or a lot. But when Andreas Wahl says what he says, it is hard for a neutral viewer to understand that this is what he means.
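Spelled out with the round numbers above (a rough sketch — the exact counts are in Cook et al.'s paper, not here):

```python
# Rough breakdown of the Cook et al. (2013) figures as described above:
abstracts_reviewed = 12_000                      # roughly 12,000 abstracts
expressed_a_position = abstracts_reviewed / 3    # about one third of them
endorsing = 0.97 * expressed_a_position          # 97 % of that third

print(f"~{expressed_a_position:.0f} abstracts expressed a position,")
print(f"~{endorsing:.0f} of them endorsed some human contribution")
# i.e. roughly 4,000 and 3,900 — about a third of all abstracts reviewed,
# not 97 % of all climate research.
```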

Folkeopplysningens spesielle måte å kommunisere forskning på kan ikke tolkes som noe annet enn bevisst villedning.

Faktisk.no har også hatt en sak på 97 %-konsensusen. Innlegget heter Jo, det store flertallet av verdens klimaforskere er enige om at mennesket påvirker klimaet. At mennesker påvirker klimaet er en lite interessant påstand. Til og med de fleste skeptikere vil si seg enig i det. Et mer interessant spørsmål er i hvor stor grad vi gjør det — er det 1 %, eller er det mer enn 50 %, for eksempel.

Faktisk.no har tatt utgangspunkt i en artikkel fra resett.no: Slik oppsto 97 % myten – og verdens mest standhaftige Fake News. Artikkelen inneholder en del misrepresentasjoner, og faktisk.no tok utgangspunkt i én av disse og konkluderte med “Faktisk helt feil”. Jeg mener det er positivt at Resett prøver å si imot 97 %-myten, men hvis de skal kritisere den, må de være mer nøyaktige enn de er, ellers er de ikke noe bedre enn den andre siden.

The claim that was rated completely wrong was:

The large majority of the world’s climate scientists, 66 %, (…) do [not] draw a conclusion about whether humans can affect the climate.

The claim concerns Cook’s 97 % study. I agree with faktisk.no that the claim is wrong. The 66 % figure is the share of reviewed studies that were rated as not expressing an opinion about the human contribution to climate change or global warming. 66 % of the studies is not necessarily the same as 66 % of the scientists, and most of the scientists presumably had an opinion, even if it was not expressed in the study 39) (really just the study’s abstract — that was all Cook et al. rated).

Even though I agree with faktisk.no that Resett’s claim is wrong, I think it would have been more interesting if they had checked a claim that 97 % of the world’s climate scientists agree that human activity is the main cause of climate change or global warming — because this is something many (most?) Norwegians believe the scientists think.

Faktisk.no actually also mentions Andrew Montford, the author of the book The Hockey Stick Illusion, in their fact check:

[Kjell Erik Eilertsen refers to] a report from the British Global Warming Policy Foundation (GWPF), written by Andrew Montford. It states that the Cook report’s methodology is one-sided and that it merely describes agreement about truisms.

The report they refer to is titled Fraud, Bias and Public Relations — The 97% ‘consensus’ and its critics. I can definitely recommend reading it — Cook’s study, too, has a fascinating backstory, with lies, leaked data and leaked internal communication.

Faktisk.no does not engage with Montford’s criticism of Cook — instead they merely write:

Montford is an accountant with a bachelor’s degree in chemistry. He has not published a single peer-reviewed scientific article. He is, however, a well-known climate skeptic, blogger and author of the book «The Hockey Stick Illusion», about “the corrupt science”.

GWPF was established as a charitable foundation in 2009 by the climate-skeptical British politician Nigel Lawson. In 2015 it was investigated by the British charity regulator, which concluded that GWPF was not providing independent information suitable for education, but was engaged in political advocacy. The organisation was subsequently split up.

According to Wikipedia, the Global Warming Policy Foundation was split into two parts: the existing Global Warming Policy Foundation, which would remain a charitable foundation, and the Global Warming Policy Forum, which would be allowed to lobby. (Faktisk.no wrote that the investigation took place in 2015, but according to Wikipedia it happened in 2014 and possibly 2013.) My impression is that much of what the Global Warming Policy Foundation publishes is of very high quality.

It is perhaps natural that faktisk.no does not want to argue against the point that the Cook report merely describes agreement about truisms, because faktisk.no largely does the same thing themselves — recall the headline of their fact check: “Jo, det store flertallet av verdens klimaforskere er enige om at mennesket påvirker klimaet” (Yes, the large majority of the world’s climate scientists agree that humans affect the climate). At the same time, they alternate between saying that humans affect the climate and that human activity is the main cause of climate change. 40) 41)

If the journalists at faktisk.no have actually read Montford’s rather devastating report, I find it hard to understand how they can still defend Cook’s study — which they do, among other things, by citing Cook et al.’s own justification for why it is acceptable to “set aside” the nearly 8,000 studies that expressed no opinion about the human contribution to global warming.

I agree that human activity contributes to climate change, and it may well be that more than 97 % of those who do climate research think so too, but I also agree with Andrew Montford that this is a truism and not very interesting. If you want to find out how many scientists believe human activity is the main cause of global warming since 1950, Cook et al. (2013) cannot answer that, and I strongly recommend that the media stop referring to that study. (See instead the study discussed in footnote 41.)

Improvement in the IPCC and among climate scientists?

The Climategate emails painted a very poor picture of climate science up to 2010, but is it just as bad today? Fortunately, I have found some signs that things have improved within the IPCC and among climate scientists:

1. Ross McKitrick said the following in a 2019 interview:

Now that whole constituency that wants certainty and wants catastrophe and wants the big scary message, it’s beginning to detach itself from the IPCC, because the message in the IPCC reports just isn’t keeping up with where the exaggeration folks want to go. 42)

2. One of the current co-chairs of IPCC Working Group I (the one dealing with the science), Valérie Masson-Delmotte, agrees with McIntyre’s criticism of Mann’s studies. She told McIntyre this in 2006 (9 years before she was elected to co-chair Working Group I), but she wanted McIntyre to keep her name secret. Recently (presumably in 2019), McIntyre asked her whether it was okay to use her name, and she agreed. McIntyre revealed this in the SoundCloud interview I embedded earlier in this post (1:00:40).

3. Tom Wigley, whom I have quoted a few times and who has been director of CRU, has recommended Michael Shellenberger’s new book Apocalypse Never — Why Environmental Alarmism Hurts Us All. Shellenberger is a climate activist who is no longer an alarmist. Among other things, he writes that climate change is not making natural disasters worse and that humans are not causing a sixth mass extinction (the post I linked to here was first published on forbes.com, but was then removed from there 43)). Shellenberger is also very positive towards nuclear power and negative towards solar and wind energy — largely because of the ecological consequences of solar and wind requiring such large land areas. In the book he describes how producers of fossil and renewable energy work against nuclear power. (Personally, I think he is too one-sidedly negative about renewable energy.)

In conversation with Shellenberger, Tom Wigley has said that it is wrong that climate change threatens our civilization, that it is wrong to exaggerate in order to get people’s attention, and:

All these young people have been misinformed. And partly it’s Greta Thunberg’s fault. Not deliberately. But she’s wrong.

In his endorsement of Apocalypse Never, Wigley wrote that it may be the most important environmental book ever written.

4. Judith Curry, a climate scientist who has moved over to the skeptic side, described in a 2014 post on her blog positive changes that have occurred as a result of Climategate and the revelations around the hockey stick graph, including:

Transparency has improved substantially. Journals and funding agencies now expect data to be made publicly available, along with metadata. The code for most climate models is now publicly available. As far as I know, there are no outstanding [Freedom of Information Act] requests for data (other than possibly some of Mann’s [Hockey Stick] data and documentation). Climategate shed a public light on the lack of transparency in climate science, which was deemed intolerable by pretty much everyone (except for some people who ‘owned’ climate data sets).

Understanding, documenting and communicating uncertainty has continued to grow in importance, and is the focus of much more scholarly attention. With regards to the IPCC, I feel that [Working Group 2] in [Assessment Report 5] did a substantially better job with uncertainty and confidence levels (I was not impressed with what [Working Group 1] did).

As a result of Climategate, there is little tolerance for the editorial gatekeeping ways of trying to keep skeptical papers from being published.

(In 2019, though, she wrote that it has become even harder to get published in the biggest and most important journals. 44))

Life for a scientist that is skeptical of ‘consensus’ climate science or critical of the IPCC is definitely easier post-Climategate.

Despite the progress, she argues that the IPCC should be disbanded.

Conclusion (Summary for Policymakers)

There is little reason to fear climate change — at least as a consequence of CO2 emissions from human activity. Our CO2 emissions are considerably lower than the impression we get from the media. There is also a good chance that the climate sensitivity is low. If so, the Earth’s average temperature will not rise by more than about 1°C this century (as a result of our greenhouse gas emissions).

The increased CO2 level will also lead to more growth in trees and plants, and it is actually not entirely obvious that our CO2 emissions will have a net negative effect over the coming decades. And in the longer term, we will have such advanced technology that the negative effects can easily be managed or even reversed.

Moreover, if Ray Kurzweil is right, we may be able to get most of our energy from renewable sources in only about 10 years. Admittedly this is speculative and presupposes several technological breakthroughs, but solar energy has seen explosive growth, and it will at least be exciting to see how long that rapid growth can continue.

But it is also important that we do not waste large amounts of money and resources on measures that do more harm than good. When it comes to renewable energy, it is better to subsidize research into it than the use of it. If the goal is zero CO2 emissions, unsubsidized alternative energy must become so cheap that it outcompetes fossil energy.

For poor countries, economic growth is more important than reduced CO2 emissions. The richer you are, the higher your probability of surviving extreme weather and natural disasters. The effect of reduced CO2 emissions, on the other hand, will not be felt until far into the future — and it will also be very small.

There has been a good deal of dishonesty among some climate scientists, but some things have improved since Climategate. I do not think the media have improved to the same extent, though. I wish the media would be more honest when they write about climate and stop their climate scaremongering. That could improve the mental health of many young people who are afraid of the future today.


Footnotes:

1) All the emails from the second round of Climategate can be downloaded from here.

The online book I linked to in the main text has a slightly sarcastic tone. A book with a more neutral tone — which may also be better — is “Climategate: The CRUtape Letters”, but I have not found an online version of it.

2) To find how many studies Cook et al. (2013) categorized as holding that humans have contributed at least 50 % of the warming since about 1950, we can start from the raw data of Cook et al. (2013) and write a small Python program, which I have done here, where I have also copied the contents of the data file from Cook et al. (2013) into the file datafile.txt:

The code above is just an image (since I could not prevent the code from automatically getting focus when I embedded it). Click the image to get to a page where the code can be run. (A rough sketch of what such a counting script might look like is included at the end of this footnote.)

The point is to count all the studies that were categorized with endorsement level 1, which are the studies that, according to Cook et al. (2013), expressed that humans are the main cause of global warming.

The endorsement level of a study is the last digit on the line containing information about that study (in datafile.txt). This can be seen on line 19 of datafile.txt, which describes the format of the lines below it:

Year,Title,Journal,Authors,Category,Endorsement

When you are on the page with the code from the image above, the code runs automatically, and you can see the result on the right (if your screen is wide enough — otherwise you can press the Play button). It should give the following result:

{'1': 64, '2': 922, '3': 2910, '4': 7970, '5': 54, '6': 15, '7': 9}

This means there were 64 studies with endorsement category 1. The endorsement categories are also found in the raw data file:

Endorsement
1,Explicitly endorses and quantifies AGW as 50+%
2,Explicitly endorses but does not quantify or minimise
3,Implicitly endorses AGW without minimising it
4,No Position
5,Implicitly minimizes/rejects AGW
6,Explicitly minimizes/rejects AGW but does not quantify
7,Explicitly minimizes/rejects AGW as less than 50%

AGW stands for Anthropogenic Global Warming, i.e. human-caused global warming.
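Since the original script is only available as an image, here is a minimal sketch of how such a count could be done, assuming the layout of datafile.txt described above (data lines ending with a comma followed by the endorsement digit). This is my own illustration, not the original code:

# Count how many papers fall into each endorsement level (1-7) in datafile.txt.
# Assumes each data line ends with ",<endorsement digit>", as described above;
# header and category-description lines are skipped because they don't match.
from collections import Counter

counts = Counter()
with open("datafile.txt", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if len(line) >= 2 and line[-1].isdigit() and line[-2] == ",":
            counts[line[-1]] += 1

print(dict(sorted(counts.items())))
# Expected output, per the result quoted above:
# {'1': 64, '2': 922, '3': 2910, '4': 7970, '5': 54, '6': 15, '7': 9}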

3) Without using principal components, one possible way to reconstruct the temperature is to take a relatively simple average of all the chronologies, but then you risk missing important patterns in the temperature data — for example that the temperature, hypothetically, rises sharply in the 20th century in many places. In the simple average this will not necessarily be visible, because the temperature may fall somewhat in other places. Principal component analysis can help bring out such underlying patterns.

4) The full title of MBH98 is Global-scale temperature patterns and climate forcing over the past six centuries. The study is not an easy read. You have hereby been warned.

5) The full title of MBH99 is Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties, and Limitations.

6) Another thing that seemed strange was that a simple average of the proxy series in MBH98 only showed the temperature varying fairly evenly around a constant level — there was no particular trend, no dramatic temperature increase through the 20th century. Yet the temperature reconstruction in MBH98 showed that there had been a temperature increase in the 1900s, and that the high temperatures there were unique for the past 600 years:

Screenshot from a presentation McIntyre and McKitrick gave to an expert panel of the National Academy of Sciences (NAS) in 2006. Top: a simple average of all the proxy series in MBH98. Bottom: the final temperature reconstruction in MBH98.

In theory there could be good explanations for the difference, but the result was suspicious and motivated further investigation.

7) Mann’s response to McIntyre and McKitrick was published in the form of two short articles by the freelance journalist David Appell on his website (archived by the Wayback Machine here and here). There was much that was strange in Mann’s response besides the claim that the wrong data had been used: among other things, Mann criticized McIntyre and McKitrick for having asked for an Excel file (which they had not done), and for not having used all 159 proxies — even though Mann’s study stated there were 112 proxies, not 159. It was fairly easy for McIntyre and McKitrick to respond to the criticism, and you can see their reply here. A few days later, a formal response came from Mann. Mann explained that McIntyre and McKitrick had not computed the principal components correctly. Mann had used a stepwise procedure, but this was not described in his study. In McIntyre and McKitrick’s reply to Mann, they document, among other things, that it is not possible to reproduce the procedure in Mann’s study even after they got access to the new FTP site.

8) Mann had written:

Here is an email I sent [to McIntyre] a few weeks ago in response to an inquiry. It appears, by the way, that he has been trying to break into our machine[.] Obviously, this character is looking for any little thing he can get ahold of.

McIntyre has written a short comment on the email in a comment under a blog post on ClimateAudit.

9) Or downward — but Mann’s program flipped such graphs upside down, so that curves pointing downward in the 20th century were interpreted as pointing upward. From the book The Hockey Stick Illusion: Meanwhile, any [series] with twentieth century downticks were given large negative weightings, effectively flipping them over and lining them up with upticks.

10) The other area is Gaspé in southeastern Canada, where MBH98 had used a controversial cedar-tree proxy. Like the trees from California, the tree-ring widths had a pronounced hockey stick shape with an uptick in the 20th century. Also like the trees from California, the sharp increase in tree-ring width did not match measured temperatures in the area.

In the period from 1404 to 1447, the Gaspé chronology consisted of only 1-2 trees. Mann had also extrapolated the chronology back to the year 1400, so that it could be used for the earliest period in MBH98 as well. The Gaspé chronology was the only one in the study that had been extrapolated in this way.

The Gaspé chronology was, moreover, included twice in MBH98: once as a standalone chronology, and once as part of the first principal component (PC1) for North America (NOAMER).

(See McIntyre and McKitrick’s 2005 Energy & Environment article.)

11) Ross McKitrick has written:

If the flawed bristlecone pine series are removed, the hockey stick disappears regardless of how the [principal components] are calculated and regardless of how many are included. The hockey stick shape is not global, it is a local phenomenon associated with eccentric proxies. Mann discovered this long ago and never reported it.

The reason McKitrick could say that Mann knew about this is that the FTP site where the data was stored contained a directory with the suspicious name BACKTO_1400-CENSORED, holding calculations done without the California trees. The trees in question are primarily strip-bark bristlecone pines, a type of pine whose bark splits.

12) McIntyre, though, had another theory. The trunks of these trees were not round; there were large variations in tree-ring width depending on the angle at which the ring width was measured. McIntyre believed the asymmetry could be due to branches that had broken off in the 19th century, possibly because of heavy snow. The asymmetry may also have made it challenging to know where the ring width should be measured. Linah Ababneh re-sampled the trees in 2006 and did not find the same large increase in tree-ring width in the 20th century. McIntyre talks a bit about these things in this video. Several theories for the apparently dramatic increase in tree-ring width were, incidentally, discussed in McIntyre and McKitrick’s 2005 article in Energy & Environment.

13) The verification statistics tell us how much we can trust the temperature reconstruction. As I have understood it, and more concretely for MBH98, the verification statistics say how well the reconstructed temperature graph matches thermometer data in the study’s verification period. The verification period was 1856-1901, while 1902-1980 was the calibration period. The calibration period is used to establish the relationship between tree-ring width and temperature. The verification period, for which we also have thermometer data, is used to check how well the relationship found in the calibration period holds up. For MBH98 the correlation was R2 = 0.2, which means a rather poor fit. I found it hard to understand what R2 means for earlier time periods, but I think McIntyre explained it fairly clearly in a comment on a blog post about the interpretation/meaning of R2:

[T]he 1400 step used a subset of proxies available in the later steps. It produced a time series for the period 1400-1980 that was scaled (calibrated) against instrumental (an early HadCRU) for 1901-1980. Statistics comparing the 1856-1901 segment of the reconstruction time series to 1856-1901 observations are the verification statistics.

I interpret this as follows: the proxies that extend all the way back to the period you want to compute R2 for are used to construct a temperature curve running from that period up to today. Proxies that do not go that far back are ignored in the calculation. R2 for the period in question is then how well that curve matches temperature measurements in the period 1856-1901.

14) RE stands for Reduction of Error and was widely used among climate scientists, but was not common elsewhere. In MBH98 the notation β was used instead of RE. In MBH98 Mann wrote that an RE value above 0.0 would mean that one could have high confidence in the temperature reconstruction. But by using so-called Monte Carlo analysis, in which McIntyre and McKitrick tested Mann’s algorithm with random data series (red noise), they found that the threshold value for RE was 0.59, which Mann’s graph did not satisfy for the earliest period (from the year 1400), meaning that the result was not statistically significant for that period. See A Brief Retrospective on the Hockey Stick by Ross McKitrick, section 2 – Our Critique of the Method. (R2 was also low for the entire period before 1750, suggesting a lack of statistical significance throughout the period 1400-1750.)

As McKitrick writes, Mann had also used Monte Carlo analysis to find the threshold value for RE. But because of the error in the principal component algorithm, the threshold value he found was wrong, which made the temperature reconstruction in MBH98 look better than it actually was.

15) R2 (or r2) is used far more often than RE among scientists in general, and was also widely used by climate scientists. One advantage of R2 over RE is that you do not need to do Monte Carlo analysis to interpret the R2 value. See A Brief Retrospective on the Hockey Stick by Ross McKitrick, section 2 – Our Critique of the Method.
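To make the two statistics a bit more concrete, here is a minimal sketch of how R2 and RE could be computed over a verification period. It assumes the standard definitions (R2 as the squared Pearson correlation between reconstruction and observations; RE as one minus the ratio of the reconstruction’s squared errors to the squared errors of a prediction that just uses the calibration-period mean) — my own illustration, not code from MBH98 or from McIntyre and McKitrick:

import numpy as np

def verification_stats(recon_verif, obs_verif, calib_mean):
    # R2: squared correlation between reconstruction and observations
    # over the verification period.
    recon = np.asarray(recon_verif, dtype=float)
    obs = np.asarray(obs_verif, dtype=float)
    r2 = np.corrcoef(recon, obs)[0, 1] ** 2

    # RE: 1 - SSE(reconstruction) / SSE(calibration-period mean used as prediction).
    # RE > 0 means the reconstruction beats simply guessing the calibration mean.
    sse_recon = np.sum((obs - recon) ** 2)
    sse_mean = np.sum((obs - calib_mean) ** 2)
    re = 1.0 - sse_recon / sse_mean
    return r2, re

# Hypothetical numbers, for illustration only:
obs = [0.10, -0.20, 0.00, 0.30, -0.10]
recon = [0.05, -0.10, 0.10, 0.20, -0.20]
print(verification_stats(recon, obs, calib_mean=0.05))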

16) Behind a paywall, but can be read here. The title is “Global Warring In Climate Debate, The ‘Hockey Stick’ Leads to a Face-Off”.

17) The temperature reconstruction from MBH99 is shown, among other places, in a “spaghetti diagram” together with temperature reconstructions from several other studies in chapter 6 (from Working Group I) on “palaeoclimate”.

18) Jon Stewart joked about the “hide the decline” email on The Daily Show. I have not been able to find the video, but Stephen McIntyre retells Stewart’s joke in this video.

19) Not all of McIntyre’s slides are shown in the video, but those who are especially interested can see them here.

20) McIntyre wrote in a 2013 blog post that, up to that point, there still had not been much progress regarding what gets published in the peer-reviewed literature:

The IPCC assessment has also been compromised by gatekeeping by fellow-traveler journal editors, who have routinely rejected skeptic articles on the discrepancy between models and observations or pointing out the weaknesses of articles now relied upon by IPCC. Despite exposure of these practices in Climategate, little has changed. Had the skeptic articles been published (as they ought to have been), the resulting debate would have been more robust and IPCC would have had more to draw on [in] its present assessment dilemma.

21) Judith Curry has put it this way:

Simply, scientists are human and subject to biases. Further, they have personal and professional stakes in the outcomes of research – their professional reputation and funding is on the line. Assuming that individual scientists have a diversity of perspectives and different biases, then the checks and balances in the scientific process including peer review will eventually see through the biases of individual scientists. However, when biases become entrenched in the institutions that support science – the professional societies, scientific journals, universities and funding agencies – then that subfield of science may be led astray for decades and make little progress.

22) We have seen that it was difficult for Stephen McIntyre to get access to the underlying data and methods from Mann’s studies. The same was true of many other studies. The skeptics eventually requested data under the Freedom of Information Act, but that, too, proved difficult. The book Climategate: The CRUtape Letters describes this in more detail, particularly in connection with the skeptics wanting data for the weather stations used in the calculation of the global average temperature, so that they could check the calculations.

23) Brandon Shollenberger (not to be confused with Michael Shellenberger) has written several blog posts documenting that John Cook (and SkepticalScience) have been dishonest:

24) But the verification statistics for MBH98 were so poor that MBH98 could not really say much about what the temperature was that far back in time.

25) McIntyre criticized the climate scientists’ use of the chronology from the Yamal peninsula in northern Russia. The chronology had a hockey stick shape but was based on very little data (few trees) in recent times. McIntyre later suggested merging the Yamal data with data from nearby areas (including the Polar Urals). In 2013, CRU (and Briffa) started using a combined Yamal chronology that was very similar to McIntyre’s earlier suggestion. The new chronology did not have a hockey stick shape. (See also Andrew Montford’s post about The Yamal deception.)

26) On skepticalscience.com (the front page) it says:

Scientific skepticism is healthy. Scientists should always challenge themselves to improve their understanding. Yet this isn’t what happens with climate change denial. Skeptics vigorously criticise any evidence that supports man-made global warming and yet embrace any argument, op-ed, blog or study that purports to refute global warming. This website gets skeptical about global warming skepticism. Do their arguments have any scientific basis? What does the peer reviewed scientific literature say?

They do have a point in that many skeptics are insufficiently skeptical of arguments from other skeptics. So I would recommend being somewhat skeptical of arguments from both sides — both alarmists and skeptics.

27) The IPCC considers it very likely that the feedback effects from water vapour and albedo (how much sunlight is reflected from the Earth) are positive, i.e. that these effects contribute to increasing the climate sensitivity, while the cloud feedback is uncertain (although they assume it is positive as well):

The water vapour/lapse rate, albedo and cloud feedbacks are the principal determinants of equilibrium climate sensitivity. All of these feedbacks are assessed to be positive, but with different levels of likelihood assigned ranging from likely to extremely likely. Therefore, there is high confidence that the net feedback is positive and the black body response of the climate to a forcing will therefore be amplified. Cloud feedbacks continue to be the largest uncertainty. The net feedback from water vapour and lapse rate changes together is extremely likely positive and approximately doubles the black body response [i.e. the climate sensitivity, ECS, is doubled from about 1°C to about 2°C].

28) The IPCC agrees, writing:

For scenarios of increasing [radiative forcing], TCR is a more informative indicator of future climate change than ECS.

29) Zeke Hausfather, a climate scientist who, among other things, works with climate models, explains:

“Sensitivity” is something that emerges from the physical and biogeochemical simulations within climate models; it is not something that is explicitly set by modellers.

30) The IPCC writes:

The Coupled Model Intercomparison Project Phase 5 (CMIP5) model spread in equilibrium climate sensitivity ranges from 2.1°C to 4.7°C[.]

31) Knutti et al. (2017) write:

Our overall assessment of ECS and TCR is broadly consistent with the IPCC’s, but concerns arise about estimates of ECS from the historical period that assume constant feedbacks, raising serious questions to what extent ECS values less than 2 °C are consistent with current physical understanding of climate feedbacks.

There are also many other climate scientists who disagree with Lewis that we should place most trust in the more observation-based studies when estimating climate sensitivity. An article at CarbonBrief comments on a new study (Sherwood et al. (2020)), which concludes that a low climate sensitivity is unlikely.

33) See the Appendix in the long version of Lewis and Crok’s climate sensitivity report.

34) In the article, they write:

The RCPs provide a starting point for new and wide-ranging research. However, it is important to recognize their uses and limits. They are neither forecasts nor policy recommendations, but were chosen to map a broad range of climate outcomes. The RCPs cannot be treated as a set with consistent internal logic. For example, RCP8.5 cannot be used as a no-climate-policy reference scenario for the other RCPs because RCP8.5’s socioeconomic, technology and biophysical assumptions differ from those of the other RCPs.

And in chapter 12 (from Working Group I) of the IPCC’s previous assessment report it says:

It has not, in general, been possible to assign likelihoods to individual forcing scenarios.

35) According to Judith Curry:

Skeptics generally support nuclear energy and natural gas, but are dubious of rapid expansion of wind and solar and biofuels.

36) It has been argued that solar energy (and wind) get unfair advantages in the competition with other energy sources, and that these amount to indirect subsidies to solar energy, which increase the price consumers have to pay for electricity.

37) 90 % certain to be between 0.5 % and 8.2 %.

That the IPCC believes climate change will have relatively low costs can also be read in chapter 10 (from Working Group II) of their previous assessment report:

For most economic sectors, the impact of climate change will be small relative to the impacts of other drivers (medium evidence, high agreement).

38) The question the economists were asked to answer was: “If the global community wants to spend up to, say $250 billion per year over the next 10 years to diminish the adverse effects of climate changes, and to do most good for the world, which solutions would yield the greatest net benefits?”

39) As part of Cook et al. (2013), emails were also sent to the authors of the reviewed studies, asking how the authors themselves would categorize their own study or studies. A higher share (64.5 %) now expressed an opinion about the human contribution, and again 97 % of those expressing an opinion held that humans contribute to climate change or global warming — either a little or a lot. Cook et al. (2013) did not report how many of the authors said that their study supported the claim that human activity was the main cause of global warming since 1950, but Dana Nuccitelli, a co-author of Cook et al. (2013) who writes at skepticalscience.com, wrote in the comment section of a blog post that there were 228 (of 2143, roughly 11 %) studies where the authors said their study supported that claim:

The self-rating column looks like it has 228 Category 1 results. There were a further 18 where 1 author rated a paper as a 1, but a second author rated it as a 2.

40) Faktisk.no did not only write that scientists believe humans contribute to climate change, but also that many scientists believe we are the main cause. Faktisk.no quotes (in Norwegian translation) the following statement from NASA:

Multiple studies published in peer-reviewed scientific journals show that 97 percent or more of actively publishing climate scientists agree: Climate-warming trends over the past century are extremely likely due to human activities.

The source NASA cites is a 2016 study titled “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming“. The lead author is John Cook, so I call it Cook et al. (2016). Several of the co-authors are, like Cook, authors of other consensus studies that are also discussed in this study, including Naomi Oreskes, Peter Doran and William Anderegg.

NASA’s claim is an overstatement of what the study says. Cook et al. (2016) conclude:

We have shown that the scientific consensus on [Anthropogenic Global Warming] is robust, with a range of 90%–100% depending on the exact question, timing and sampling methodology.

This is not entirely accurate either. The study defines the consensus as follows:

The consensus position is articulated by the Intergovernmental Panel on Climate Change (IPCC) statement that ‘human influence has been the dominant cause of the observed warming since the mid-20th century'[.]

But Cook et al. (2016) also list the “definition of consensus” for the various studies, and for several of them the consensus definition does not match the IPCC’s definition. We have already seen that Cook et al. (2013) does not use the IPCC’s definition. For Doran and Zimmerman (2009), the consensus definition is “Human activity is a significant contributing factor in changing mean global temperatures”. For Stenhouse et al. (2014), the consensus definition is “Humans are a contributing cause of global warming over the past 150 years”. For Carlton et al. (2015), the question was “Do you think human activity is a significant contributing factor in changing mean global temperatures?”

For each study where it was possible, Cook et al. (2016) give numbers both for all respondents (or the equivalent) and for publishing climate scientists (a subset of “all”). This shows that among scientists with many publications in climate science, more support the consensus definition than among other scientists:

This may — at least to some extent — be because it has been disproportionately difficult for skeptics to get published in scientific climate journals.

In the image above, “C13” is Cook et al. (2013); “DZ1”, “DZ2” and “DZ3” are Doran and Zimmermann (2009); “S141”, “S142” and “S143” are Stenhouse et al. (2014); and “C151” and “C152” are Carlton et al. (2015). All of these thus had a weaker consensus definition than the IPCC. “A10200” is a subset of Anderegg et al. (2010) — the 200 authors who had published the most climate-related articles (there were 1372 respondents in total). The consensus result for all respondents (which was 66 %) is not plotted, and there are other studies that are not plotted either.

41) Faktisk.no writes:

A 2016 survey of 1868 scientists showed that, among those with at least ten peer-reviewed scientific publications, there is 90 percent agreement that man-made greenhouse gas emissions are the most important driver of climate change.

They link to a study by Verheggen and others, with John Cook as one of the co-authors. The study is titled “Scientists’ Views about Attribution of Global Warming“. It was published on 22 July 2014, and according to the study the survey was conducted in 2012 (not 2016).

If you want to find out how many scientists believe humans are the main cause of global warming, this is a much better study than Cook et al. (2013).

Verheggen et al. (2014) sent a survey with 35 questions to 7555 scientists and received responses from 1868 of them. Question 1 was “What fraction of global warming since the mid-20th century can be attributed to human-induced increases in atmospheric [greenhouse gas] concentrations?” The answers were distributed as follows (blue dots show the percentages for all 1868 respondents):

66 % of the 1868 survey respondents held that the human contribution to global warming from greenhouse gas emissions was more than 50 %. But quite a few thought the question was difficult to answer. Among those who gave a quantitative answer, the share was 84 %.
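As a quick sanity check of how those two percentages relate (my own back-of-the-envelope arithmetic, not a calculation from the paper):

# If 66% of all 1868 respondents answered "more than 50%", and that group makes up
# 84% of those who gave a quantitative answer, then the share of respondents who
# gave a quantitative answer at all is roughly:
quantitative_share = 0.66 / 0.84
print(f"{quantitative_share:.0%}")  # ~79%, i.e. roughly 1470 of the 1868 respondents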

The first part of question 3 was similar to question 1, but the answer options were more qualitative than quantitative: “How would you characterize the contribution of the following factors to the reported global warming of ~0.8 °C since preindustrial times: [greenhouse gases], aerosols, land use, sun, internal variability, spurious warming?” The first part of question 3 thus concerned greenhouse gases, and the answers were distributed as follows (unfortunately the figure does not show the percentages for all 1868 respondents, but since the four categories are roughly equal in size, a simple average gives a good approximation):

Considerably fewer answered that they did not know on this question than on question 1.

Those who answered “Strong warming” are naturally counted as supporting the consensus definition that human greenhouse gas emissions have contributed most of the temperature increase since 1950. Verheggen et al. (2014) judged that some of those who chose “Moderate warming” also support the consensus definition — namely those who had not chosen “Strong warming” on any of the sub-questions of question 3 — which does not seem unreasonable, but it is not entirely obvious that it is acceptable either.

In any case, in this way they found that 83 % of the 1868 respondents supported the consensus definition, or 86 % of those who were not undetermined.

The number faktisk.no used is arrived at as follows:

Excluding undetermined answers, 90% of respondents, with more than 10 self-declared climate-related peer-reviewed publications, agreed with dominant anthropogenic causation for recent global warming. This amounts to just under half of all respondents.

Again, I am somewhat skeptical of the heavy focus on those who have published the most studies, since, as mentioned, it has been disproportionately difficult for skeptics to get published in the peer-reviewed literature. One positive thing about this study is that it includes some skeptics who have only published outside peer-reviewed journals (“gray literature”).

Even though Verheggen et al. (2014) in many ways appears to be a good study, they have misrepresented the results of other studies. They write:

However, Oreskes, Anderegg et al., and Cook et al. reported a 97% agreement about human-induced warming, from the peer-reviewed literature and their sample of actively publishing climate scientists […]. Literature surveys, generally, find a stronger consensus than opinion surveys. This is related to the stronger consensus among often-published — and arguably the most expert — climate scientists.

But as we have seen, Cook et al. (2013) had a completely different consensus definition from Verheggen et al. (2014). While the consensus definition in Verheggen et al. (2014) concerned human activity as the main cause, the consensus definition in Cook et al. (2013) was merely that humans contribute to global warming.

Later, though, they write:

Different surveys typically use slightly different criteria to determine their survey sample and to define the consensus position, hampering a direct comparison. It is possible that our definition of “agreement” sets a higher standard than, for example […] Doran and Kendall-Zimmermann’s survey question about whether human activity is “a significant contributing factor”.

It is not just possible, it is certain.

42) Admittedly, the IPCC’s summaries (Summary for Policymakers) tend to be more alarmist than the reports they summarize. Richard Tol, a Dutch economist who has contributed a great deal to the IPCC’s reports, said the following about the chapter he was convening lead author for in the IPCC’s previous assessment report:

That [many of the more dramatic impacts of climate change are really symptoms of mismanagement and poverty and can be controlled if we had better governance and more development] was actually the key message of the first draft of the Summary for Policymakers. Later drafts … and the problem of course is that the IPCC is partly a scientific organisation and partly a political organisation and as a political organisation, its job is to justify greenhouse gas emission reduction. And this message does not justify greenhouse gas emission reduction. So that message was shifted from what I think is a relatively accurate assessment of recent developments in literature, to the traditional doom and gloom, the Four Horsemen of the Apocalypse and they were all there in the headlines; Pestilence, Death, Famine and War were all there. And the IPCC shifted its Summary for Policymakers towards the traditional doom and gloom [message].

43) Climate Feedback, a fact-checker for Facebook, has criticized Shellenberger’s article. Shellenberger has responded, and Mallen Baker has commented on the conflict on YouTube.

44) Curry writes:

The gate-keeping by elite journals has gotten worse [in my opinion], although the profusion of new journals makes it possible for anyone to get pretty much anything published somewhere.

She also mentions two other things that have gotten worse in recent years:

  • Politically correct and ‘woke’ universities have become hostile places for climate scientists that are not sufficiently ‘politically correct’
  • Professional societies have damaged their integrity by publishing policy statements advocating emissions reductions and marginalizing research that is not consistent with the ‘party line’


Why The Future Will Be An Unimaginable Utopia Beyond Our Wildest Imaginations

Humanity is moving ever faster towards a perfect world, but we’ll probably never get all the way to perfect. A somewhat plausible alternative to this utopian future is that humanity destroys itself. Typical science fiction dystopias, on the other hand, are quite unrealistic, in my view.

Interstellar is one of my favorite movies, but there’s one thing about it I don’t like: it’s set in a dystopian future where sandstorms and blight make it increasingly difficult for humans to survive. Although science fiction is my favorite genre, it annoys me a bit that science fiction movies are often dystopian to some degree.

Unfortunately, news media also have a tendency to focus on the negative, so it’s not surprising if you have a negative view of the world today. And with science fiction movies often being dystopian, it’s not surprising if you have a negative view of the future as well. But if we look at the bigger picture – the trends – how the world is changing – we actually see that some very positive changes have been – and are – happening.

Terrorism, war, famine, natural disasters, economic recessions, global warming, climate change, overpopulation… We hear about this in the media all the time. What we don’t hear about nearly as much is that we’re living longer, getting richer, eradicating extreme poverty, becoming more peaceful, curing diseases and working on curing aging.

And in the last 100 years, the cost of food has come down by a factor of 10, electricity cost by a factor of 20, transportation cost by a factor of 100 and communication cost by a factor of 1000! This is all according to Peter Diamandis in his excellent TED Talk Abundance is our future.

Many people are probably aware that the world in many ways has gotten better over the last centuries. But did you know that the risk of dying from violence today is lower than at any other time in human history? That the risk of dying from natural disasters is also lower than at any other time in human history? With all the focus on climate change, many people are surprised to hear this.

In a sense, we already live in a utopian world. If we go back just 100 years, people’s lives were a lot less pleasant. As I wrote in another blog post:

[100 years ago] traveling was slow, air-conditioning was uncommon, there was no radio or television, and just a few movies. You could listen to recorded music, but the quality was poor, there was less music to choose from and you couldn’t download it from the Internet, because, of course, there was no internet. There were no mobile phones. Medical care was both more painful and less effective, and women had a near 1% chance of dying giving birth.

So from the perspective of someone living in 1919, the world of 2019 would seem quite utopian and hard to imagine. Yet we know there are huge problems in the world today as well. Likewise, the world will be so much better a few decades from now, and that future world will seem utopian and hard to imagine for 2019-humans, but there are still going to be problems, though they won’t be as big as today’s problems.

But how can we be sure that the future will be better than the present, that the positive trends will continue? Well, no one can know exactly what will happen in the future. But some things can actually be predicted with a high degree of certainty if we just assume that our economic system doesn’t break down all over the world.

One of the things that’s easy to predict is that our technology is going to improve over time. It has actually turned out that for one aspect of technological change we can even predict with surprising accuracy exactly how fast progress will be. What I’m referring to is the cost of computation – how much computing power you can get per constant dollar. In the 1980s, Ray Kurzweil noticed how predictable the cost of computing had been in the previous decades and extrapolated this into the future. So far his predictions have been astonishingly accurate.

Just as we can’t predict what one molecule in a gas will do – it’s hopeless to predict a single molecule – yet we can predict the properties of the whole gas, using thermodynamics, very accurately. It’s the same thing [with technology]. We can’t predict any particular project, but the result of this whole worldwide, chaotic, unpredictable activity of competition and the evolutionary process of technology is very predictable. And we can predict these trends far into the future.

– Ray Kurzweil, The accelerating power of technology (TED Talk, 2005)

Cheap and efficient computation is a very important factor for technological progress in general, so cheaper and more efficient computation is good news. It means we can accomplish more in a shorter period of time. And the amount of computing power we have access to doesn’t just increase by the same amount each year – it increases exponentially, which means that it will increase by a larger amount this year than it did last year, and by an even larger amount next year. Since humans have evolved to think linearly, this means the future is arriving faster than most of us expect. This, too, is good news, since it means we can fix the world’s big problems sooner than otherwise and improve the lives of more people sooner.
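As a toy illustration of the difference between linear intuition and exponential growth (made-up numbers, not Kurzweil’s actual data): a quantity that doubles every two years is roughly a thousand times larger after 20 years, while one that grows by the same fixed amount each year has only grown about twenty-fold:

# Toy comparison of linear vs exponential growth (hypothetical numbers).
linear = 1.0
exponential = 1.0
for year in range(1, 21):
    linear += 1.0            # fixed increase each year
    exponential *= 2 ** 0.5  # doubling every two years
    if year % 5 == 0:
        print(f"year {year:2d}: linear = {linear:5.1f}, exponential = {exponential:7.1f}")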

Continued technological progress will have huge implications for human health:

Of course, everyone won’t get access to these technologies and cures right after they’re developed. They’ll be expensive at first, but if there’s enough competition, the prices will soon fall to affordable levels, as the quality also improves.

You may worry that curing aging will lead to overpopulation. I’ve previously argued that that’s not the case. Yes, there will be more people, but longer lives won’t lead to an exponential increase in population, so when technology progresses exponentially, we’ll be able to handle the increasing population. More people isn’t just a bad thing either. More people also means more people who can work on solutions to the world’s grand challenges, so we can solve them faster. The markets for everything will also be bigger, so there’s more money to be made for those who are able to solve problems – which makes it more likely that they will be solved. And if we are to believe Peter Diamandis, the world’s biggest problems are the world’s biggest business opportunities.

Human ingenuity and the magic of the price system

We sometimes hear that we’ll run out of chocolate in a few decades. According to some scientists the reason is that cacao plants can only grow in highly specific conditions near the equator, and if global warming causes the temperature in these areas to rise by just 2°C, these plants will die. They’re probably right that the plants will die if the temperature rises, but as long as people enjoy eating chocolate and are willing to pay for it, we won’t run out of chocolate.

I don’t know exactly how the problem will be solved provided it turns out to be a problem, but one way or another, we will have chocolate or something very similar in 2050. The reason is human ingenuity, the price system, and that the market for chocolate is huge, so there’s a lot of money to be made. If cacao plants start dying, the price of chocolate increases. The fewer cacao plants, the higher the price of chocolate. And the higher the price of chocolate, the greater the incentive to come up with new ways to make chocolate, and the more money and effort investors and entrepreneurs will be willing to spend trying to come up with new solutions.

In this video, Steve Horwitz explains how the price system prevents us from running out of resources:

Going back to the overpopulation issue, you might now – at least if you watched the video – understand how the price system and human ingenuity will be able to handle an increasing population – it’s the same argument that we used for chocolate: when a resource, such as food, becomes scarce, prices increase, giving incentives to produce more food more efficiently. The more scarce the resource, the higher the price, and the more time and money investors and entrepreneurs will spend trying to find solutions – either finding or producing more of the resource, or finding a substitute. And when they do solve it, competition and the law of supply and demand cause prices to go back down.

As for global warming, if that becomes a big problem, we’ll be able to remove excess CO2 and other greenhouse gases from the atmosphere at relatively low cost in a few decades. Currently, however, the costs of removing CO2 from the atmosphere are out of range of what can be handled by the world today, according to nanotech pioneer K. Eric Drexler. Until then, the best solution would probably be to invest in renewable energy research, so that cleaner energy sources can out-compete fossil fuels.

But what if our worst fears about global warming had already become reality? In that case we obviously wouldn’t be in a position where we could afford to wait decades before coming up with solutions. So what would happen?

Since that would be such a gigantic problem, affecting more or less everyone on Earth, no doubt there would be a lot of different approaches. Although expensive, one of the solutions would probably be to remove greenhouse gases from the atmosphere, but since it doesn’t solve the problem immediately, but would take many years or even decades, there would be a strong demand for other solutions too – solutions that solve people’s problems right here and now.

And although not every challenge can be attributed to climate change, people are today already adapting to challenging climate or weather conditions around the world. In The Netherlands they have a long history of building dikes to prevent flooding. In Bangladesh they’ve created floating farms to cope with recurrent floods. More robust crop species have been engineered to resist drought, flooding, heat, cold and salt.

So we are already adapting to difficult conditions. We can expect this to happen on a larger scale in a worst-case climate scenario since the higher the cost of a problem, the more money we’re willing to spend to fix it. So extreme climate change would be expensive, but we would adapt and eventually solve the problem.

In general, we’re willing to spend money on things that improve our lives – things that make our lives easier, better, and more meaningful or interesting. And providing things that people are willing to pay for can be very profitable. That’s another reason to expect the future to be better than the present.

So, will humanity destroy itself?

In general, I believe that the future of humanity is technovolatile – meaning either a utopia beyond our wildest imaginations, or extinction. Given the technological capabilities that will be unlocked this century, anything in between (i.e. a traditional dystopia) is hard to imagine.

This quote is from an answer by Maxim Kazhenkov to the question What will our future look like, dystopian or utopian? on Quora. I tend to agree with what he’s saying here. So the way I view it, there is a chance that humanity will destroy itself. The reason is that we’re developing more and more potent technologies. This means individuals or small groups of people will have the power to wreak ever more havoc.

But I do remain more optimistic than pessimistic in this regard as well. The technological dangers won’t come as a surprise so we’ll have time to implement defences. In his 2001 essay The Law of Accelerating Returns, Ray Kurzweil also expresses a slightly optimistic view while discussing the dangers of self-replicating nano-technology:

As a test case, we can take a small measure of comfort from how we have dealt with one recent technological challenge. There exists today a new form of fully nonbiological self replicating entity that didn’t exist just a few decades ago: the computer virus. When this form of destructive intruder first appeared, strong concerns were voiced that as they became more sophisticated, software pathogens had the potential to destroy the computer network medium they live in. Yet the immune system that has evolved in response to this challenge has been largely effective. Although destructive self-replicating software entities do cause damage from time to time, the injury is but a small fraction of the benefit we receive from the computers and communication links that harbor them. No one would suggest we do away with computers, local area networks, and the Internet because of software viruses.

One might counter that computer viruses do not have the lethal potential of biological viruses or of destructive nanotechnology. Although true, this strengthens my observation. The fact that computer viruses are not usually deadly to humans only means that more people are willing to create and release them. It also means that our response to the danger is that much less intense. Conversely, when it comes to self replicating entities that are potentially lethal on a large scale, our response on all levels will be vastly more serious.

What do you think? Will the future be utopian, dystopian or something in between, or will humanity become extinct before the end of the century? (Michio Kaku argues that if we can just make it past this century, we may be in the clear.)

A 97 % Climate Consensus Lie – What You Need To Know About ‘Cook et al. (2013)’

Cook et al. (2013), “Quantifying the consensus on anthropogenic global warming in the scientific literature” might be the most commonly cited study claiming there’s a consensus among climate scientists that “humans are causing climate change”.

Cook et al. went through the abstracts of 11,944 peer-reviewed papers published between 1991 and 2011 – those that contained the terms “global warming” or “global climate change” – after some search results had been filtered out.

Cook et al. then categorized the papers according to the authors’ expressed views about global warming, whether and to what extent humans are causing it. At first, they didn’t publish all their numbers, but their data file came out some weeks after the paper itself was published. And you can now find the data file in the Supplementary data section of Cook et al. (2013).

The data file contains one line for each of the papers they looked at. The last number on each line is the endorsement level, which goes from 1 to 7, where 1 means the authors agree with the standard definition of consensus, that humans have caused at least half of all global warming since 1950. If you count up everything, you’ll find the following numbers: 1)

1. Explicit, quantified endorsement (standard definition of consensus) 64
2. Explicit, unquantified endorsement 922
3. Implicit endorsement 2,910
4a. No position 7,930
4b. Expression of uncertainty 40
5. Implicit rejection 54
6. Explicit, unquantified rejection 15
7. Explicit, quantified rejection 9

These numbers are copied from the article “Climate Consensus and ‘Misinformation’: A Rejoinder to Agnotology, Scientific Consensus, and the Teaching and Learning of Climate Change“, written by David R. Legates and others. I will refer to it as “Legates et al.”.

As you can see, 7930 articles expressed no opinion about the role of humans in regards to global warming or climate change. Of the remaining 4014 paper abstracts, 97.1 % expressed implicitly or explicitly that humans do contribute to global warming, but not necessarily that humans are the main cause.

If we don’t exclude the papers whose abstracts didn’t express an opinion about the role of humans, we find that 32.6 % of the papers agreed that humans contribute (at least somewhat) to global warming. 2)

But according to the data file from Cook et al., only 64 papers expressed the view that human activity is the main cause of global warming. These 64 correspond to 1.6 % of the papers that had expressed an opinion, or 0.5 % of all the papers.
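These percentages follow directly from the counts in the table above. A minimal sketch of the arithmetic (my own recomputation, using the numbers as given):

# Recompute the percentages quoted above from the endorsement counts.
counts = {
    "1_explicit_quantified_endorsement": 64,
    "2_explicit_unquantified_endorsement": 922,
    "3_implicit_endorsement": 2910,
    "4a_no_position": 7930,
    "4b_uncertain": 40,
    "5_implicit_rejection": 54,
    "6_explicit_unquantified_rejection": 15,
    "7_explicit_quantified_rejection": 9,
}

total = sum(counts.values())                   # 11,944 papers in total
expressed = total - counts["4a_no_position"]   # 4,014 abstracts took a position
endorse = (counts["1_explicit_quantified_endorsement"]
           + counts["2_explicit_unquantified_endorsement"]
           + counts["3_implicit_endorsement"])  # 3,896 endorse at least some human contribution

print(f"{endorse / expressed:.1%}")  # 97.1% of abstracts taking a position
print(f"{endorse / total:.1%}")      # 32.6% of all abstracts
print(f"{counts['1_explicit_quantified_endorsement'] / expressed:.1%}")  # 1.6%
print(f"{counts['1_explicit_quantified_endorsement'] / total:.1%}")      # 0.5%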

Legates et al. then went through these 64 papers and found that only 41 of them were categorized correctly. If so, that corresponds to 1.0 % of the papers that had expressed a view about the role of humans, or 0.3 % of all the papers that Cook et al. went through.

John Cook, the lead author of Cook et al., was co-author of another article, Bedford and Cook (2013), where they write:

Of the 4,014 abstracts that expressed a position on the issue of human-induced climate change, Cook et al. (2013) found that over 97 % endorsed the view that the Earth is warming up and human emissions of greenhouse gases are the main cause.

Notice that it says “main cause”. But now that we have seen the numbers from Cook’s data file, we know that’s not true, and it’s most likely a deliberate lie.


Footnotes:
1) Endorsement levels 4a and 4b don’t exist in the data file, only level 4 does, with a count of 7970, as expected. But the number of papers rated as “Uncertain on AGW” (40) was included in the Results section of Cook et al.

2) In addition to categorizing the papers based on their abstracts, Cook et al. also emailed 8,547 authors and asked them to categorize their own papers. They received 1,200 responses. Cook et al. write: “Compared to abstract ratings, a smaller percentage of self-rated papers expressed no position on AGW (35.5%). Among self-rated papers expressing a position on AGW, 97.2% endorsed the consensus.” And by “endorsed the consensus”, we have to assume that they still mean “agrees that humans contribute at least somewhat to global warming”. (Cook et al. did not provide the number of self-rated papers for each of the seven endorsement levels.)

Other notes:
Other studies have also concluded that there's a near-consensus among climate scientists that humans are the main cause of global warming or climate change. I have not looked into those studies, but Legates et al. have criticized some of them (those that are cited by Cook et al.).

John Cook is also the man behind skepticalscience.com, which according to the website “gets skeptical about global warming skepticism.”

The 7 Best Reasons For Being A Libertarian

Libertarians want to severely reduce the size of government. Some libertarians even want to eliminate governments altogether.

To most people that probably sounds crazy. What in the world are they thinking, you may wonder. Well, I’ll try to explain.

The reasons why people want smaller governments fall into two main categories: moral and consequentialist. For libertarians, these two types of arguments lead to the same conclusion – that libertarianism is the superior political system. As Murray Rothbard put it, the utilitarian and the moral, natural rights and general prosperity, go hand in hand.

The moral argument
There are several similar versions of the moral argument. I like to argue that all interpersonal relationships should be voluntary – you shouldn't be allowed to force someone to do something they don't want to do. Naturally, this also means you shouldn't be allowed to take something that belongs to someone else. You as an individual shouldn't be allowed to do that, and a large group of people, e.g. the government, shouldn't be allowed to do so either, even if a majority thinks it's okay. Taxes are thus immoral, according to this argument, unless you have a voluntary contract with the government where you agree to pay taxes.

That’s all I wanted to say about the moral argument for now. People have different kinds of morals, and not everyone will find the libertarian moral argument compelling. So what I really want to focus on in this article are the consequentialist arguments.

Many – perhaps most – libertarians believe the government does more harm than good. Of course, that’s a statement that’s hard to prove or disprove, but I think David Friedman has done a good job of substantiating this claim in his book The Machinery Of Freedom. I’m going to include several quotes from The Machinery Of Freedom below.

Following are what I think are the best consequentialist arguments for libertarianism.

Competition is better than monopoly
If the goal is to give consumers the best possible products and services at affordable prices – greater prosperity, in other words – then competition is better than monopoly. New companies will try to steal your customers by making a better or cheaper product. To be able to compete, you will have to improve as well. The more elements of monopoly there are, the weaker this very important incentive for progress becomes.

Strangely, many people argue that an unregulated market economy always leads to monopoly. David Friedman, in the chapter Monopoly I: How To Lose Your Shirt in The Machinery Of Freedom, explains really well the difficulties of maintaining a monopoly in a free market. He concludes:

Monopoly power exists only when a firm can control the prices charged by existing competitors and prevent the entry of new ones. The most effective way of doing so is by the use of government power. There are considerable elements of monopoly in our economy, but virtually all are produced by government and could not exist under institutions of complete private property.

That’s not to say you can’t have elements of monopoly in a free market. In many cases one single firm can have a dominating position, but in a free market that’s actually not such a terrible situation because, unlike for government-created monopolies, there is potential competition:

Even a natural monopoly is limited in its ability to raise prices. If it raises them high enough, smaller, less efficient firms find that they can compete profitably. […] [The natural monopoly] can make money selling goods at a price at which other firms lose money and thus retain the whole market. But it retains the market only so long as its price stays low enough that other firms cannot make a profit. This is what is called potential competition.

– David Friedman, The Machinery Of Freedom, Chapter 6: Monopoly I: How To Lose Your Shirt

Faster development of medical drugs
One of the most heavily regulated areas of Western society is health and medicine. People don't want the drugs they're taking to be dangerous, so they want the government to approve drugs that are safe. That has probably had the intended effect of making the drugs that are on the market safer. However, there are huge hidden costs, as this strategy does not minimize the number of deaths.

Regulators really don't want to approve an unsafe drug – that's very understandable – so they try to be cautious, and instead of approving a drug, they sometimes ask for more tests and documentation. In some cases that may be the right thing to do. In others it may not. If a new effective drug is delayed by the approval process, people who could have been saved will die waiting for the drug to be approved:

[I]n 1981 […] the [Food and Drug Administration (FDA) – the federal agency in charge of approving medical drugs in the US] published a press release confessing to mass murder. That was not, of course, the way in which the release was worded; it was simply an announcement that the FDA had approved the use of timolol, a β-blocker, to prevent recurrences of heart attacks.

At the time timolol was approved, β-blockers had been widely used outside the U.S. for over ten years. It was estimated that the use of timolol would save from seven thousand to ten thousand lives a year in the U.S. So the FDA, by forbidding the use of β-blockers before 1981, was responsible for something close to a hundred thousand unnecessary deaths.

-David Friedman, The Machinery Of Freedom, Chapter 21: It’s My Life

If the government did not regulate drugs, it's very likely that private firms would emerge to rate the drugs made by the drug companies, for a fee. This is how it works in many other areas. These firms would compete with each other, and it would be extremely important for them that people trusted them. Too many mistakes and they're out of business. So they, too, need to be cautious, but at the same time, drug companies want them to be fast and affordable, so they also have an incentive not to be overcautious. See this video for a better explanation:

It wouldn’t be forbidden to sell unrated drugs or drugs with a really bad rating. However, people would know that there are big risks associated with those drugs. But when a potentially dangerous drug has the potential to save your life, the risk is probably worth it. It would be for me, anyway.

Government agencies aren't as agile as private firms need to be, since private firms generally want to keep up with the competition. The Food and Drug Administration (FDA) is no exception. According to Peter Huber, author of The Cure in the Code: How 20th Century Law Is Undermining 21st Century Medicine:

The FDA is still largely stuck in trial protocols that do not let us exploit fully our ability to tailor drugs precisely to the patient’s molecular profiles. The only way you can find out how a drug is going to interact with different profiles is actually to prescribe it to the patients. The FDA is still very much anchored in what are called randomized blind protocols. There’s no learn-as-you-go process in the early trials, where you learn about the different ways that different patients can respond to the same drug and then refine the way you prescribe that drug.

[…]

[T]he protocols these institutions are using, they’re now 50 years old. I think agencies develop some inertia. They get very good at doing something they’ve done for a long time and, when revolutionary change occurs, it’s difficult to adapt.

Although I believe that a libertarian society in most ways would be better than what we currently have, I actually have a good life. I really can’t complain. But others 1) aren’t so lucky. I am getting older, though, and medical science still can’t cure every disease, or aging itself. Eventually, all diseases and even aging will be cured, but until then, one of my biggest concerns is that I’ll get a deadly disease that’s presently incurable.

People older than me will generally be in a worse position than I'm in when it comes to their health. And if science and technology move too slowly, even young people will eventually get to the point where their health becomes a problem. To put it bluntly, they may die… That's the reason I think it's so important that science and technology move forward as fast as possible; the sooner we can cure all diseases and aging, the more people can be saved, allowing them to live for thousands of years or more in excellent health (if they want).

About 100,000 people die every day from aging or age-related diseases. So just a small speed-up of the research process can result in millions of lives saved. Equivalently, just a small slow-down of the research process can result in millions of lives lost. As you probably remember, David Friedman said that the FDA indirectly confessed to mass murder by delaying approval of a drug that was obviously safe. I suspect that the effect of medical drug regulation and other governmental regulations that slow down technological progress is orders of magnitude bigger than that.

If progress has been slowed by just a handful of years, we’re looking at about 200 million lives that could potentially be saved worldwide, but won’t be.
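
The rough arithmetic behind that figure, using the ~100,000 deaths per day mentioned above (the number of years of delay is of course just an assumption):

    deaths_per_day = 100_000      # approximate daily deaths from aging and age-related diseases
    years_of_delay = 5.5          # "a handful of years" of slowed progress (assumed)
    lives_lost = deaths_per_day * 365 * years_of_delay
    print(f"{lives_lost:,.0f}")   # 200,750,000 -- roughly 200 million lives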

Until now, we’ve talked about the number of lives lost. However, a better measure would probably be the number of life-years lost. If something causes one person to die ten years prematurely, that’s 10 life-years lost. If something causes 100 people to die one year earlier, that’s 100 life-years lost. The more life-years that are lost, the worse.

For the roughly 100,000 people who died because they couldn't get β-blockers, the number of extra years they would have lived with β-blockers would probably be somewhere between 1 and 10 on average. However, people who are alive and can take advantage of effective rejuvenation therapies when they're developed could easily live for thousands of years. And while the quality of life of the people who could have lived a few extra years with β-blockers might be relatively low, I expect most people who are alive in a highly technologically advanced post-aging world to be quite happy (since psychological problems such as depression can probably also be treated very well at that point).

Less war
Fighting wars and killing people in foreign countries is arguably the worst thing governments do – at least one of the most brutal. I certainly don’t want to pay so that the Norwegian government can bomb other countries (which they did in Libya a few years back) – I find that highly immoral, but as long as I want to live in the country where I was born, I don’t have much of a choice – I can either pay my taxes or go to jail.

Wars are very expensive, though, so why are they fought?

Does the average American gain something from his government's wars in the Middle East? It's hard to think of even a single benefit for the average American. There are lots of costs, though. Most obviously, the wars cost a lot of money, which is paid by American taxpayers. Bombing people makes the families and friends of the victims angry, thus making a lot of enemies for the US and raising the risk of terrorist attacks. The higher risk of terrorist attacks is the reason for comprehensive security measures at e.g. airports, which is an inconvenience – thus a cost – to law-abiding citizens. The security measures aren't free money-wise either, so they naturally lead to higher airfare prices.

If wars in foreign countries are a net cost to Americans, it may seem strange that they're fought in the first place. But even though wars aren't a benefit overall, that doesn't mean nobody benefits. The weapons producers obviously have a lot to gain, so it's in their interest to lobby politicians to buy weapons from them. This is one aspect of the so-called military-industrial complex. From Wikipedia's article on war profiteering:

The phrase “military-industrial complex” was coined by President Dwight D. Eisenhower in his 1961 Farewell Address. This term describes the alliance between military leaders and arms merchants. Military officials attempt to obtain higher budgets, while arms manufactures seek profit. President Eisenhower warned the American people that going to war might not serve the interest of the nation, rather the institution of the military and weapons-producing corporations. The Iron Triangle comes into play here due to war profiting industries who make financial contributions to elected officials, who then distribute taxpayer money towards the military budget, which is spent at the advantage of arms merchants. The military-industrial complex allows for arms-producing corporations to continue to accumulate significant profit.

This means that the people who decide that their country should go to war (politicians) are not the same people who are paying for the war (taxpayers). That's a problem because it may lead to more wars being fought than is strictly necessary in order to defend the country, as we've seen in the case of the US, some European countries, and probably many other countries.

Now, try to imagine a society with no government. You can think of it as pretty similar to today's society, except that the useful tasks that are today performed by the government would be provided by private firms or charities that people voluntarily pay and choose to be customers of. Such a society – where there is no government, private property is respected and you have freedom of association – is called anarcho-capitalist.

Perhaps the best way to see why anarcho-capitalism would be so much more peaceful than our present system is by analogy. Consider our world as it would be if the cost of moving from one country to another were zero. Everyone lives in a housetrailer and speaks the same language. One day, the president of France announces that because of troubles with neighboring countries, new military taxes are being levied and conscription will begin shortly. The next morning the president of France finds himself ruling a peaceful but empty landscape, the population having been reduced to himself, three generals, and twenty-seven war correspondents.

We do not all live in housetrailers. But if we buy our protection from a private firm instead of from a government, we can buy it from a different firm as soon as we think we can get a better deal. We can change protectors without changing countries.

– David Friedman, The Machinery Of Freedom, Chapter 30: The Stability Problem

Democratic governments need consent, or at least not too much opposition, from the citizens in order to start wars. Since the real reasons to start wars have more to do with money and power than humanitarian goals, politicians have to lie to the public to get their consent. In Iraq, they lied about Saddam Hussein having weapons of mass destruction. On other occasions, supposed chemical weapons attacks have been used as the excuse.

It would also be more difficult to get consent from the governed if taxes had to be raised to finance each new war. Therefore wars aren't exclusively financed by taxes. According to Matthew McCaffrey, governments also finance wars through borrowing and inflation. Inflating (or increasing) the money supply (printing money) is something governments can do since they control the country's currency and money system. 2) Inflation eventually leads to higher prices, but this is a cost that's much less visible to people than direct taxes. McCaffrey talks about war financing at 17:54 – 25:00 and 31:53 – 36:33 in his talk about the economics of war.

I should mention that the number of people killed in war has declined in recent decades. Since 1946 the number of war deaths has been at a historical low, and the period since 1946 has been called The Long Peace. The atom bomb was developed during World War II – not long before 1946 – and no two countries with atomic bombs have ever fought a war with each other. So one of the reasons for The Long Peace could be the deterrent effect of the atomic bomb. As Yuval Noah Harari writes in the book Homo Deus:

Nuclear weapons have turned war between superpowers into a mad act of collective suicide, and therefore forced the most powerful nations on earth to find alternative and peaceful ways to resolve conflicts.

In addition, Harari emphasizes the fact that the world has been moving from a material-based economy into a knowledge-based economy:

Previously the main sources of wealth were material assets such as gold mines, wheat fields and oil wells. Today the main source of wealth is knowledge. And whereas you can conquer oil fields through war, you cannot acquire knowledge that way. Hence as knowledge became the most important economic resource, the profitability of war declined and wars became increasingly restricted to those parts of the world – such as the Middle East and Central Africa – where the economies are still old-fashioned material-based economies.

(Some libertarians want to have a strong military, so the less war argument – at least in the form expressed here – may not apply to all brands of libertarianism.)

Open borders and free trade

According to economists’ standard estimates, letting anyone take a job anywhere would roughly double global production – a bigger gain than any other economic reform known to man. […] We are talking about trillions of dollars of extra wealth creation, year after year.

– Bryan Caplan, The Case for Open Borders

Luckily, libertarians are generally in favor of open borders. We also want smaller governments, so we don't want governments to give money to immigrants. However, private individuals, charities etc. may still help those who can't find work or are otherwise struggling.

The main reason wealth creation would go up so much with open borders is that it's easier to be productive in more developed countries. As Bryan Caplan has said, if you lived in Syria or Haiti, you wouldn't be very productive either. An article in The Economist explains:

Workers become far more productive when they move from a poor country to a rich one. Suddenly, they can join a labour market with ample capital, efficient firms and a predictable legal system. Those who used to scrape a living from the soil with a wooden hoe start driving tractors. Those who once made mud bricks by hand start working with cranes and mechanical diggers. Those who cut hair find richer clients who tip better.

[…]

And the non-economic benefits are hardly trivial, either. A Nigerian in the United States cannot be enslaved by the Islamists of Boko Haram.

Most of the benefits of open borders go to the people who move from poor countries and their families (since many emigrants send money back to them). A policy of open borders is thus a policy that will reduce global inequality. However, according to the same Economist article, people in rich countries would see more poverty – they “would see many more Liberians and Bangladeshis waiting tables and stacking shelves”, but their poverty in a rich country would be “much less severe” than in their home country.

Many people – even some libertarians – worry that crime rates will go up, but, according to the article:

If lots of people migrated from war-torn Syria, gangster-plagued Guatemala or chaotic Congo, would they bring mayhem with them? It is an understandable fear (and one that anti-immigrant politicians play on), but there is little besides conjecture and anecdotal evidence to support it. Granted, some immigrants commit crimes, or even headline-grabbing acts of terrorism. But in America the foreign-born are only a fifth as likely to be incarcerated as the native-born. In some European countries, such as Sweden, migrants are more likely to get into trouble than locals, but this is mostly because they are more likely to be young and male. A study of migration flows among 145 countries between 1970 and 2000 by researchers at the University of Warwick found that migration was more likely to reduce terrorism than increase it, largely because migration fosters economic growth.

Open borders should also apply to trade, of course. There shouldn't be any tariffs or import quotas. Trade restrictions help certain industries in the country that imposes them, while hurting their foreign competitors – but they help only those industries; most people are harmed by higher prices and fewer alternative products.

Almost all economists, whether libertarian or not, actually agree that free trade would be better than protectionist policies when it comes to raising people’s standard of living. Milton Friedman talks about free trade and this agreement among economists in the video below, where he says:

We call a tariff a protective measure. It does protect – it protects the consumer very well against one thing. It protects the consumer against low prices. And yet we call it protection. Each of us tends to produce a single product. We tend to buy a thousand and one products.

So why don't all countries have free trade? It's because of the internal logic of the political marketplace, where concentrated interest groups have more power to influence policies than the general public, which is a dispersed group with less to gain than the special interest in each particular case.

No war on drugs
The use of recreational drugs often has negative health consequences for the user. That’s one of the reasons why many people don’t want them to be legal.

But according to libertarianism, there shouldn’t be laws against doing things that don’t harm others. So-called victimless crimes shouldn’t be crimes at all.

Not punishing victimless crimes will free up time and resources for the legal system to focus on more serious offenses. But the war on drugs has many other bad consequences as well. Kurzgesagt – In a Nutshell, in the video embedded below (3:01), explains:

Prohibition may prevent a certain amount of people from taking drugs, but in the process it causes huge damage to society as a whole. Many of the problems we associate with drug use are actually caused by the war against them.

For example prohibition makes drugs stronger. The more potent drugs you can store in as little space as possible, the more profit you’ll make. It was the same during alcohol prohibition which led to an increased consumption of strong liquor over beer.

The prohibition of drugs also led to more violence and murders around the world. Gangs and cartels have no access to the legal system to settle disputes so they use violence. This led to an ever-increasing spiral of brutality. According to some estimates, the homicide rate in the US is 25 to 75 percent higher because of the war on drugs, and in Mexico, a country on the frontline, an estimated 164,000 have been murdered between 2007 and 2014, more people than in the war zones of Afghanistan and Iraq in the same period, combined.

But where the war on drugs might do the most damage to society is the incarceration of nonviolent drug offenders. For example, the United States, one of the driving forces of the war on drugs, has 5% of the world’s total population but 25% of the world’s prison population largely due to the harsh punishments and mandatory minimums.

Minorities suffer because of this. Especially, African-Americans make up 40% of all US prison inmates, and while white kids are more likely to abuse drugs, black kids are 10 times more likely to get arrested for drug offenses.

No welfare trap
One of the first things that comes to mind if you’re going to find arguments in favor of having a government is probably that we obviously need a government to take care of the poor and needy. But I don’t think it’s that obvious.

A question one should ask in this regard is whether the poor would be helped in the absence of a government. With a very small government or no government at all, people would keep most or all of their earnings for themselves, meaning they could decide exactly how to spend (or save) it. Today, a lot of working people struggle to make ends meet. Keeping what they now pay in taxes would help their economic situation, even though they would have to pay out of pocket for services that are today paid for by taxes. The reason they would have more left over is that prices would get lower in a more competitive market, and because there's an overhead to collecting taxes.

Even today, with such high taxes and an expectation that it's the government that's responsible for helping the poor, people still give money to charitable organizations. Americans in particular are exceptionally generous. If people had more money left over, obviously more money would also be donated to charities.

Most of the people who donate would be interested in giving money to those organizations that were best at helping. Not that many people would be interested in researching which organizations were most effective, but some definitely would, and their conclusions might reach the general public through the media. There would thus be competition between charitable organizations. Those that were best at helping would tend to get the most money.

With such a strong incentive to help effectively, one would expect these organizations to give better help than what the government, which is a monopolist, does today.

And when we look at how governments do try to help poor people, we see that there's ample room for improvement. Often there's an abrupt cut-off in money paid out when earnings exceed a certain limit. This means that working more or getting a raise can cause total income to go down, which gives a strong incentive not to accept the raise or work more hours. This effect is called the welfare trap, because it traps people in relative poverty. It makes it harder to get out of poverty, because in order to do so, you typically have to accept a temporary decline in income.
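
A small numerical example may make the welfare trap clearer (the amounts are made up purely for illustration):

    # Hypothetical scheme: a 12,000 benefit that is cut off entirely once earnings exceed 20,000
    def total_income(earnings):
        benefit = 12_000 if earnings <= 20_000 else 0
        return earnings + benefit

    print(total_income(20_000))   # 32000 (20,000 earnings + 12,000 benefit)
    print(total_income(25_000))   # 25000 -- earning 5,000 more leaves you 7,000 worse off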

It's pretty obvious that welfare traps are bad, so in a competitive market for welfare services, the necessary effort would surely be put in to get rid of them. Also, the best way to help poor people is to enable them to make their own living, so when welfare organizations are evaluated, how well they do this would be an important criterion. It should also be in a welfare organization's own interest to enable its clients to make their own living, since doing so means it can pay out less money.

From my home country, Norway, we have another example of governments trying to help, but not doing it very well. People who've had a paid job earlier, but are now unemployed, can get money from a governmental organization called NAV. However, in order not to get their unemployment benefits cut, NAV clients need to attend various courses while they're unemployed. Fair enough. But there aren't that many different courses to choose from, and for some of the courses it can be hard to get a spot. Many then end up taking a job search training course that lasts for six months. From what I understand, it's not a very good course… People who have been unemployed for a long time often end up taking the course several times since they can't get spots in more relevant courses. This seems like a terrible waste of time – and taxpayer money. There's got to be a better way to help!

I’m not going to pretend to know what the exact best way to help is. But what I do know is that in a competitive market, it’s easier to discover the good solutions.

Freedom makes us happy

[T]he value to individuals of being able to run their own lives is typically greater than the value to anyone else of being able to control them – or in other words, […] increases in liberty tend to increase total utility.

– David Friedman, The Machinery Of Freedom, Chapter 42: Where I Stand

A libertarian society is a tolerant society. A society that doesn’t have laws against any voluntary/peaceful actions has proven that its citizens are mostly tolerant people who accept that other people have different priorities than themselves.

In a tolerant society, people who are a little bit different from everyone else have a better chance to thrive. They can follow their dreams and desires and do what they want with their lives to a greater extent than in other types of societies. Thus, it's very likely that people on average would be happier in a libertarian society than in other, less tolerant, ones.


So, that was my list of the seven best consequentialist arguments for being a libertarian. Do you agree that these are good arguments? Are there other arguments I should have included? Don’t agree at all? Then, what are your best arguments for not being a libertarian? Let me know in the comments.


1) Others could be sick or old people, people living in war zones, people who can’t get out of poor countries, or those that have been imprisoned for victimless crimes, for example.

2) We’ll see if decentralized cryptocurrencies like Bitcoin can do something about that.

Live Long Enough To Live “Forever”

[Image: Medical interface]

Taking good care of your own health may be much more important than you think.

Some people choose to live a fairly unhealthy life since, for some, that can be more fun than living healthily, while also thinking that a healthy life would only give them 5-10 extra years, which isn't that much in the grand scheme of things. That's a perfectly understandable priority.

But over the next few decades, a lot will happen in medical technology and biological research. Two years ago I wrote that if you're alive 30 years from now, there's a good chance you can be alive 1000 years from now as well. In short, this is because technological development is accelerating. Information technologies, that is, technologies that make heavy use of computers (computing power), develop exponentially – meaning they improve faster and faster every single year. And medicine has started to become an information technology. Among other things, we can now read and edit genes and 3D-print organs.

If, 15 years from now, people can be repaired well enough that an 80-year-old can become like a 65-year-old, that person has gained 15 extra years. During that time, there will in all likelihood be considerably greater progress in biology and medicine than in the first 15 years, so that 30 years from now it will be possible to repair the body much better than 15 years from now, and that person may perhaps get a body like a 40-year-old's.

These numbers are somewhat arbitrarily chosen, but you probably get the logic.

To avoid dying of aging, you need to reach what is called longevity escape velocity. If your expected lifespan increases by more than one year for each year that passes, you have reached longevity escape velocity.
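
A minimal sketch of the idea, with completely made-up numbers: as long as medical progress adds more than one year of remaining life expectancy per calendar year, your remaining life expectancy never runs out.

    # Made-up illustration of longevity escape velocity
    remaining = 40.0                      # assumed remaining life expectancy today, in years
    for year in range(1, 51):
        remaining -= 1.0                  # one calendar year passes
        remaining += 0.5 + 0.05 * year    # assumed, growing yearly gain from medical progress
        # Once the yearly gain exceeds 1 (here after year 10), remaining life expectancy rises
    print(round(remaining, 1))            # 78.8 -- higher than the starting 40 years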

People who are too old today won't reach longevity escape velocity, while those who are young will have no problem at all reaching it. But there will be some people who are right on the edge of being able to live long enough to benefit from the medical advances that are coming. For these people, living relatively healthily can be absolutely decisive. If they don't, they may die before they're 70, whereas if they actually take action and improve their lifestyle, they may get the chance to live extremely much longer.

But of course, there are no guarantees. You can be unlucky and get a deadly disease tomorrow or be involved in a serious accident. So what you can do is give yourself the greatest possible chance of reaching longevity escape velocity. And considering how much there is to gain, I strongly recommend taking whatever steps you can to reduce your own risk of dying over the next couple of decades.

Is Aging The World’s Biggest Problem?

What’s the world’s biggest problem? Is it possible to say? Well, to be able to compare alternatives, you at least need some criterion to evaluate against, and what should that criterion be?

Here are some possibilities:

  • To what extent something makes humans less happy
  • How much suffering something inflicts on humans
  • How much suffering something inflicts on humans and other animals
  • To what extent something affects the average life-span of humans
  • The cost of the problem in dollar terms
  • Existential risks – something that threatens to wipe out the entire human race

Most of these alternatives focus on humans, and that’s what I’m going to continue to focus on in this article. I think many people – maybe most people – will find that natural. However, I don’t think it’s obvious that the focus should be solely on humans, and you can argue that factory farming may be a bigger problem than any of the problems humans face, and I can’t really say you’re wrong.

It’s also quite hard to put numbers on some of these alternatives, like how much something affects people’s feeling of happiness.

The most straightforward criterion to use, I think, is calculating how many years of life the various problems take away from an average person – how the problems affect the average global life-expectancy.

So, what are the problems that humanity faces? Here are some potential problems, in alphabetical order:

  • Aging
  • Climate change/global warming
  • Diseases
  • Hunger and poverty
  • Meteorite impact
  • Overpopulation/lack of resources
  • Pollution
  • War and terrorism

Climate change/global warming: The number of deaths from natural disasters and extreme weather has decreased substantially over the past century. From 2010 to 2015 the average was about 70,000 deaths annually from these causes. How many of these were caused by the climate being warmer or different now than previously is hard to say. Potentially, the number could even be less than zero, but even if we assume that these deaths were all caused by climate change, average global life-expectancy would only be changed by a tiny fraction of a year due to climate change. It's not impossible that the climate could get more hostile in the future, but with better technology and more people escaping poverty, I don't think it's unrealistic that the number of climate-related deaths will continue to go down.

Non-age-related diseases: Most people are killed by diseases, but most of those diseases are age-related. (Age-related diseases will be included in the discussion about aging below.) About two thirds of all deaths are caused by aging (mainly age-related diseases), so let's assume the remaining third of deaths are caused by non-age-related diseases. That's a small exaggeration, but I'm not aiming to find the exact numbers here. Let's further assume deaths from non-age-related diseases occur on average when people are halfway through their expected life-spans, so when they're about 36 years old – which is probably an underestimate, since the immune system gets weaker as we get older. But if we use these numbers, it means that non-age-related diseases take away about 10-15 years of life per person on average.
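
The back-of-the-envelope version of that estimate, using the rough assumptions above:

    life_expectancy = 71.4                      # average global life-expectancy (cited below)
    non_age_related_share = 1 / 3               # assumed share of deaths not caused by aging
    avg_age_at_death = life_expectancy / 2      # assumed: such deaths occur about halfway (~36 years)
    years_lost_per_victim = life_expectancy - avg_age_at_death
    print(round(non_age_related_share * years_lost_per_victim, 1))  # 11.9 -- in the 10-15 year range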

Hunger and poverty: Hunger, and especially poverty, are big problems. I guess poverty is rarely a direct cause of death, but there’s a big gap in average life-expectancy between rich and poor countries. So indirectly, poverty could be responsible for a lot of deaths. The average life-expectancy world-wide is 71.4 years. In Japan, the country with the highest life-expectancy, the average is 83.7 years. So eradicating hunger and poverty will at most extend the average human life-expectancy by about 12 years. I expect a relatively high percentage of deaths from non-age-related diseases will be avoided if hunger and poverty are eliminated.

Meteorite impact: If the Earth is hit by a large asteroid, the entire human race could be wiped out. If such an asteroid is bound to hit Earth in the next decade or two, there may be nothing we can do about it, but further into the future, we’ll probably be able to protect ourselves from most such impacts. We saw that the average life-expectancy is 71.4 years; the average age of everyone alive is close to 30 years, so an extinction event that wiped out the entire human race would take away a little more than 40 years of life on average for every person on Earth.

Overpopulation/lack of resources: On a global scale, I don't think the Earth is overpopulated. By that I mean we're able to produce enough food for everyone. There are crowded places where people starve, but in those cases the problems have other root causes, such as poverty, lack of education, war and/or bad governance. Even if no one ever died of aging, I don't think overpopulation would be a big issue. The reason is that the number of children born per woman (the fertility rate) goes down as the standard of living goes up, and the fertility rate has more than halved during the last 50 years.

Pollution: Crowded places, especially in poor countries, also tend to be polluted, and local pollution causes a lot of people to die prematurely. In India the average life-expectancy is reduced by 1-2 years due to pollution.

War and terrorism: War and terrorism are still big problems that kill many people; about 100,000 people were killed in 2014. This was a very high number compared with previous years, but even so, it barely impacts the global average life-expectancy.

And then we have aging…
Aging, including age-related diseases, kills about 90% of the population in developed countries, and about two thirds of the population overall.

Due to aging, the risk of any one person dying in the next year increases exponentially over time, doubling about every eight years after age 35. If the risk of dying had stayed constant from the time we were 20 years old, we would live to be a little over 1000 years old on average. In the US, the numbers are actually about 600 years for boys and 1700 years for girls, due to boys being a little more “crazy” than girls…

So aging takes away about 1000 years of life for the average person in developed countries. It also reduces the average life-expectancy significantly in the rest of the world, but to play it very conservatively, let’s ignore that. About a sixth of the global population live in developed countries, so dividing 1000 years by six, we find that curing aging will at least cause the average life-expectancy to increase by 166 years, probably a lot more than that. Aging thus eclipses all the other global problems we’ve discussed, including extinction events,1) so in this sense aging is definitely the world’s biggest problem! 2)
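
The arithmetic behind those two numbers, under the simplifying assumptions above (a constant annual risk of dying, and counting only the roughly one sixth of the world population living in developed countries):

    # With a constant annual risk of dying p, the expected lifespan is about 1/p years.
    annual_risk = 0.001                          # ~0.1 % per year (assumed), consistent with the ~1000-year figure
    lifespan_without_aging = 1 / annual_risk     # about 1000 years
    print(lifespan_without_aging)

    # Very conservative global estimate: credit the gain only to developed countries (about 1/6 of people)
    print(int(lifespan_without_aging / 6))       # 166 extra years of average life-expectancy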

The worst part about aging, I would say, is that it causes people to die – that people's lives are lost forever. But aging also has other very bad consequences, such as disability, pain and suffering. Many old people lose their independence and aren't able to contribute to society. The costs of aging are extreme in terms of money, suffering and years of life lost.

On the positive side, I think aging will be solved in the relatively near future. However, considering the gravity of the problem, I don’t think nearly enough resources are being spent on trying to solve it. Other problems are important to solve, too, and with more than 7 billion people on the planet, we can certainly work on all problems at the same time.

However, I don’t think very many people are aware of how important it is to solve aging and to rejuvenate old people. About 100,000 people die every day from aging and age-related diseases, so if we can speed up aging research so that aging is solved just one day sooner, we could potentially save 100,000 lives! So please consider helping. I’m not a researcher myself, but I know that SENS Research Foundation does a very good job in this area. You can donate to SENS here. Another way to help is to let others know about the problem. As mentioned, far too few people are aware of it.

Do you agree that aging is the world’s biggest problem? Should we use other criteria than average life-expectancy to determine what’s the world’s worst problem? Would that lead to a different result?


1) Extinction events also cause future humans not to be born, and can thus be said to be a bigger problem than aging, although it would take away fewer years of life per person alive today. However, the main reason I would say a meteorite impact or other extinction event is not a bigger problem than aging is that the risk of it happening any time soon is very low.

2) The average life-expectancy could rise further if the world is made safer, something which could be achieved by e.g. self-driving cars and by finding treatments for non-age-related diseases.

Why I’m Not Worried About Climate Change

In Norway, where I live, there's a lot of focus on climate change in the media, and it seems many people have a negative view of the future because of this. I fear this situation might not be unique to Norway.

According to the media, there’s a scientific consensus that humans are the main cause of global warming and climate change, and if we don’t reduce our CO2 emissions drastically, there will be enormous negative consequences.

It is true that human CO2 emissions are causing the amount of CO2 in the atmosphere to rise, and with CO2 being a greenhouse gas, I’d say it’s very likely that this is causing the average global temperature to be higher than it otherwise would be. Most of us can probably agree on that.

Then there’s the question of whether the measured increase in temperature over the last century is caused mainly by human activity or not. I wouldn’t be surprised if the answer to this is that yes, human activity is the main cause. However, scientists might not be as convinced about this as the media is telling us.

One oft-cited paper that supposedly shows that almost all climate scientists think humans are the main cause of global warming is Cook et al. (2013), titled “Quantifying the consensus on anthropogenic global warming in the scientific literature.” Cook et al. went through the abstracts of thousands of climate science articles and, for each abstract, noted what view, if any, was expressed about the role of humans with regard to global warming. The paper itself does not claim that 97 % of climate scientists believe humans are the main cause of global warming, but that's how it's been interpreted, and Cook, the lead author, was co-author of another paper (Bedford and Cook (2013)) which claimed that

Of the 4,014 abstracts that expressed a position on the issue of human-induced climate change, Cook et al. (2013) found that over 97 % endorsed the view that the Earth is warming up and human emissions of greenhouse gases are the main cause.

David Friedman, an economist who checked the numbers and wrote a blog post about it, notes that “John Cook surely knows the contents of his own paper. Hence the sentence in question is a deliberate lie.”

What Cook et al. actually found, based on their own data, was merely that “Of the approximately one third of climate scientists writing on global warming who stated a position on the role of humans, 97 % thought humans contribute [at least] somewhat to global warming.” And it found that the number of abstracts that expressed the view that humans are the main cause of global warming is 1.6 % of those that had stated a position on the role of humans, or 0.5 % of all the abstracts.

These numbers were calculated based on opinions expressed in paper abstracts, and not interviews with the scientists, so the real number is probably higher than 1.6 %, maybe even a lot higher, but that could not be shown based on the data included in the Cook et al. paper.

Whether or not humans are the main cause of global warming is not that important, though: the consequences of global warming will be the same either way.

An important question to ask at this point is whether a higher average global temperature is mostly good or mostly bad. People who worry about climate change tend to focus on the negative effects, while the other side may tend to focus too much on the positive effects. Naturally, there are both positive and negative effects, but quantifying them is hard, so determining whether global warming will be good or bad on net is not feasible.

According to Matt Ridley, the most important positive effects of global warming include “fewer winter deaths; lower energy costs; better agricultural yields; probably fewer droughts; maybe richer biodiversity.” More CO2 in the atmosphere also leads to a greener planet since more CO2 makes plants grow faster. If we also include very low probability events, there’s a theoretical chance that more CO2 in the atmosphere can prevent another ice age.

Another question to ask is whether the measured temperature increase over the last century is correct. Measuring global temperature is harder than you might think, and although we have gotten better at it, the uncertainties in the average global temperature are still quite large, at least ±0.46°C, according to this 2010 paper. According to the paper’s abstract, this means that the rate and magnitude of 20th century warming are unknowable. There probably has been an increase in temperatures, but we cannot know exactly how fast temperatures are rising.

But let’s assume the official numbers are correct and that humans are indeed the main cause of global warming. Should you be worried? I actually don’t think so.

Take a look at this graph from The Economist:

The graph shows that, since 1900, the number of deaths from natural disasters has plummeted. This is extremely good news, and if we take into account that the population has risen by about a factor of 5 over the same period, your chances of dying from natural disasters have decreased five times more than the graph shows.

What might seem worrying at first glance when you look at the graph, though, is the steep rise in the number of natural disasters. However, what's important to realize is that this is the number of reported natural disasters, not the actual number of disasters. I don't think there are many people who actually believe that the annual number of natural disasters was almost zero in 1900, while today it's several hundred. A higher percentage of the natural disasters that actually occur is obviously being reported today than previously, so the graph isn't really telling us anything meaningful about the trend in the number of natural disasters.

What it does hint at, though, is that the number of deaths from natural disasters – like the number of disasters – may have been underreported in the past. If that's the case, today's number of deaths from natural disasters is even lower compared with a century ago.

The reason for this huge increase in safety from natural disasters is that we are richer and more resourceful than in 1900 – our technology is better, so we can more easily protect ourselves from nature.

Technological progress is exponential, and so improves faster every year. This means that if there are no one-off huge disasters like volcanic eruptions, we should expect the number of deaths from natural disasters to continue to decrease in the foreseeable future.

What this also means is that it will be easier – and thus less expensive – to influence the global climate in the future than it is today, since we’ll have more capable technologies in the future.

Reversing global warming could thus be possible and feasible in the future. However, at the present time, doing something about it on a global scale is extremely expensive. So another important question to ask is whether the potential problem of global warming should be solved globally, or locally by adapting to the changes. According to the 50 to 1 project, the global solution is about 50 times more expensive than adapting to a changing climate. They (claim that they) made the calculations based on data that's accepted by the IPCC (Intergovernmental Panel on Climate Change).

Also, bear in mind that global warming will have positive as well as negative effects. If we allow global warming to continue, we will definitely reap some benefits from it. In addition, by adapting instead of trying to stop or reverse global warming, we save trillions of dollars that can be used for other (hopefully better) things: To cure disease and aging, alleviate poverty, clean up the world’s oceans, educate the uneducated, et cetera.

But wouldn’t it be good if renewable energy sources could take over for fossil fuels? And don’t we need politicians to regulate our behavior for that to happen? Actually, we don’t. This is going to happen regardless of politics. Ray Kurzweil, the famous futurist who works on artificial intelligence as a lead engineer at Google, recently gave a speech 1) at a technology conference in Norway. When asked about climate change, he replied:

Well, I mentioned that renewables – solar in particular, but also wind and geothermal – are growing exponentially. If you add them all together, it’s only 5 or 6 doublings from 100 % at about 2 years per doubling. So we’re not far from [them] really being able to provide all of our energy needs, and they’ll be subject to this very significant deflation rate, so it’ll be relatively inexpensive, and the sunlight is free. As I mentioned, we have 10,000 times more [sunlight] than we need to meet all of our energy needs. So that is happening. I’ve talked to people responsible for the, for example, Saudi Arabian fund. Their view is that they have 20 years. I think that’s actually optimistic, but they do realize that that business model of exploiting fossil fuels is not going to last forever.
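
Kurzweil's “only 5 or 6 doublings from 100 %” remark translates into simple arithmetic (the doubling estimates are his, not mine):

    years_per_doubling = 2
    for doublings in (5, 6):
        current_share = 100 / 2 ** doublings            # implied current share of energy needs, in percent
        years_to_100_percent = doublings * years_per_doubling
        print(round(current_share, 1), years_to_100_percent)
    # Prints "3.1 10" and "1.6 12": a ~2-3 % share today, reaching ~100 % in roughly 10-12 years if the trend holds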

That renewables are going to take over for fossil fuels may be hard to believe for many people, but take a look at how the price of solar energy, and the amount of energy we get from solar, have evolved over the past decades:

As the price has come down, the amount of energy we get from solar has gone up. Today, in sunny parts of the world, unsubsidized solar is actually the very cheapest form of energy, cheaper than any fossil fuel, including natural gas and coal. And now that we don’t even have to subsidize solar for it to be competitive, the market forces have taken over and the rise of solar just cannot be stopped!

Although CO2 is taken up by plants and oceans, most of the CO2 we produce will linger in the atmosphere for a very long time, so even if we stop emitting it, the amount of CO2 in the atmosphere might hardly go down at all. My very unscientific opinion is that that's probably not a big problem. But let's assume that it is a problem, as K. Eric Drexler, author of Engines Of Creation – The Coming Era Of Nanotechnology (1986), thinks.

If it does turn out to be a big problem, it will be much cheaper to fix it a few decades into the future than to fix it now, even though the amount of CO2 we have to remove is larger in the future. The reason is, again, that technology advances exponentially. We’ll have much more capable technologies, and so the price of accomplishing these things will also be much lower. Here’s Drexler:

[T]o have the 21st century have a planet that resembles what we've had in the previous human history will require taking the CO2 levels down, and that is an enormous project. One can calculate the energy required – it's huge, the area of photovoltaics required to generate that energy is enormous, the costs are out of range of what can be handled by the world today.

But the prospects with a better means of making things, more efficient, more capable, are to be able to do a project of that scale, at low cost, taking molecular devices, removing molecules from the atmosphere. Photovoltaics produced at low cost to power those machines can draw down CO2 and fix the greenhouse gas problem in a moderate length of time once we pass the threshold of having those technologies […] We now have in hand tools for beginning to build with atomic precision, and we can see pathways from [here] to a truly transformative technology.

Drexler is a nanotechnology pioneer, and what he’s saying is that we’ll be able to use nanotechnology to build machines at the molecular level – with atomic precision. These can be mass produced at extremely low cost, and some of those machines can be designed to capture CO2 from the atmosphere. Capturing enough CO2 to matter from the atmosphere today would be prohibitively expensive, but by using the molecular machines Drexler envisions, it could be done much more cheaply, especially if we have enough almost free solar energy to power the process.

Conclusion

  • Climate change may not be as big a problem as the media is telling us
  • Humans may or may not be the main cause of global warming
  • We cannot know whether the net effect of global warming will be good or bad
  • The number of deaths from natural disasters has plummeted
  • Adapting to climate change is probably a lot less expensive than trying to “fix it” on a global level
  • Renewables are going to take over for fossil fuels during the next two decades

And if climate change turns out to be mostly bad, it will be possible and relatively cheap to remove CO2 from the atmosphere with nanotechnology in the future. Also, the more pressing a problem is, the more resources will be spent on trying to solve it, so, in my opinion, climate change (or global warming) is not a threat to human civilization, and I don’t think you should worry about it.


1) Unfortunately it’s behind a paywall.

In A Hundred Years, There May Be No Politicians

Although the current way of organizing society may seem like the only natural way, democracy and the nation state are relatively recent phenomena, and we shouldn’t take for granted that they will stay with us forever. Most people would probably agree with that, but 100 years isn’t exactly forever… Could it really change that fast?

Society has already changed enormously over the past couple of centuries, and technological progress has been a very important factor in bringing about this change. The rate of technological change has been speeding up until the present day and is expected to continue to accelerate in the foreseeable future.

It’s hard to predict how society will change in the future. It’s probably a lot easier to predict some of the ways in which technology is going to improve. A somewhat correct understanding of the future of technology will at least make it easier to predict societal change than if you don’t have this knowledge.

So let’s start with the future of technology.

Many science fiction movies depict a future in which one specific technology has advanced well beyond today’s capabilities while every other technology has seemingly stagnated at current levels. An example is the movie Passengers, where they’ve had interstellar travel for at least a century, yet they still age and die exactly like people do today… Naturally, it’s extremely unlikely that the future will unfold in this way.

Scientists and engineers all over the world are working hard to make progress in countless areas, all at the same time, and progress in one field may hugely benefit other fields, so more or less the only likely future is one where a wide range of technologies have advanced far beyond current levels. And when we know how fast technology is advancing today, that it's actually accelerating, and that the acceleration is expected to continue in the future, then 100 years suddenly seems like a very long time.

In this article I will assume that Ray Kurzweil is essentially correct in his predictions about the future. Kurzweil is one of the most optimistic futurists in terms of time scales. But he also has a very good track record for his predictions. If you disagree with Kurzweil’s vision of the future you may also disagree with this article’s conclusion that there may be no politicians in 100 years.

Kurzweil’s main prediction is about how fast computer technology is advancing, an aspect of the future he says is amazingly predictable. If computer technology doesn’t advance as fast as he expects, the specific technologies he predicts will likely be similarly delayed.

Kurzweil started making predictions around 1980, and now, in 2017, computers are almost exactly as capable as he predicted back then. That’s a good reason not to dismiss his future predictions out of hand, even though some of them may seem almost too incredible.

Some of Kurzweil’s predictions:

  • Within 10 years, most diseases will be conquered. During the following 10 years, the remaining ones will be conquered as well.
    So, what does that mean? It doesn’t mean that every human on Earth will be cured of their diseases by 2037, but that we will at least have treatments available for almost all diseases (at least 75% of currently untreatable diseases), although some may still be expensive or not approved in every country. Prices should come down rather quickly, though, since medicine is becoming an information technology (a technology that utilizes computers and computing to a large degree).
  • Around 2030 human life expectancy will increase by more than one year per year due to new medical technologies making use of nanotechnology. That’s not just infant life expectancy – that’s your life expectancy. Which means that – barring accidents or armageddon – you may live essentially forever.
    An organization that does really good work in the area of rejuvenation technology is SENS Research Foundation (you can donate here).
  • The nonbiological (artificial) intelligence created in the year 2045 will be one billion times more powerful than all human intelligence today. Many people are fearful of artificial intelligence outpacing human intelligence. However, it won’t be an us versus them scenario. Instead we’ll merge with the “machines”:
  • By around 2035 we’ll have the technology to connect our brains directly to the Internet. This will allow you to get answers to your questions merely by thinking. Several companies, including Elon Musk’s Neuralink, are already working on brain-computer interfaces, though the technology used in the 2030s will be quite different from – and far more advanced than – today’s. In the 2030s we’ll probably have tiny robots – nanobots – in our brains (and bloodstream). These nanobots should be able to communicate with our neurons and with each other, transferring data to and from the brain. It might become possible to download knowledge and skills to the brain in this way. 1)
  • In the 2040s people will spend most of their time in virtual reality. Virtual reality is another technology that will make use of nanobots in the brain. The nanobots could control what signals reach our neurons, so by filtering out data from our real senses and instead providing fake sensory data, the potential is no less than 100% realistic virtual reality. We will also be able to interact with real human beings in virtual reality. This will likely lead to many business meetings taking place in virtual reality and fewer people traveling long distances in the real world.
  • By the 2030s, we’ll have real nanotechnology in the form of atomically precise manufacturing – maybe even before 2030. This year Kurzweil said that “We know exactly how to create the medical nano-robots if we have the enabling factor of being able to do atomically precise manufacturing. That’s coming. We’ll have that by… Well, I’ve been saying the early 2030s. That’s increasingly looking conservative. We may have it in the 2020s.”
    Atomically precise manufacturing means we’ll be able to build physical objects with atomic precision. 3D-printers will be created that can make objects where every single atom is positioned according to a predetermined plan (a computer file).

When Ray Kurzweil says we’ll have some technology by a certain year, he does not mean that the technology will be mature at that time, and it may not yet be widespread. Some of the technologies mentioned will probably not reach maturity until some time in the second half of the century. Still, that’s actually not that many decades away…

Let’s discuss how some of these technologies could contribute to transferring tasks away from governments, and over to the private sector.

  • Aging is cured
    If we can stay healthy, energetic and productive indefinitely, then there’s no need for a retirement age. Without retirement pensions, the state’s largest expense goes away. But don’t worry; increasing automation will lead to lower prices, so you won’t have to work that much anyway. Since it will be cheaper for the state to pay for life extension treatments than to pay out pensions, governments may actually offer life extension treatments free of charge (well, paid for by the taxpayers) to all their citizens some time in the relatively near future.
  • Diseases are cured
    If we’re healthier, then the government’s expenditure on sick pay, Medicare, Medicaid (or similar programs in countries outside the US), and maybe even hospitals (if nanobots can heal us from within) should go down significantly.
  • Brain connected to the Internet and information downloading
    If we can learn and acquire new skills simply by downloading information to the brain, then there’s no real need for schools (as a place to learn), which is another big expense for governments today. This, in combination with biotechnologies that can heal and enhance human beings, could also enable anybody who wants a job to get one. Even if downloading information isn’t that cheap when the technology is relatively new, companies could pay the cost for new employees to learn (download) the skills they need. The reason I think companies would be willing to make that investment is that the cost of downloading knowledge would still be very low in comparison to the added value the employee could provide with the downloaded knowledge.
    If anyone can get a job, then very few people will need charity to make ends meet, which means the amount of money governments would need to spend on a social safety net would go way down. And then the task of helping the needy may even be performed adequately by private charities.
  • Atomically precise manufacturing
    When everyone can make almost any physical object with their own 3D-printer, then global trade will look very different than today. The need to transport things around the world will likely decrease substantially. That means governments aren’t going to get that much revenue from tariffs. Other trade barriers will also lose their importance. It also means that banning or restricting certain items, such as guns, recreational drugs or medical drugs, will be difficult or impossible. I think it’s likely that the laws will change in response to this, so that we don’t continue spending excessive amounts of resources fighting “wars” that just cannot be won, such as the current war on drugs. This should mean less government spending on police services.

I’m not the first person to point out that new technologies are going to impact politics. In a series of articles, libertarian author Onar Åm has explained how several new technologies have disrupted, are disrupting, and will continue to disrupt politics, taking power away from politicians, which should lead to smaller governments. In part five of the series he attempts to explain why these emerging technologies are disrupting politics:

When everyone has affordable access to a technology, it becomes practically impossible to control. People then don’t need to ask politicians to be able to do the things they want. They can just do them. That’s precisely why these technologies are politically disruptive.

Others have said essentially the same thing using the term decentralization. “The 21st century technologies are decentralized rather than centralized”, according to Kurzweil.

Peter Diamandis uses the term democratization. Technologies that become digitized initially grow deceptively slowly, then become disruptive, then demonetized (cheap), then dematerialized (virtual), and finally democratized. (According to Diamandis, these are the 6 Ds of technology disruption.)

Actually, Ray Kurzweil himself recently talked a little about whether technology will end the nation state:

In the video Kurzweil says that nation states, due to the Internet and its globalizing effect, have already become less influential than they used to be, and that he thinks they’re going to continue to get less influential.

He doesn’t say, however, that nation states will go away entirely, but he doesn’t say it won’t happen, either.

So as far as I know, neither Kurzweil, nor Diamandis, nor Åm has said that governments are going to disappear completely, so it’s probably not obvious that that will actually happen. Even David Friedman, who has written an entire book describing how all the tasks now performed by governments could instead be performed privately (in a system called anarcho-capitalism), suspects governments will still exist 100 years from now: 2)

I think predicting that far ahead is hard. The most likely [anarcho-capitalist] scenario, I think, is for anarcho-capitalism online and states still controlling realspace. If enough of life is online, states end up competing for very mobile taxpayers, so are more like landlords than states.

But the world is changing very rapidly due mostly to technological change, so any prediction is more nearly a guess.

So I cannot be sure that governments and politicians will disappear in 100 years. But I do feel quite confident that governments will get significantly smaller towards the end of the century, possibly sooner. And the possibility that they may actually disappear altogether is at least worth consideration, in my opinion.

Bitcoin/blockchain
In addition to Kurzweil’s predictions, there’s one very exciting technology we need to discuss, namely internet money (e.g. Bitcoin) and one of its underlying technologies, the blockchain.

I think the internet is going to be one of the major forces for reducing the role of government. The one thing that’s missing, but that will soon be developed, is a reliable e-cash.

– Milton Friedman

As he explains in this video, Milton Friedman thought that the Internet, in combination with “a reliable e-cash”, would make it harder for governments to collect taxes. He made this statement in 1999. Now, 18 years later, we do have e-cash – in the form of Bitcoin (among others). It may not yet be as reliable as Milton Friedman intended, but we’re getting there.

Bitcoin is a form of internet money or e-cash, also called a cryptocurrency. It’s a decentralized currency – which means that it’s not controlled by a central authority, such as a government, a central bank, or even a company. It’s also peer-to-peer, which means that the two parties to a payment transaction don’t need to involve a trusted third party in order to avoid the double-spending problem. They only need to trust Bitcoin itself (the Bitcoin software).

The total number of Bitcoins in the world is limited and will never exceed 21 million. At the moment almost 80% of all Bitcoins that will ever exist have been created, and after the year 2140, no more new Bitcoins will be created.
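To see why the supply tops out at 21 million, here’s a minimal sketch (in Python) of Bitcoin’s issuance schedule, assuming the standard protocol parameters: an initial block reward of 50 BTC that is cut in half every 210,000 blocks (roughly every four years). The real protocol counts in whole satoshis, so the true cap is actually a hair under 21 million.

```python
# A rough sketch of Bitcoin's issuance schedule (standard parameters:
# an initial block reward of 50 BTC, halved every 210,000 blocks).
# The real protocol counts in whole satoshis, so the actual cap is
# a tiny bit below 21 million BTC.

def total_bitcoin_supply(halving_interval=210_000, initial_reward=50.0):
    """Sum the block rewards over all halving periods until the reward
    drops below one satoshi (0.00000001 BTC)."""
    total = 0.0
    reward = initial_reward
    while reward >= 1e-8:              # stop once the reward rounds to zero
        total += halving_interval * reward
        reward /= 2                    # the reward halves every 210,000 blocks
    return total

print(f"Approximate maximum supply: {total_bitcoin_supply():,.0f} BTC")
# -> Approximate maximum supply: 21,000,000 BTC
```

Because each halving period contributes half as many new coins as the previous one, the total converges to roughly twice the first period’s issuance: 2 × 210,000 × 50 = 21 million.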

Although new Bitcoins are still being created today, demand for Bitcoin has increased at a much faster rate than the supply, so the value of a Bitcoin has risen exponentially (with some ups and downs along the way), and one Bitcoin is currently valued at more than $4000. In contrast, government currencies, like the US dollar, tend to lose value due to government-controlled expansion of the money supply (the more money in circulation, the less one unit of money is worth – all else equal).

Bitcoin is not perfect, though; still in its infancy, it’s far from a mature technology. I’ve heard people compare it to the Internet of the early 1990s. However, it’s also not static. It’s being improved continuously, which is absolutely necessary in order to cope with a growing number of users and transactions. Bitcoin also has many competitors, and if Bitcoin isn’t able to solve a particular problem, another cryptocurrency probably will and could potentially take over from Bitcoin as the biggest cryptocurrency. Billions of dollars are being invested in these technologies. E-cash is definitely here to stay.

Bitcoin is built on top of the blockchain, which is a distributed database that stores every single payment transaction made with Bitcoins. In other words, every Bitcoin transaction ever made is stored forever, and it’s stored on countless computers all around the world, which makes the record extremely hard to tamper with.
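To get a feel for why tampering is so hard, here’s a toy illustration (not Bitcoin’s actual data structures, which also involve Merkle trees, timestamps and nonces): each block’s hash includes the previous block’s hash, so changing any old transaction changes every hash that comes after it, and the altered copy no longer matches the copies held by everyone else.

```python
import hashlib

def block_hash(prev_hash, transactions):
    """Hash a block's transactions together with the previous block's hash."""
    data = prev_hash + "|".join(transactions)
    return hashlib.sha256(data.encode()).hexdigest()

# Build a toy chain of three blocks (transactions are just placeholder strings).
chain = []
prev = "0" * 64  # conventional all-zero hash for the very first block
for txs in (["Alice->Bob: 1"], ["Bob->Carol: 0.5"], ["Carol->Dave: 0.2"]):
    prev = block_hash(prev, txs)
    chain.append((txs, prev))

# Tampering with an early transaction produces a different hash for that block,
# which no longer matches the "previous hash" baked into every later block -
# so the many other copies of the chain immediately expose the forgery.
tampered_hash = block_hash("0" * 64, ["Alice->Bob: 1000"])
print(tampered_hash == chain[0][1])  # False: the change is detectable
```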

These properties – that everything is stored and can’t be changed – can be very useful in other areas too, not just for money and payments.

However, it may be hard to implement a successful application using the blockchain without also including a cryptocurrency in the implementation, since an incentive is needed to make people invest the computing power necessary to validate transactions. For Bitcoin it’s possible to create new Bitcoins in a process called mining, a digital equivalent to mining gold. When people mine Bitcoin, they’re simultaneously validating transactions, so the Bitcoins they receive as a reward function as an incentive to validate those transactions.
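As an illustration of that incentive mechanism, here’s a heavily simplified proof-of-work sketch – the nonce search below is a stand-in for Bitcoin’s real difficulty target and block format. Miners burn computing power searching for a nonce that makes the block’s hash meet a target, and the block reward is what pays for that effort.

```python
import hashlib

def mine(block_data, difficulty=4):
    """Search for a nonce whose hash of (block_data + nonce) starts with
    `difficulty` zero hex digits - a toy stand-in for Bitcoin's difficulty target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

# The miner who finds a valid nonce gets to publish the block and claim the
# reward; anyone else can verify the answer with a single hash.
nonce, digest = mine("validated transactions for block 1")
print(nonce, digest)
```

Finding the nonce is expensive, but checking it takes a single hash – that asymmetry is why honest validation can be rewarded and cheating is easy to detect.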

Keeping this in mind, the blockchain can also be used to keep track of who owns what (ownership). It can be used to store contracts – in Ethereum some types of contracts (“smart contracts”) can even be executed automatically when some condition is met. Ethereum, by the way, has implemented its own currency, Ether. It would also be possible to store things like birth certificates, wills, and who is married to whom on the blockchain, just to name a few examples.
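For the “executed automatically when some condition is met” idea, here’s a purely conceptual sketch in Python. Real smart contracts run on the Ethereum Virtual Machine and are typically written in languages like Solidity; the class and names below are hypothetical and only meant to show the shape of the idea.

```python
# A toy, Python-only stand-in for the smart-contract idea: code (plus funds)
# that executes automatically once an agreed condition is met. The class and
# names below are hypothetical and purely illustrative.

class EscrowContract:
    def __init__(self, buyer, seller, amount_eth):
        self.buyer = buyer
        self.seller = seller
        self.amount_eth = amount_eth
        self.delivered = False
        self.paid = False

    def confirm_delivery(self):
        """Record that the agreed condition (delivery) has been met."""
        self.delivered = True
        self._maybe_execute()

    def _maybe_execute(self):
        # As soon as the condition holds, the payment runs automatically -
        # no third party needs to approve or trigger it.
        if self.delivered and not self.paid:
            self.paid = True
            print(f"Releasing {self.amount_eth} ETH from {self.buyer} to {self.seller}")

contract = EscrowContract("alice.eth", "bob.eth", 1.5)
contract.confirm_delivery()  # condition met -> the transfer executes
```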

Today, most governments are heavily involved in these areas, and in some countries this works well, but in other countries where the government may be more corrupt, it unfortunately does not. With the new blockchain technology, people in poor third world countries might finally stop having to worry about their property being confiscated and given to someone else, since, if ownership is logged on the blockchain, they can prove that they’re the owner. Secure property rights are extremely important for economic development, so this could actually translate into many more people escaping poverty much faster.

Bitcoin can also provide banking services without a bank, and Bitcoin and the blockchain technology might have a bigger impact in developing countries than in the developed countries – at least in the short term, and especially if transaction fees can be kept low enough. Also, people in countries where the currency experiences runaway inflation will have a great incentive to start using Bitcoin.

Some anarchists believe that Bitcoin and blockchain technology on their own are going to bring an end to the nation state. I’m not sure how many non-anarchists share that belief, so it may be just wishful thinking. Personally, I believe we’re going to see the end of governments at some point, but the reason, I think, will be a combination of Bitcoin/blockchain and the other technologies discussed above.

Jeff Berwick is one anarchist who believes that Bitcoin and blockchain will bring an end to governments:

It goes way beyond just money. Money is important enough. This could be where everything is based, and there’s so much innovation going on now. There’s a company called BitNation, for example, and they’re trying to, essentially, put governance on the Blockchain, so all contracts, all property deeds, everything would be on the Blockchain. They’re actually starting in Africa, because many of these countries never had a sort of a system of private property, and a way to have a good system to tell who owns what, and that’s why they’ve had problems for so many years, or even centuries. But they’re trying to put that into place.

People start using these sort of systems very quickly. There becomes really no need for government whatsoever, I think there’s no need for it now. But for people who think there is some need for it still, contracts, adjudication and things like that, that can all be taken out by these blockchain technologies, so that’s why you’re seeing billions of dollars actually going into this right now. Many people don’t realize it. This is sort of where the Internet was in 1994, is where Bitcoin is in 2015/2016. And I was there for the start of the Internet, so I remember. It’s very similar. Your average person still doesn’t know what a big deal is going on behind the scenes, but this is going to revolutionize the world. It’s not just going to be Bitcoin. In fact, who knows about Bitcoin? It could be gone in a couple of years because the innovation is happening so fast, and there’s going to be so many things built on top of this technology, that it’s going to change the world.

But what if governments also start using Bitcoin or other cryptocurrencies? Maybe they issue their own cryptocurrencies? Could Bitcoin still be a force for smaller governments?

I think Yes. Bitcoin and other non-governmental alternatives would still be available, in competition with the state currencies. That probably means that the inflation rate for the government currencies can’t be that high, or else more people will switch to e.g. Bitcoin, which is getting more valuable over time – not less.

So if I can choose between money that appreciates in value and money that loses value, all else equal, that’s an easy choice. But all else isn’t going to be equal, so many people are going to choose government currencies over Bitcoin anyway. Still, the bigger the difference in inflation/deflation, the more people are going to choose Bitcoin.

So it’s not unreasonable to think that the state, in competition with Bitcoin, will have to lower its inflation rate – create less new money – if it wants people to continue using its currency. And if the government can’t create as much money as it would like, it will lose some of its power – or options. I’m thinking in particular of the option to fight wars in faraway countries, which is arguably the worst and most brutal thing states do.

War is extremely expensive, and if you or I were asked to pay directly so that our government could wage war in the Middle East, we would probably say no thanks if we could, or we would protest if taxes were raised to finance the war. But the way wars are actually financed is by creating – or printing – money, which is much less visible to most people. In other words, wars are financed by decreasing the value of the currency. If Bitcoin, as a competitor to government currencies, leads to less inflation in those currencies, that could make it harder for governments to go to war, which could mean fewer wars and a more peaceful future.

 

Are there any tasks that, no matter how technologically advanced we become, we absolutely need governments to perform? According to the minarchist libertarian position, there should be a government, but it should be limited to protecting the rights of the individual, so it should be responsible for police, courts and national defence, and nothing else. However, both policing and arbitration are already provided by private companies today in addition to governments, so it’s not such a long stretch to imagine that they could be completely privatized. And in the future – when the world is much wealthier and even more globalized than today, when governments may be really small, when people can get almost everything they need in a decentralized way, and when governments can’t finance wars with inflation – who has anything to gain from war? I think world peace is achievable this century, and then, who needs a military?

So for me, a much more logical end point than minarchy is anarchy, where there is no government at all. And although it may sound terrible to most people right now, I think you won’t mind it if/when that future world arrives.

Does anything point towards bigger governments?
OK, I’ve talked mostly about things that are pointing towards smaller governments in the future. But we should also consider the opposite. What factors could lead to bigger governments, or at least slow the decrease in size? I see two main possibilities in the short to medium term:

  • Universal basic income
    The fear of robots taking over many jobs that are today performed by humans has made the idea of a universal basic income more popular, with some countries and organizations doing experiments with basic incomes. I wouldn’t be surprised if it’s implemented more broadly in the coming decades. If so, I hope it will be a replacement for existing government welfare programs, not an addition to them.
    If a basic income is implemented as a replacement for the existing welfare programs, that could open up a market for private welfare (since some people will still need more help than a basic income can provide), with competition leading to better solutions than today’s. If that works out well, then in the longer term, government welfare (including basic income) might not be necessary, since it would be taken care of by the private sector and since fewer people would need help, as discussed above.
  • Government pays for rejuvenation treatments
    Since at some point it will be cheaper for governments to pay for rejuvenation treatments than to pay old-age pensions, I don’t think it’s unlikely that they will offer to cover rejuvenation treatments in exchange for not paying out pensions. Even though it’s the cheaper option, covering most people’s rejuvenation treatments will still be a significant cost – at least at first. If there’s enough competition despite governments’ involvement in the rejuvenation market, prices will continue to fall over time, and once rejuvenation treatments become affordable for almost everyone, the need for governments to pay will also go away.

I’m sure there are several other important factors that could point toward bigger governments, but the way I see it, far more aspects of future technology point towards smaller governments than towards bigger ones. But we’ll see. Hopefully, both you and I will be alive to see the actual outcome in 2117.

Do you think governments will disappear? What are your best arguments that we will or will not have politicians and governments in 100 years?


1) I’m not quite sure about the timeframe for downloading knowledge and skills. In his 2005 book The Singularity Is Near, Kurzweil writes: “The nature of education will change once again when we merge with nonbiological intelligence. We will then have the ability to download knowledge and skills, at least into the nonbiological portion of our intelligence. […] We don’t yet have comparable communication ports in our biological brains to quickly download the interneuronal connection and neurotransmitter patterns that represent our learning […] a limitation we will overcome in the Singularity.”

Downloading knowledge and skills instantly into our biological brains is a technology that will be developed in the second half of this century, according to Kurzweil’s earlier book, The Age Of Spiritual Machines (1999). However, Nicholas Negroponte (cofounder of the MIT Media Lab and co-creator of Wired Magazine) has predicted that in less than 30 years you may take a pill, which dissolves, goes to the brain via the bloodstream, and alters the brain in just the right ways so that you may learn some topic – the examples he used were English and Shakespeare.

2) From an email conversation, quoted with permission.