Why Everyone Should Be A Climate Skeptic

The Hockey Stick Illusion, Climategate, The Media’s Deliberate Deception And Other Reasons For Optimism

Many people believe the media is exaggerating the dangers of human CO2 emissions. After reading this post, you’ll have a greater understanding of why that is.

As we’ll see, there’s been a lot of dishonesty in the climate debate. This applies not only to the media, but also to some high-profile researchers.

Hockey stick temperature graph

I’m going to tell you a bit about the temperature graph above. This may sound boring, but it’s actually a pretty incredible story, and you might be a little shocked. The graph’s underlying studies were used as references to say that 1998 was the warmest year in the last 1000 years and that the 1990s was the warmest decade. The graph has been called the hockey stick graph due to its shape. It was included six times in the Third Assessment Report of the IPCC. (IPCC stands for the Intergovernmental Panel on Climate Change — it’s a part of the UN). The graph was also included in their 4th Assessment Report, from 2007. The main author of the graph’s underlying studies is Michael Mann. As we’ll see, the studies are weak and have important flaws.

In this post, I’m also going to quote some private emails sent to and/or from scientists at the Climatic Research Unit (CRU). CRU is a component of the University of East Anglia in Norwich, UK. With about 15 employees, it’s not a big institution. It is, however, considered to be one of the leading institutions concerned with climate change.

The reason I’ve been able to quote private emails is that on November 19, 2009, about 1000 emails were leaked from the CRU. This episode has been called Climategate. Here you can read parts of the most sensational emails from the first round of Climategate, along with comments to give context. In 2011, another 5,000 emails leaked from CRU were released to the public. 1)

I’ll use the words alarmist and skeptic to describe two common opinions held by scientists and the public about CO2 and policy measures. Although the words aren’t necessarily the most neutral, they are widely used and easy to understand. I define them as follows:

  • Alarmist: Thinks CO2 will lead to dangerous global warming and wants political action. (Please feel free to suggest a better word than alarmist.)
  • Skeptic: Believes alarmists exaggerate the dangers of CO2 and doesn’t want political action.

As you may have guessed, I am a skeptic. I may thus tend to rely too much on skeptical arguments and too little on arguments from alarmists. If you find factual errors, please let me know, so I can correct them.

My main focus will be on CO2 as a contributor to global warming. This is the media’s main focus as well, and the IPCC also believes CO2 is the most important factor:

Carbon dioxide (CO2) represents about 80 to 90% of the total anthropogenic forcing in all RCP scenarios through the 21st century.


Consensus?

We often hear that 97% of climate scientists believe that climate change is man-made. But what does that really mean? Does this mean that almost all climate scientists believe humans are:
a) the main cause of climate change? Or
b) a contributor to climate change, but not necessarily the main cause?

A couple of years ago, I gave a short presentation on the 97% consensus to some of my colleagues at work. Before I started the presentation, I asked this question. Everyone in the audience (about 25 people, all engineers) answered alternative a — that the scientists believe humans are the main cause of climate change. Maybe you’ve been under the same impression yourself?

Although the media usually doesn’t say this explicitly, it’s easy to get that impression. But it’s wrong. It is true that the IPCC believes that humans are the main cause. From the Summary for Policymakers in their most recent assessment report from 2013/2014:

It is extremely likely [95-100% certain] that human influence has been the dominant cause of the observed warming since the mid-20th century.

But there seem to be no good studies confirming that agreement on this among climate scientists is as high as 97%. Usually, a 2013 meta-study by John Cook and others is used as a reference for the claim. The study is entitled “Quantifying the consensus on anthropogenic global warming in the scientific literature”, but is usually referred to as just “Cook et al. (2013)”.

Cook et al. (2013) assessed the abstracts of almost 12,000 research papers. These papers were all published between 1991 and 2011 and contained the text “global warming” or “global climate change”. The articles were categorized based on the authors’ expressed views on whether, and to what extent, humans contribute to global warming or climate change — for example, whether humans contribute, don’t contribute, or contribute at least 50% (i.e. more than all other factors combined). There was also a category for articles that didn’t explicitly say that humans contribute, but where this was implied.

Cook et al. (2013) found that about two thirds of the papers didn’t express an opinion about the extent of the human contribution to climate change. However, of the remaining one third of papers, 97% of abstracts expressed the view that humans do contribute to global warming or climate change. But only 1.6% of these remaining papers clearly stated that humans were the main cause. The fact that so few paper abstracts explicitly stated that humans are the main cause of global warming or climate change was not mentioned in Cook et al. (2013). But the raw data can be found under Supplementary data, so verifying this is relatively easy. 2)
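
For those who want to check this themselves, here is a minimal sketch of how such a tally could be done. It rests on my own reading of the rating scheme (endorsement level 1 meaning explicit endorsement quantified as more than 50% human-caused, 2–3 weaker endorsement, 4 no position, 5–7 rejection); the values and data layout below are placeholders, not the actual supplementary data.

```python
# Minimal sketch of tallying endorsement levels as in Cook et al. (2013).
# Assumed level meanings (my assumption; check against the paper's raw data):
#   1 = explicit endorsement, quantified as >50% human-caused
#   2 = explicit endorsement, not quantified
#   3 = implicit endorsement
#   4 = no position / uncertain
#   5-7 = implicit or explicit rejection
from collections import Counter

# Placeholder ratings; in practice these would be parsed from the raw data file.
levels = [4, 3, 2, 4, 1, 4, 3, 5, 4, 2]

counts = Counter(levels)
with_position = sum(c for lvl, c in counts.items() if lvl != 4)
endorsing = sum(c for lvl, c in counts.items() if lvl in (1, 2, 3))

print("Endorsing, among papers taking a position:", endorsing / with_position)
print("Explicitly >50% human-caused:             ", counts[1] / with_position)
```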

In another article from 2013, on which Cook was a co-author (Bedford and Cook 2013), the results of Cook’s meta-study were misrepresented. In that article, we find the following sentence:

Of the 4,014 abstracts that expressed a position on the issue of human-induced climate change, Cook et al. (2013) found that over 97 % endorsed the view that the Earth is warming up and human emissions of greenhouse gases are the main cause.

Notice that it says main cause, but this does not agree with the raw data from Cook et al. (2013).

Although agreement among researchers and climate scientists that human activity is the main cause of climate change isn’t as high as 97%, a clear majority still believes this. I’ll return to this topic in two later footnotes (footnotes 40 and 41).

Here’s a video which, in addition to criticizing Cook et al. (2013), also criticizes other consensus studies:

Background

Earlier this year, I received a Facebook message from Martha Ball, Tim Ball’s wife (probably after she’d seen a post I wrote about Cook’s consensus study). Tim Ball is a well-known climate skeptic, and has been sued by Michael Mann for defamation. Mann is, as mentioned in the introduction, the main author of the hockey stick graph studies. Mann’s studies have received a lot of criticism, and as we’ll see, some of the criticism is highly justified.

The reason for the lawsuit against Tim Ball was that Ball had said:

Michael Mann at Penn State should be in the state pen [penitentiary], not Penn State [Pennsylvania State University].

After more than eight years in the Canadian court system, Tim Ball won the lawsuit in 2019 on the grounds that Mann did not seem interested in moving the case forward. Mann was ordered to pay legal fees — $100,000, according to Martha Ball. Mann, on the other hand, claims he was not ordered to pay Ball’s legal fees, but the court transcript seems to make it clear that he was:

[19] MR. SCHERR [counsel for Tim Ball]:  I would, of course, ask for costs for the defendant, given the dismissal of the action.

[20] MR. MCCONCHIE [counsel for Michael Mann]:  Costs follow the event. I have no quarrel with that.

[21] THE COURT:  All right. I agree. The costs will follow the event, so the defendant will have his costs of the application and also the costs of the action, since the action is dismissed.

In 2018, the program series Folkeopplysningen on NRK (a Norwegian TV channel) had a climate episode where they showed something similar to Mann’s hockey stick graph. It showed how the global average temperature had changed from about 20,000 BCE to the present; back then it may have been more than 4 degrees colder than today. The graph showed no rapid changes until the 20th century, when it suddenly shot straight up. I didn’t quite understand how they could know the temperature so accurately so far back in time, and ever since I watched the program I’d wanted to know more about the topic, but it never reached the top of my priority list. Now that I’d received a message from Martha Ball, I thought it might be a good opportunity to finally learn about it. So I asked if she could point me to some good learning resources about the hockey stick graph. She recommended googling McIntyre and McKitrick. And so I did.

I then found the book The Hockey Stick Illusion — Climategate and the Corruption of Science, written by Andrew Montford. The book is a detailed, exciting and shocking story about how the Canadian mining consultant Stephen McIntyre tries to reproduce and verify Michael Mann’s hockey stick graph based on Mann’s published studies.

I’ll soon give a summary of the hockey stick graph story. The summary is based on the following interview from 2019 (which I’ll call the SoundCloud interview), where McIntyre tells his version of the story:

I definitely recommend reading or hearing more detailed retellings than mine. According to Ross McKitrick, McIntyre’s co-author, The Hockey Stick Illusion is the best place to start if you want to learn about the hockey stick. Matt Ridley, author of The Rational Optimist, has also recommended the book, calling it one of the best science books in years. I can also recommend McIntyre and McKitrick’s presentation to a National Academy of Sciences expert panel in 2006 (also recommended by McKitrick).

There was also a very good article, Kyoto Protocol Based on Flawed Statistics, in the Dutch magazine Natuurwetenschap & Techniek in 2005. The article covers many of the same topics as I do, but in more detail. The author is Marcel Crok.

Short intro to tree rings and terminology

Before I start, I’ll briefly explain some words that are important for the story. This should make it easier to follow along.

We only have direct temperature measurements with thermometers from about 1850. To estimate temperatures on Earth before then, we have to rely on one or more proxies — more indirect temperature indicators. The most common proxy in Mann’s hockey stick studies (and the one most relevant to this story) is tree rings. Under certain conditions, and for certain types of trees, a tree will grow more in a warm year than in a cold year. Based on the tree ring widths, it’s thus possible to estimate, within some uncertainty range, the temperature at the place where the tree grew.

How the tree rings of a single tree have changed over time (from year to year) is called a tree ring series.

Temperature isn’t the only factor affecting the growth of trees. Samples are thus gathered from several trees at each geographical location. Preferably, 10 or more trees should be sampled, and usually many more are. The combined tree ring results for all these trees are called a tree ring chronology. The temperature can be estimated more precisely from a chronology than from a single tree.

Mann had used a relatively advanced statistical method to reconstruct the Earth’s temperature (for the Northern Hemisphere) from the proxies. His studies were the first temperature reconstruction studies to use this method, called principal component analysis (PC analysis, or just PCA). Principal component analysis itself wasn’t new, but its application in the context of temperature reconstructions was.

If you have a lot of proxy data (for example many tree ring chronologies) within a small geographical area, you can merge the chronologies in the area using principal component analysis. This results in a more even geographical distribution of proxies across the world (or hemisphere). 3)

An important early step in the calculation of principal components is centering all the tree ring widths in a chronology around 0. This is achieved by subtracting the average tree ring width from each individual tree ring width in the chronology. As we’ll see, Mann had used a slightly different method…

The hockey stick illusion

(Footnotes provide supplementary information. Click on the footnote number to navigate to the footnote, click the number again to navigate back.)

For McIntyre, it all started when he received a brochure in the mail (1:13 in the SoundCloud interview):

Well, I got interested in climate through sheer bad luck, I think. In 2002, the Canadian government was promoting the Kyoto treaty, and as part of their promotion they sent a brochure to every household in Canada, announcing that 1998 was the warmest year in a thousand years, and the 1990s was the warmest decade, and I wondered, in the most casual possible way, how they knew that.

McIntyre found that the claim that the 1990s were the warmest decade of the last 1,000 years came from two studies, published in 1998 and 1999.

The 1998 study had attempted to reconstruct the average temperature in the Northern Hemisphere for the last 600 years. The study from 1999 extended the time interval by 400 years. It thus showed the reconstructed temperature for the last 1000 years (also for the Northern Hemisphere).

The studies are often just referred to as MBH98 4) and MBH99 5), from the first letter of the authors’ last names and the year the study was published. The authors are, for both studies: Michael Mann, Raymond Bradley and Malcolm Hughes. Mann is the main author. The story, as I present it here, is mostly about MBH98.

McIntyre sent an email to Mann asking where he could find the data that MBH98 was based on. Mann had forgotten where the data was, but replied that his colleague (Scott Rutherford) would locate it. Rutherford said the data wasn’t all in one place, but he would get it together for McIntyre. A couple of weeks later, McIntyre was given access to the data on an FTP website. McIntyre thought it was strange that no one seemed to have asked for the data before. Had no one audited such an influential study?

But now that they’d taken the trouble to find the data for him, McIntyre felt obligated to investigate further. In the beginning, he had no particular goal in verifying/auditing Mann’s studies — he saw it more as a kind of big crossword puzzle to be solved.

There were some things in the data McIntyre had received that didn’t make sense, 6) and it was difficult to reproduce Mann’s results since the exact procedure was not described in Mann’s studies. McIntyre therefore sent a new email to Mann asking whether the data was correct. Mann would not answer the question and made it clear that he didn’t want to be contacted again: “Owing to numerous demands on my time, I will not be able to respond to further inquiries.”

McIntyre then published his first article with Ross McKitrick and reported on the problems he had found. The article is entitled Corrections to the Mann et. al. (1998) Proxy Data Base and Northern Hemispheric Average Temperature Series, but is usually referred to as MM03. Again, the letters are the authors’ last names and the digits are the year of publication. In MM03‘s abstract, McIntyre and McKitrick summarize the most important errors they found in Mann’s study:

The data set of proxies of past climate used in [MBH98] for the estimation of temperatures from 1400 to 1980 contains collation errors, unjustifiable truncation or extrapolation of source data, obsolete data, geographical location errors, incorrect calculation of principal components and other quality control defects.

Without these errors, the graph’s hockey stick shape disappeared:

The particular “hockey stick” shape […] is primarily an artefact of poor data handling, obsolete data and incorrect calculation of principal components.

Ross McKitrick, McIntyre’s co-author, is (and was) an economics professor. He had modeled a CO2 tax in his doctoral dissertation, and McIntyre had seen him talking about the Kyoto Protocol on TV. They also both lived near Toronto, Canada. So when McIntyre contacted McKitrick, it was both to have someone verify his results and to have someone to publish together with. Regarding the latter, McKitrick’s experience with publishing scientific articles made him a good match.

Mann’s response to McIntyre and McKitrick’s first article was that they had used the wrong data set. According to Mann, the correct data was available on the FTP website McIntyre had been given access to. But it turned out the data was on another FTP website, which McIntyre had not had access to; he was now given access to it. Mann also said that McIntyre and McKitrick should have contacted him when they found problems with the study (which McIntyre had in fact done).

Mann further explained that Rutherford had made some mistakes when he put the data together. 7) And it is probably true that McIntyre had received the wrong data. From The Hockey Stick Illusion:

It looked very much as if the version of pcproxy.txt that Rutherford had sent [to McIntyre] had been originally prepared for Rutherford’s own paper. In preparing these figures, he seemed to have introduced errors into the database – the same errors that had alerted McIntyre to the possibility that there were serious problems with MBH98.

So some of the errors McIntyre found in the beginning may have been introduced by Rutherford (who was Mann’s assistant). Thus, MBH98 probably did not contain all the errors McIntyre pointed out in MM03. Admittedly, McIntyre could not know this. It was Rutherford who had made the mistake, and Mann hadn’t checked when McIntyre asked if the data was correct.

McIntyre and McKitrick now realized that to be able to reproduce Mann’s results, they’d need access to Mann’s computer code. McIntyre wrote another e-mail to Mann, requesting to see the code. Again, Mann was uncooperative:

To reiterate one last time, the original data that you requested before and now request again are all on the indicated FTP site, in the indicated directories, and have been there since at least 2002. I therefore trust you should have no problem acquiring the data you now seek.

In his reply, Mann once again explained that he wasn’t interested in responding to further emails from McIntyre.

After this, many new files and directories appeared on the new FTP site. McIntyre had previously clicked around so much on this site that Mann had practically accused McIntyre of breaking into the server 8). So it was clear that the new files and directories hadn’t been available before – at least there had been no easy way to find them. McIntyre’s theory was that a robots.txt file had prevented these files and directories from being indexed by search engines. And since there were no links to them, there was no way to find them unless one knew the exact web address.

Among the files that appeared was a file containing the computer code to calculate the principal components. It appeared Mann had used a slightly unconventional way of calculating principal components. This was in spite of the fact that MBH98 stated that “conventional principal components analysis” had been used. Instead of using a suitable programming language (such as R) or an existing code library for the calculation, Mann had written the algorithm himself (in Fortran).

In standard principal component analysis, data is centered before doing further calculations. Centering is done by subtracting the average of the whole series from each individual value. The average of the resulting series is thus zero. This can be illustrated as follows:

Unrealistic example of two tree ring series. Left: Original data. Right: Both series are centered, that is, shifted down so that they lie as much above 0 as below 0. The image is composed of two screenshots from The Hockey Stick Illusion. (The x axis is time, the y axis is tree ring width. The example is unrealistic because tree ring widths vary much more from year to year than this.)

But Mann had subtracted a different average. Instead of subtracting the average for the whole period, he had subtracted the average for the 20th century. This led to the resulting temperature graph very easily getting a sharp uptick in the 20th century: 9)

Unrealistic example of two tree ring series. One has an uptick in the 19th century. The other has an uptick in the 20th century. Left: Original data. Middle: Correctly centered. Right: Incorrectly centered with Mann’s algorithm. When the series are centered correctly, the flat parts of the curves are at the same level (close to zero). When centered with Mann’s algorithm, the flat part of the curve is too far down for curves with a 20th century uptick. Curves that deviate a lot from 0 (either above or below) are given greater weight in the final temperature reconstruction. Mann’s erroneous algorithm thus has a tendency to create temperature reconstructions with a hockey stick shape. This even happens with random data (random walk/red noise). The picture is composed of three screenshots from The Hockey Stick Illusion.
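
To see why this matters, here is a minimal sketch (in Python, not Mann’s Fortran code) of the effect described in the caption above. It generates persistent random noise containing no climate signal, computes the first principal component with standard centering and with centering on only the final “calibration” period, and measures how hockey-stick-shaped the result is. The 600-year length, 100-year calibration period and noise parameters are illustrative choices, not the values from MBH98.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_series, calib = 600, 50, 100   # the final 100 years play the role of the 20th century

# Red noise (AR(1)) series containing no temperature signal at all.
series = np.zeros((n_years, n_series))
for j in range(n_series):
    for t in range(1, n_years):
        series[t, j] = 0.9 * series[t - 1, j] + rng.normal()

def first_pc(data, centering):
    if centering == "full":
        centered = data - data.mean(axis=0)            # standard centering
    else:
        centered = data - data[-calib:].mean(axis=0)   # "short" centering on the calibration period
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]                            # scores of the first principal component

def hockey_stick_index(pc):
    # How far the calibration-period mean sits from the full-series mean, in standard deviations.
    return abs(pc[-calib:].mean() - pc.mean()) / pc.std()

print("HSI with full centering: ", hockey_stick_index(first_pc(series, "full")))
print("HSI with short centering:", hockey_stick_index(first_pc(series, "short")))
```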

As a result, the trees in an area of California with a 20th century uptick were given almost 400 times greater weight than the trees in another area of the United States without a similar uptick in the 20th century. In fact, removing the trees from this one area (and possibly one more 10)) from Mann’s data resulted in a temperature reconstruction where the 15th century had higher temperatures than those at the end of the 20th century — even when using Mann’s erroneous algorithm! 11) And then one could no longer claim that 1998 was the warmest year in a thousand years.

Dashed line: The hockey stick graph from MBH98. Solid line: How the graph would have looked with correct principal component analysis and corrected data (including removal of bristlecone pines and Gaspé cedar trees). Screenshot from Ross McKitrick’s article What is the ‘Hockey Stick’ Debate About? from 2005. McIntyre and McKitrick don’t argue that their graph gives a correct picture of the temperature over the last 600 years, only that Mann’s hockey stick graph is not correct based on the proxies MBH98 chose to use. There’s also a large uncertainty in the reconstructed temperature before 1750.

The wider tree rings of these trees in the 20th century were not only due to temperature. Mann acknowledges this as well in the second hockey stick study (MBH99):

A number of the highest elevation chronologies in the western U.S. do appear, however, to have exhibited long-term growth increases that are more dramatic than can be explained by instrumental temperature trends in these regions.

Donald Graybill had taken the tree ring samples. He and Sherwood Idso published an article (in 1993) in which they hypothesized that the large increase in tree ring widths for these trees could be a direct effect of the increased CO2 content in the atmosphere (CO2 fertilization). 12)

To determine how reliable the temperature reconstruction in MBH98 was, Mann had calculated verification statistics 13), and these were included in MBH98. The verification statistics they had used were RE 14) and R2 15), but R2 was only shown for the period after 1750, which McIntyre thought was strange.

Here, R2 is the squared correlation between the reconstruction and the actual temperature measurements in the verification period. It lies between 0 and 1, and the higher, the better: a value of 1 means the reconstruction tracks the measurements perfectly, while a value near 0 means there is essentially no relationship. In the SoundCloud interview, McIntyre says that 0.4 or 0.5 would be good. But it turned out that for periods prior to 1750, R2 ranged from 0.00 to 0.02 in Mann’s study. This means one can have very little confidence in the study’s results for periods before 1750. It wasn’t strange, then, that Mann didn’t wish to publish R2 for the earlier time periods, although, of course, he should have done so.
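
As a concrete illustration of what these verification statistics measure, here is a small sketch with made-up numbers (they are not from MBH98). It computes the verification R2 as the squared correlation between a toy reconstruction and “observed” temperatures, and the RE statistic relative to simply predicting the calibration-period mean (taken to be 0 here for simplicity).

```python
import numpy as np

# Made-up verification-period values, purely for illustration.
observed      = np.array([0.10, 0.05, 0.20, 0.15, 0.30, 0.25, 0.40, 0.35])
reconstructed = np.array([0.12, 0.02, 0.18, 0.20, 0.28, 0.22, 0.30, 0.38])

# Verification R2: squared correlation between reconstruction and observations.
r2 = np.corrcoef(observed, reconstructed)[0, 1] ** 2

# RE (reduction of error): improvement over always predicting the calibration-period
# mean (assumed to be 0 here). RE can be negative if the reconstruction does worse.
calibration_mean = 0.0
re = 1 - np.sum((observed - reconstructed) ** 2) / np.sum((observed - calibration_mean) ** 2)

print(f"verification R2 = {r2:.2f}, RE = {re:.2f}")
```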

In 2005, McIntyre and McKitrick published a thorough critique of MBH98 in Energy & Environment (MM05). In it, they wrote that Mann had probably calculated R2 for the earlier time periods as well. 16) If so, these verification statistics should have been published in the study. And since they weren’t, the missing R2 values for the earlier time periods should at least have been caught by the peer review process. Unfortunately, that didn’t happen.

The article in Energy & Environment earned McIntyre a front-page article in The Wall Street Journal 17). This, in turn, led to the House Energy and Commerce Committee asking Mann if he had calculated R2 and what the result was. Mann failed to answer whether he had calculated R2, but said that R2 wasn’t a good verification statistic and that he hadn’t used it. The committee also asked Mann to publish the source code for the study, which he did. The source code showed that Mann had in fact calculated R2 for the earlier time periods, as McIntyre had assumed.

So it was quite obvious that Mann knew that his study wasn’t particularly good. That the study wasn’t good is further supported by one of the leaked Climategate emails. (Quotes with blue text are from the Climategate emails.) Tim Osborn, the current CRU director, had asked Mann for some intermediate results (residuals) that could be used to calculate R2 without going through all previous steps in the calculation. Mann sent this to Osborn, but warned that it would be unfortunate if it came out. Mann wrote:

p.s. I know I probably don’t need to mention this, but just to insure [absolute clarity] on this, I’m providing these for your own personal use, since you’re a trusted colleague. So please don’t pass this along to others without checking w/ me first. This is the sort of “dirty laundry” one doesn’t want to fall into the hands of those who might potentially try to distort things…

Tom Wigley, former CRU director, also knew Mann’s study was poor. In an email to Phil Jones, Wigley wrote:

I have just read the [McIntyre and McKitrick] stuff critcizing MBH. A lot of it seems valid to me. At the very least MBH is a very sloppy piece of work — an opinion I have held for some time.

In a comment on his own blog (ClimateAudit), McIntyre has written two examples of what Mann should have written as a disclaimer to his study:

Readers should be aware that the reconstruction contained herein badly fails many standard cross-validation tests, including the R2, CE, sign test and product mean test, some of which are 0. Accordingly, the apparent skill of the RE statistic may be spurious and the reconstruction herein may bear no more relationship to actual temperature than random numbers. Readers should also be aware that the confidence intervals associated with this reconstruction may be meaningless and that the true confidence interval may only be natural variability.

Readers should be aware that the reconstruction contained herein cannot be replicated without the use of bristlecone pines. Some specialists attribute 20th century bristlecone pine growth to nonclimatic factors such as carbon dioxide or other fertilization or to nontemperature climate factors or to a nonlinear response to temperature. If any of these factors prove to be correct, then all portions of the reconstruction prior to [the year] 1625 will be invalidated.


I’d like to also mention that McIntyre and McKitrick are not the only ones who’ve criticized MBH98/99. One other criticism relates to the extensive use of tree rings in the studies. Tree rings are better suited for finding year-to-year temperature variations than for finding temperature trends over longer periods of time:

[W]hile tree rings are excellent at capturing short frequency variability, they are not very good at capturing long-term variability.

– James Speer, Fundamentals of Tree-Ring Research (pdf)

Further recommendations

In my summary of the hockey stick story, I’ve focused on some of the most important flaws in Mann’s studies and the fact that Mann has been uncooperative. But this is only one part of the story. The story is also about how the IPCC broke its own rules so that the hockey stick graph could also be used in its fourth assessment report. 18) Andrew Montford, author of The Hockey Stick Illusion, wrote a blog post about this before he started writing the book.

The video below is also about the hockey stick graph. It explains how the hockey stick graph ended up in IPCC’s third assessment report in 2001, and how data was removed from a tree ring study that showed decreasing tree ring widths in the 20th century. Tree ring widths are, as we remember, used as a proxy for temperature. (Decreasing tree ring widths in a time of rising temperatures would cast doubt on the validity of tree rings as a proxy for temperature.) The story is backed up by Climategate e-mails, including Phil Jones’ famous 1999 email, where he writes:

I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) [and] from 1961 for Keith’s to hide the decline. 19)

Another video that may be of interest is the video below. It’s about the leaked Climategate emails and the subsequent British Climategate inquiries. The presenter is Andrew Montford, author of The Hockey Stick Illusion. (Despite the video’s German title, the talk is in English). Montford has also written a report on the inquiries.

The last video I’ll recommend for now is a talk by McIntyre himself. In the video, McIntyre discusses some problems with the tree ring chronologies used in Mann’s studies (and other similar studies): 20)

Peer review

Peer review is used by scientific journals to determine whether to publish a paper or study submitted to the journal. The study will be forwarded to a small number of experts in the study’s subject area, who’ll assess its quality. The experts usually aren’t paid extra for peer reviewing studies, and they typically spend around one working day on the review.

On October 27, 2009, Phil Jones sent an e-mail in which he wrote:

The two Canadians she refers to [McIntyre and McKitrick] have never developed a tree-ring chronology in their lives and McIntyre has stated several times on his blog site that he has no aim to write up his results for publication in the peer-review literature.
I’m sure you will be of the same opinion as me that science should be undertaken through the peer-review literature as it has been for over 300 years. The peer-review system is the safeguard science has developed to stop bad science being published.

It’s somewhat ironic that Phil Jones says McIntyre should publish in peer-reviewed journals, since Jones and Mann did everything they could to actually prevent McIntyre, McKitrick and other skeptics from publishing in peer-reviewed journals.

In 2003, a study critical of Mann’s temperature reconstruction was published in Climate Research. The authors were Willie Soon and Sallie Baliunas. Following the study’s publication, Mann wrote:

The Soon & Baliunas paper couldn’t have cleared a ‘legitimate’ peer review process anywhere. That leaves only one possibility–that the peer-review process at Climate Research has been hijacked by a few skeptics on the editorial board.
[…]
There have been several papers by Pat Michaels, as well as the Soon & Baliunas paper, that couldn’t get published in a reputable journal.
This was the danger of always criticising the skeptics for not publishing in the “peer-reviewed literature”. Obviously, they found a solution to that–take over a journal! [Emphasis added]

Mann and other climate scientists with connections to CRU had apparently been able to control a lot of what was published in peer reviewed climate journals. They’d managed to keep most skeptic papers out of the peer-reviewed literature. It was thus easy to meet criticism — they could simply ask why the skeptics didn’t publish in the peer-reviewed literature. As an example of this, see Mann’s response to a journalist who had referred to a skeptic article:

Professional journalists I am used to dealing with do not rely upon unpeer-reviewed claims off internet sites for their sources of information. They rely instead on peer-reviewed scientific research, and mainstream, rather than fringe, scientific opinion.

How did they achieve this level of gatekeeping control? One explanation is that there was almost always at least one person from this group of climate scientists who was asked to peer review new climate studies in their field. When they recommended against publishing a study, their advice was usually taken.

It can also, to some extent, be explained in a simpler way: As Judith Curry writes, researchers — like most other people — tend to more easily trust conclusions that agree with what they themselves believe to be true. Review comments on papers whose conclusions run contrary to the reviewer’s own views will thus naturally be more critical. If a large majority of scientists in a field hold similar opinions, it becomes more difficult for other opinions to get through the peer-review process.

Some researchers who failed to get their study published in climate journals simply gave up.

The climate scientists at CRU also wanted to discredit Soon and Baliunas. Tom Wigley writes:

Might be interesting to see how frequently Soon and Baliunas, individually, are cited (as astronomers).
Are they any good in their own fields? Perhaps we could start referring to them as astrologers (excusable as … ‘oops, just a typo’).

Here is an example of how scientists on the alarmist side expected someone on their side to be asked to peer-review new studies: The day before McIntyre and McKitrick’s first paper (MM03) was published in Energy & Environment, and before Mann had seen the paper, Mann wrote in an email:

My suggested response is:
1) to dismiss this as [a] stunt, appearing in a so-called “journal” which is already known to have defied standard practices of peer-review. It is clear, for example, that nobody we know has been asked to “review” this so-called paper [Emphasis added]
2) to point out the claim is nonsense since the same basic result has been obtained by numerous other researchers, using different data, elementary compositing techniques, etc. Who knows what sleight of hand the authors of this thing have pulled. Of course, the usual suspects are going to try to peddle this crap. The important thing is to deny that this has any intellectual credibility whatsoever and, if contacted by any media, to dismiss this for the stunt that it is..

Here we also see Mann having a clear opinion about the study before actually reading it… Mann wrote the above after receiving an e-mail (with unknown sender) stating that MM03 would soon be published. The unknown person writes:

Personally, I’d offer that this was known by most people who understand Mann’s methodology: it can be quite sensitive to the input data in the early centuries. Anyway, there’s going to be a lot of noise on this one, and knowing Mann’s very thin skin I am afraid he will react strongly, unless he has learned (as I hope he has) from the past….”

If you’d like to read more about how Mann, Jones and others discussed peer review internally, check out this online book on Climategate and search for “email 1047388489”. (The emails quoted in the book are commented on by the author.) Following Climategate, The Guardian also wrote an article about how climate scientists, including at the CRU, tried to prevent skeptical papers from being published.

When Phil Jones writes “The peer-review system is the safeguard science has developed to stop bad science being published”, it’s ironic also because the peer review process failed to prevent Mann’s flawed hockey stick studies from being published.

As noted, Phil Jones wrote the email I quoted from above (first quote under the section on peer review) in late 2009 — just a few weeks before Climategate. The IPCC’s most recent assessment report was published in 2013 and 2014, not that many years later. As Phil Jones pointed out in an email, IPCC only considers studies that are peer-reviewed:

McIntyre has no interest in publishing his results in the peer-review literature. IPCC won’t be able to assess any of it unless he does.

So it’s perhaps not surprising that the IPCC concludes that most of the warming since 1950 is human-caused. And when the IPCC says it’s at least 95% certain about this, that may overstate how certain it should be, given the artificially low number of skeptical papers in the peer-reviewed literature. 21)

In general, peer review is no guarantee that a study is correct. And this is something scientists are aware of, wrote Charles Jennings, former editor of Nature, on Nature’s peer review blog in 2006:

[S]cientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.

As mentioned, the amount of time normally spent on peer review is limited. When Stephen McIntyre was asked to review an article for Climatic Change, he was prepared to do so diligently and asked the editor, Stephen Schneider, for access to the study’s underlying data. Schneider then said that in the 28 years he’d been editor of Climatic Change, no one had previously requested access to data. McIntyre told this in the SoundCloud interview I embedded earlier (7:39), where he went on to say:

[Schneider] said, “If we required reviewers to look at data, I’d never be able to get anyone to review articles”. I think the important point is for people unfamiliar with the academic journal process to understand the level of due diligence, and what is a relevant level of due diligence. I’ve come to the opinion that it’s unreasonable to expect unpaid reviewers to […] do a full audit of an article — it’s too much work, and people aren’t going to do that.

Since peer review won’t uncover all the flaws in a study, it’s important to also have other processes that can help uncover them. And generally, such processes do exist. For example, the results of a study may not agree with another scientist’s own assumptions, leading them to suspect errors. In that case, they may look more closely at the study and write a separate paper criticizing it. The new paper then also needs to get through the peer review process before it can be published in a scientific journal.

In the field of climate science, there were many scientists interested in writing critical papers. But since the alarmists managed to keep so many critical papers out of scientific journals, normal quality assurance was lost. 22) Additionally, it was difficult for skeptics to get access to the data behind the alarmists’ studies. 23) These things help explain how Mann’s study, despite its many flaws, ended up in an IPCC assessment report. And not just as one study among many, but as the most prominent study!

Who can we trust?

In the climate debate, we have two sides with wildly opposing views. And it’s difficult for someone outside the field to know who’s right and who’s wrong. Either side probably has some good arguments. But when I learn that someone has deliberately deceived the public, as Michael Mann did by withholding important information, and as John Cook also did 24), at least I know I can’t trust them. Even if a lot of what they say is true, it’s difficult to tell truth from falsehood, so, to me, their words carry less weight.

From the Climategate emails, it’s clear that there are others on the alarmist side, too, whom one shouldn’t trust completely. (Of course, you shouldn’t trust everyone on the skeptic side either.) As an example, Tom Wigley, former CRU director, wrote in an email to Phil Jones, then director of CRU:

Here are some speculations on correcting [sea temperatures] to partly explain the 1940s warming blip.
If you look at the attached plot you will see that the land also shows the 1940s blip (as I’m sure you know). So, if we could reduce the ocean blip by, say, 0.15 [degrees Celsius], then this would be significant for the global mean — but we’d still have to explain the land blip.
I’ve chosen 0.15 here deliberately. This still leaves an ocean blip, and I think one needs to have some form of ocean blip to explain the land blip[.]
[…]
It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.

Here, the two CRU directors weren’t quite happy with the temperature data and discussed how they could change it.

Another example: In 1995, after Science had published a borehole study he’d written, David Deming received an e-mail from a person in the climate science community. Deming doesn’t say who the person is, but describes them as a major figure in the field. Deming writes:

They thought I was one of them, someone who would pervert science in the service of social and political causes. So one of them let his guard down. A major person working in the area of climate change and global warming sent me an astonishing email that said “We have to get rid of the Medieval Warm Period.” [Emphasis added]

It’s been speculated that the person who sent the email was Jonathan Overpeck. Overpeck was Coordinating Lead Author for the chapter on paleoclimate (climate of the past) in IPCC’s 4th assessment report from 2007. In an email to Phil and Mike (probably Phil Jones and Michael Mann), Overpeck wrote that he couldn’t remember having sent such an e-mail to Deming. But he also didn’t entirely rule out the possibility that he had written something similar.

Deming mentioned the Medieval Warm Period (MWP), also called the Medieval Climate Anomaly. This refers to a period from about the year 1000 to 1300 when temperatures are assumed to have been higher than at the end of the 20th century. I don’t know whether temperatures really were higher then, but for some it was clearly important to be able to say that Medieval temperatures were not higher than today’s.

As you may recall, the errors in MBH98 led to their temperature reconstruction showing a significantly lower temperature around the year 1400 than what their data actually indicated. 25) And the subsequent study, MBH99, which reconstructed temperatures back to the year 1000, showed no Medieval Warm Period for the Northern Hemisphere. This may have been exactly the result Mann wanted.

These examples seem to show that several scientists with connections to CRU wanted to convince politicians and others that human-caused global warming is happening and is dangerous. That the data didn’t always support their views was less important. The scientists may have had doubts, but those were rarely communicated to the public.

Are RealClimate and SkepticalScience unbiased websites?

Michael Mann is one of several people who started the website realclimate.org. In 2006, Mann sent an e-mail explaining that they wouldn’t approve just any comment; the skeptics shouldn’t be allowed to use the RealClimate comment section as a “megaphone”:

Anyway, I wanted you guys to know that you’re free to use [RealClimate] in any way you think would be helpful. Gavin [Schmidt] and I are going to be careful about what comments we screen through, and we’ll be very careful to answer any questions that come up to any extent we can. On the other hand, you might want to visit the thread and post replies yourself. We can hold comments up in the queue and contact you about whether or not you think they should be screened through or not, and if so, any comments you’d like us to include.

You’re also welcome to do a followup guest post, etc. think of [RealClimate] as a resource that is at your disposal to combat any disinformation put forward by the McIntyres of the world. Just let us know. We’ll use our best discretion to make sure the skeptics [don’t get] to use the [RealClimate] comments as a megaphone…

And in a 2009 e-mail:

Meanwhile, I suspect you’ve both seen the latest attack against [Briffa’s] Yamal work by McIntyre. 26) Gavin [Schmidt] and I (having consulted also w/ Malcolm [Hughes]) are wondering what to make of this, and what sort of response—if any—is necessary and appropriate. So far, we’ve simply deleted all of the attempts by McIntyre and his minions to draw attention to this at RealClimate.

John Cook (from the 97% consensus study) started the website skepticalscience.com. SkepticalScience is a website that intends to refute arguments from skeptics. 27)

SkepticalScience seems to be a popular website with a very good search engine ranking (at least with Google). A few years ago, I was looking into the consensus argument. SkepticalScience was then one of the first sites I came across. And according to them, several studies showed there was near 100% agreement that climate change was man-made. I was almost convinced, but have since learned that the studies don’t say what SkepticalScience wanted to convince me of.

Although a lot of what SkepticalScience and RealClimate write is correct, it’s also clear that they aren’t unbiased websites. But since their message agrees well with what we hear in the media, it’s only natural that many people think they’re not biased. It’s easier to be skeptical of information that contradicts what we regularly hear. And being skeptical is a good thing. It’s also important to be aware that SkepticalScience and RealClimate want their readers to be convinced that global warming is man-made and dangerous, just as much as most skeptics want to convince us otherwise.

So who can we trust? We can probably trust many people, but one person I trust is Stephen McIntyre. (That doesn’t mean everything he says is true.) Although he’s criticized Mann’s studies and other tree ring studies quite strongly, he hasn’t “chosen a side” in the climate debate. He consistently emphasizes that nothing he has found is in any way proof that global warming isn’t taking place, and that determining the value of the climate sensitivity is the important scientific question. (I will soon get back to the topic of climate sensitivity.) In the introduction to a talk he gave on the role of Climategate in the hockey stick graph story, he said (3:59):

[K]eep in mind that nothing that I say tonight proves or disproves global warming. Nor does climate science as a whole stand or fall on proxy reconstructions. If we do nothing about tree rings, we would still be obliged to assess the impact of doubled CO2. As a final preamble, there’s far too much angriness in my opinion on both sides of the debate. People are far too quick to yell “Fraud” at the other side. And I think such language is both self-indulgent and counterproductive. I don’t apply these labels myself, I don’t permit them at ClimateAudit, and don’t believe they serve any purpose. That doesn’t mean you can’t criticize authors — I do so all the time and will do so tonight, but any point you make should be able to be made on the facts rather than the adjectives.

And in his ending remarks (36:31):

I started my comments with caveats, and I’ll close with some more. The critical scientific issue, as it has been for the past 30 years, is climate sensitivity, and whether cloud and water cycle feedbacks are strongly positive or weakly negative or somewhere in between. This is the territory of Lindzen, Spencer, Kininmonth and Paltridge at this conference, and I urge you to listen to what they have to say. But also keep an open mind, because many serious scientists don’t agree with them and stand behind standard estimates of climate sensitivity of doubled CO2 in perfectly good faith.
[…]
If I were a politician, regardless of what I felt personally, I would also take scientific guidance from official institutions rather than what I might think personally, either as an occasional contributor to academic journals or as a blogger. Although, knowing what I know now, I would try as hard as I possibly could, to improve the performance and accountability of these institutions.

Here’s the full talk:

Can we trust science?

We may think of scientists as selfless truth-seekers, but according to Terence Kealey, author of The Economic Laws of Scientific Research, it’s a myth that scientists will try to falsify their own theories. They want their theories published and don’t mind taking shortcuts in the use of statistical methods:

One problem is that scientists are much less scientific than is popularly supposed. John Ioannidis […] has shown […] that the poor application of statistics allows most published research findings to indeed be false[.]

There is a perverse reason that scientists use poor statistics: career progression. In a paper entitled “The Natural Selection of Bad Science,” Paul Smaldino and Richard McElreath of the University of California, Merced, and the Max Planck Institute, Leipzig, found that scientists select “methods of analysis … to further publication rather than discovery.” Smaldino and McElreath report how entire scientific disciplines — despite isolated protests from whistleblowers — have, for more than half a century, selected statistical methods precisely because they will yield publishable rather than true results. The popular view is that scientists are falsifiers, but in practice they are generally verifiers, and they will use statistics to extract data that support their hypotheses.

In light of this, the hockey stick scandal makes a little more sense.

So the answer to whether we can trust science is both yes and no. Individual studies aren’t necessarily correct or good even if they’re published in peer-reviewed journals. For a scientific field to be trustworthy, a diversity of opinion within the field is advantageous, so that there is always someone with an incentive to try to find flaws in new studies. One such incentive is simply disagreeing with a study’s conclusions. There are many scientists with this incentive within climate science. Instead of being ignored, they should be made use of in the process of determining what is good and what is bad science.

Climate sensitivity

As Stephen McIntyre said in the quotes earlier, the most important scientific question when it comes to CO2 is the value of the climate sensitivity: How sensitive is the global average temperature to changing CO2 levels?

There seems to be a general agreement (also among many skeptics) that an exponential rise in CO2 level will lead to a linear rise in temperature. This means that every doubling of the atmosphere’s CO2 concentration will cause a constant temperature rise. So if a doubling of CO2 from today’s 420 parts per million (ppm) to 840 ppm leads to a 2°C temperature rise, a doubling from 840 ppm to 1680 ppm will also cause a 2°C temperature rise. The climate sensitivity is then said to be 2 degrees.
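
In other words, the warming from a CO2 increase grows with the logarithm of the concentration ratio. Here is a tiny sketch of that arithmetic, using the 2-degree sensitivity from the example above (the function name is mine):

```python
import math

def warming(c_new_ppm, c_old_ppm, sensitivity_per_doubling=2.0):
    # Each doubling of CO2 adds the same number of degrees (the climate sensitivity).
    return sensitivity_per_doubling * math.log2(c_new_ppm / c_old_ppm)

print(warming(840, 420))    # 2.0 degrees for a doubling from today's level
print(warming(1680, 840))   # another 2.0 degrees for the next doubling
print(warming(560, 280))    # 2.0 degrees for a doubling from pre-industrial levels
```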

However, climate sensitivity may also vary depending on Earth’s state and temperature. This is due to changes in feedback effects as the Earth warms or cools. In practice, the climate sensitivity probably won’t change much this century. (I’ll explain what feedback effects are shortly.)

When climate scientists talk about the climate sensitivity, they usually talk about the temperature rise caused by a doubling of CO2 from pre-industrial levels, so from 280 to 560 ppm.

Climate sensitivity comes in a few different flavors, including:

  • Equilibrium Climate Sensitivity (ECS), which is the temperature rise that a doubling of CO2 will lead to in the long term — that is, when, finally, a new equilibrium between CO2 and temperature is reached. The full temperature rise won’t be realized immediately. The biggest temperature changes will come early, but reaching the new equilibrium could take more than 1000 years.
  • Transient Climate Response (TCR), which is the temperature rise that a doubling of CO2 will have caused at the time when CO2 has doubled. The assumption is that the CO2 level increases by 1% per year. With an increase of 1% per year, one doubling takes about 70 years. (TCR will always be lower than ECS.)

Feedback effects can cause the temperature to rise more or less than it would without them. Higher temperatures lead to more water vapor in the atmosphere, which leads to a further rise in temperature, since water vapor is a strong greenhouse gas. This is an example of a positive feedback effect. Higher temperatures could also lead to more low-level clouds that reflect more sunlight. This is an example of a negative feedback effect. Without feedback effects, the equilibrium climate sensitivity, ECS, would be about 1°C. 28)
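
The roughly 1°C no-feedback figure can be sanity-checked with a back-of-the-envelope calculation. The sketch below is my own rough estimate, not a number from the text: it combines the commonly used simplified forcing formula for CO2 with the Planck response alone (about 3.2 W/m² of extra outgoing radiation per degree of warming).

```python
import math

forcing_2x = 5.35 * math.log(2)   # ~3.7 W/m2 of radiative forcing per CO2 doubling
planck_response = 3.2             # W/m2 of extra outgoing radiation per degree of warming

# Warming needed to balance the forcing when no other feedbacks operate.
print(forcing_2x / planck_response)   # roughly 1.2 degrees
```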

If we’d like to determine how much the temperature will rise as a result of more CO2 in the atmosphere towards the year 2100, it’s more relevant to look at TCR than ECS. 29) However, a 1% annual increase in CO2 concentration (which is assumed in the climate sensitivity TCR) is probably unrealistic (even without new policy measures being introduced). In the last 10 years, atmospheric CO2 increased on average 0.6% per year. From 2000 to 2010 it increased about 0.5% per year.
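
How much slower a doubling arrives at these observed growth rates is easy to check. The small sketch below is just my own arithmetic on the growth rates mentioned above:

```python
import math

for rate in (0.01, 0.006, 0.005):   # 1%, 0.6% and 0.5% per year
    years = math.log(2) / math.log(1 + rate)
    print(f"{rate:.1%} annual growth -> one CO2 doubling takes about {years:.0f} years")
```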

There’s a lot of uncertainty about the exact value of the climate sensitivity. This is due to uncertainty in feedback effects. In its 2013 assessment report, the IPCC stated that TCR likely has a value between 1.0 and 2.5 degrees. 1.0 degrees is relatively unproblematic. 2.5 degrees, on the other hand, can be more challenging. For ECS, the IPCC stated a likely range of 1.5 to 4.5 degrees. This is identical to the so-called Charney range dating all the way back to 1979 (3 degrees ± 1.5 degrees).

Climate models are important in the IPCC’s calculation of climate sensitivity. 30) Climate models are advanced computer simulations of the Earth’s climate. They’re based on physical laws, other knowledge about the Earth’s climate processes and expected emissions of greenhouse gases in various scenarios. Since we don’t have perfect knowledge about all climate processes, some approximations are also used. These are called parameterizations. The start time for a simulation can, for example, be the year 1850 and the end time the year 2100. Climate models are run on supercomputers, and one run can take several months. According to the same IPCC assessment report, climate models calculated that the climate sensitivity, ECS, was in the range of 2.1 to 4.7 degrees. 31) This was slightly higher than the IPCC’s final estimate (1.5 to 4.5 degrees).

But from 2012, as explained in the video above, a number of studies have been published where climate sensitivity is calculated almost entirely from data and instrumental observations. These studies have all concluded that ECS is well below 3 degrees — most of them estimate the most probable ECS value to be in the range of 1.5 to 2 degrees.

Further in the video, we’re told that a study from 2018 is particularly important since it uses IPCC’s own data sets to calculate climate sensitivity (8:48):

This 2018 paper by Nicholas Lewis and Judith Curry, published in the American Meteorological Society’s Journal of Climate, is particularly important because it applies the energy balance method to the IPCC’s own datasets, while taking account of all known criticisms of that method. And it yields one of the lowest ECS estimates to date, 1.5 degrees, right at the bottom of the Charney range.

According to the study, the most likely TCR value is 1.20 degrees. (More accurately, it’s the median value of TCR that’s 1.20 degrees; the median for ECS is 1.50 degrees.) The study’s main author, Nic Lewis, has written a guest post about the study on ClimateAudit, McIntyre’s blog.
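
For the curious, the energy-budget method used in studies like Lewis and Curry (2018) boils down to two simple ratios. The sketch below shows the standard formulas for this type of estimate; the input numbers are placeholders of roughly the right magnitude, not the actual values used in the study.

```python
forcing_2x = 3.7    # W/m2 of forcing per CO2 doubling
delta_T = 0.8       # observed warming between a base period and a final period (deg C)
delta_F = 2.5       # change in total radiative forcing between the periods (W/m2)
delta_Q = 0.5       # change in the Earth's heat uptake, mostly into the oceans (W/m2)

tcr = forcing_2x * delta_T / delta_F               # transient climate response
ecs = forcing_2x * delta_T / (delta_F - delta_Q)   # equilibrium climate sensitivity
print(f"TCR ~ {tcr:.2f} degrees, ECS ~ {ecs:.2f} degrees")
```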

In another study, Knutti et al. (2017), the authors compiled an overview of various climate sensitivity studies and categorized them based on whether they’re observation-based (historical), based on climate models (climatology), or based on past temperature changes (palaeo). Knutti et al. (2017) confirms that observation-based studies find a lower value for the climate sensitivity than studies that use other methods.

Knutti et al. do not agree, however, that one should put more weight on the observation-based studies. 32)

But Nic Lewis definitely thinks so. He and Marcel Crok have written a report, Oversensitive — How the IPCC Hid the Good News on Global Warming (long version here). In it, they argue that one should trust the observation-based estimates for climate sensitivity more. Their conclusion:

[W]e think that of the three main approaches for estimating ECS available today (instrumental observations, palaeoclimate observations, [and climate model] simulations), instrumental estimates – in particular those based on warming over an extended period – are superior by far.

According to Lewis and Crok, there is a great deal of uncertainty associated with estimates of climate sensitivity based on past temperature changes. And climate models won’t necessarily simulate future climate correctly even if they’re able to simulate past temperature changes well. Lewis and Crok write:

There is no knob for climate sensitivity as such in global climate models, but there are many adjustable parameters affecting the treatment of processes (such as those involving clouds) that [climate models] do not calculate from basic physics.

Climate sensitivities exhibited by models that produce realistic simulated climates, and changes in climatic variables over the instrumental period, are assumed to be representative of real-world climate sensitivity. However, there is no scientific basis for this assumption. An experienced team of climate modellers has written that many combinations of model parameters can produce good simulations of the current climate but substantially different climate sensitivities. [Forest et al. (2008)] They also say that a good match between [climate model] simulations and observed twentieth century changes in global temperature – a very common test, cited approvingly in the [IPCC assessment report 4] as proving model skill – actually proves little.

Models with a climate sensitivity of 3°C can roughly match the historical record for the global temperature increase in the twentieth century, but only by using aerosol forcing values that are larger than observations indicate is the case, by underestimating positive forcings, by putting too much heat into the oceans and/or by having strong non-linearities or climate state dependency. [Paragraphs added to improve readability]

In the report, Lewis and Crok also point out errors in some of the observation-based studies that found a high climate sensitivity. 33)

So while it’s not entirely certain that climate sensitivity is low — and most climate scientists will probably not agree that it is — Lewis and Crok’s argument indicates there’s at least a very good chance that it is.

The video below is a presentation given by Nic Lewis, where he talks about climate sensitivity:

Business-as-usual CO2 emissions

As we’ve seen, to be able to calculate how much temperatures will rise as a result of human CO2 emissions, we need to know what the climate sensitivity is. In addition, we need to know how much CO2 and other greenhouse gases we’re going to emit.
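To make the arithmetic concrete, here’s a back-of-envelope sketch in Python. It’s only an illustration: it uses the standard logarithmic forcing approximation (warming per doubling of CO2), ignores non-CO2 forcings and ocean lag, and the concentrations are made-up example numbers rather than a forecast.

    import math

    def co2_warming(sensitivity_per_doubling, c_start_ppm, c_end_ppm):
        """Very rough estimate: warming = sensitivity x number of CO2 doublings."""
        doublings = math.log2(c_end_ppm / c_start_ppm)
        return sensitivity_per_doubling * doublings

    # Example: a TCR-like sensitivity of 1.35 degrees and CO2 going from 400 to 560 ppm
    print(round(co2_warming(1.35, 400, 560), 2))  # about 0.66 degrees
    # The same CO2 increase with a sensitivity of 2.5 degrees roughly doubles the answer
    print(round(co2_warming(2.5, 400, 560), 2))   # about 1.21 degrees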

In 2007, IPCC asked the climate research community to create greenhouse gas emission scenarios for the remainder of this century. The emission scenarios are named “RCP”, followed by a number. The higher the number, the higher the emissions.

The four main scenarios are RCP2.6, RCP4.5, RCP6.0 and RCP8.5.

RCP8.5 is the most pessimistic of the four scenarios. It’s a scenario in which no new policy measures are introduced to limit emissions. This is often used as a business as usual scenario, but according to Moss et al. (2010), it’s not intended as a prediction or forecast for the most likely CO2 emissions path. 34) Still, that’s how it’s been interpreted by many.

Climate scientist Zeke Hausfather writes about RCP8.5 in a comment in Nature:

RCP8.5 was intended to explore an unlikely high-risk future. But it has been widely used by some experts, policymakers and the media as something else entirely: as a likely ‘business as usual’ outcome. A sizeable portion of the literature on climate impacts refers to RCP8.5 as business as usual, implying that it is probable in the absence of stringent climate mitigation. The media then often amplifies this message, sometimes without communicating the nuances.

Hausfather has also, along with Justin Ritchie, looked at the International Energy Agency (IEA)’s 2019 World Energy Outlook report. Ritchie writes that the IEA’s forecasts for CO2 emissions up to the year 2040 are far below the emissions assumed by RCP8.5:

IEA scenarios are a more realistic projection of the global energy system’s current ‘baseline’ trajectory; showing we are far from RCP8.5 [and] RCP6.0. World currently tracking between RCP4.5 [and] lower climate change scenarios – consistent with 1.5˚ to 2.5˚C [warming this century].

– Twitter-thread (Here’s an article that goes into more detail, written by Hausfather and Ritchie)

So they write that the warming this century will be 1.5 to 2.5 degrees. This assumes a higher climate sensitivity than the observation-based studies suggest.

In 2014, Lewis and Crok calculated that if the climate sensitivity TCR is 1.35 degrees, it will lead to a temperature rise of 2.1 degrees between 2012 and ca 2090 in the pessimistic RCP8.5 scenario. In a more realistic scenario than RCP8.5, the temperature will rise less:

On the RCP6.0 scenario and using the observational TCR-based method, total warming in 2081–2100 would still be around the international target of 2°C, with a rise of 1.2°C from 2012 rather than the 2°C rise projected by the [climate models].

As we’ve seen, Nic Lewis later adjusted the estimate for TCR down to 1.20 degrees, which would mean slightly less warming even than this.
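As a quick sanity check, if we assume the projected warming scales roughly linearly with TCR (a simplification, not Lewis and Crok’s own calculation), revising TCR from 1.35 to 1.20 trims the estimates above by about 11%:

    old_tcr, new_tcr = 1.35, 1.20
    print(round(new_tcr / old_tcr, 2))        # about 0.89
    print(round(2.1 * new_tcr / old_tcr, 2))  # the RCP8.5 figure: roughly 1.87 instead of 2.1 degrees
    print(round(1.2 * new_tcr / old_tcr, 2))  # the RCP6.0 figure: roughly 1.07 instead of 1.2 degrees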

And according to the IEA, it seems we’ll stay well below the emissions in RCP6.0 as well, maybe even below RCP4.5.

Accelerating technological progress

Even though I haven’t always been as familiar with the science as I am now, I’ve never been worried about global warming. One reason for this is that technological progress is accelerating — improving faster every year.

About 40 years ago, Ray Kurzweil discovered that computing power was improving in a surprisingly predictable way.

The picture above is from Kurzweil’s 2005 book The Singularity Is Near, which I read in 2010. It shows the most computing power (measured in operations per second), typically achieved with supercomputers, that you could get for $1000 (inflation-adjusted) in different years between 1900 and 2000. It also shows how Kurzweil expects this price-performance to develop going forward. His forecast is simply that the historical exponential trend will continue.

The y-axis is logarithmic. A straight line thus means exponential growth, where the price-performance doubles at regular intervals. But the graph bends upwards, so Kurzweil expects the growth in computing power to be even faster than this. In practice, this means each doubling in computing power will happen at shorter and shorter time intervals.
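To make the log-axis point concrete, here’s a toy calculation (the growth rates are purely illustrative and are not Kurzweil’s numbers):

    # With a constant doubling time, growth is a straight line on a log scale.
    # If the doubling time itself shrinks, the curve bends upward.
    def doublings_after(years, initial_doubling_time, shrink_per_year=1.0):
        doubling_time = initial_doubling_time * shrink_per_year ** years
        return years / doubling_time

    print(doublings_after(40, 2))                  # 20.0 doublings at a constant 2-year doubling time
    print(round(doublings_after(40, 2, 0.97), 1))  # about 67.6 doublings if the doubling time shrinks 3% per year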

So far, Kurzweil’s prediction has proven quite accurate.

With the extreme computing power we’ll have access to later this century, our technological capabilities will also become extreme. Artificial intelligence (AI), too, is progressing extremely fast. AI will help us develop many other technologies faster than we would otherwise be able to.

An important technology Kurzweil believes we’ll develop by 2030 is atomically precise manufacturing (APM). With APM we’ll be able to create physical objects where the positioning of each individual atom can be precisely controlled (31:52):

There are roadmaps to get to [APM]. The more conservative ones have that emerging by the end of the 2020s. I’m quite confident that [in the] 2030s, we’ll be able to create atomically precise structures which will enable us to create these medical nano-robots which will enable us to overcome essentially every disease and aging process — not instantly, but it’ll be a very powerful new tool to apply to health and medicine.

APM can help us cure disease and heal unwanted effects of aging. APM is also an enabling factor for 3D-printing of complex physical objects. With the help of APM, we’ll eventually be able to 3D-print e.g. clothing, computers and houses very quickly.

One can object that Kurzweil may be too optimistic. But even though many of these technologies won’t be developed until much later in the century, our technological capabilities will still be quite extreme by the year 2100. I therefore see no reason to worry about impacts of global warming that far into the future.

Kurzweil also believes we’ll soon change the way we produce food. This may be relevant for CO2 emissions (24:35):

We’re going to go from horizontal agriculture, which now takes up 40% of our usable land, to vertical agriculture, where we grow food with no chemicals, recycling all the nutrients; in vitro muscle tissue for meat; hydroponic plants for fruits and vegetables.

If he’s right, food production will require both fewer resources and much less land than today.

Will renewable energy take over?

This is a difficult question to answer. Many people are very optimistic about this, and I’m mostly an optimist myself. But there are also many who are pessimistic, especially among climate skeptics. 35) There are seemingly good arguments on both sides.

In 2011, I accepted a bet with a friend about solar energy. Based on Kurzweil’s predictions, I had spoken warmly about solar energy, and how it might soon surpass fossil energy usage in the world. My friend was sure this wouldn’t happen any time soon and suggested a bet: If we get more energy from solar than from fossil fuels within 20 years (from 2011, so by 2031), I win the bet. Otherwise he wins.

It’s primarily photovoltaic (PV) solar energy (solar cells) that’s expected to have a large increase in usage. According to Kurzweil, the reason for this is that solar PV is an information technology. It can thus benefit from computing power’s accelerating progress. It can also benefit from advancements in nano-technology and materials science.

The video below is from 2016 and shows how cheap “unsubsidized” 36) solar energy has become in many parts of the world and how quickly the price has fallen:

If you liked the video, I can also recommend this newer (from 2019) and longer video with Ramez Naam. In the newer video, Naam shows us that the price of solar has continued to fall after 2016:

Advancements in nanotechnology and materials science will make solar cells more efficient and cheaper. Solar cell technology has a great potential as a cheap energy source since future solar cells can be extremely thin and lightweight (and bendable). They can thus be integrated on many types of surfaces.

This type of solar cell has already been made and is probably not far from commercialization. The main ingredient is a material called perovskite. (Perovskite is named after the Russian mineralogist Lev Perovski.) Silicon, which is used in traditional solar cells, needs to be heated to very high temperatures. Perovskite, on the other hand, can be made at room temperature. This is one reason perovskite solar cells will be cheap compared with traditional solar cells. According to Sam Stanks:

[A] $100 million perovskite factory could do the job of a $1 billion silicon factory.

Perovskite can be made into an ink that is easily printed onto various surfaces, for example plastic.

Although dark perovskite solar cells are the most efficient, they can be made in different colors and with varying degrees of transparency. Solar cells that only capture infrared light can be completely transparent. They can thus be used on the sides of buildings (including in windows), and the building can still look very good. (Solar cells on buildings don’t have the same ecological consequences that solar power plants can have.)

Let’s take a look at how my bet is faring. According to ourworldindata.org, the amount of energy we received from solar energy in 2010 was 33.68 TWh. This corresponded to 0.024% of the world’s energy production. 8 years later, in 2018, we received 584.63 TWh from solar energy. This was still only 0.4% of the world’s total energy production. But in those 8 years, there had been a 17-fold increase in the amount of energy we get from solar, or a 15-fold increase in the share of solar energy.

When I wrote about my bet in 2011, I noted, in line with Kurzweil’s prediction, that solar energy’s share of the world’s energy consumption would double every two years. A doubling every two years becomes four doublings in 8 years, which is a 16-fold increase. We ended up getting a little more than a 15-fold increase. So far so good.
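Here’s a quick check of the arithmetic above (generation figures as quoted from ourworldindata.org; the share also depends on total world energy production, which isn’t shown here):

    solar_2010_twh = 33.68
    solar_2018_twh = 584.63

    # Actual growth in solar generation over those eight years:
    print(round(solar_2018_twh / solar_2010_twh, 1))  # about 17.4-fold

    # Kurzweil-style doubling every two years over the same eight years:
    print(2 ** (8 / 2))                               # 16.0, i.e. four doublings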

However, solar’s growth rate has slowed in recent years. From 2011 to 2019, OurWorldInData shows there was only an 11-fold increase in solar energy generation worldwide. The growth might pick up again when perovskite solar cells become available.

Ray Kurzweil is still optimistic about solar and renewable energy soon overtaking fossil fuels. In 2019, he wrote:

[Renewable energy is] doubling every 4 years and I’m confident it will meet 100% of our energy needs by 2030.

Of course, renewables’ share won’t actually reach 100% by 2030, and Kurzweil surely doesn’t really believe it will either. But if we just get more energy from solar than from fossil fuels by 2031, then I’ll be happy… In any case, it’ll be very interesting to see how solar and renewable energy fare in the coming years.

Certain forms of renewable energy sources, particularly solar and wind, also have some problems associated with them:

  • They require large areas of land and may damage or displace wildlife (wind turbines, for example, kill many birds and bats).
  • A high share of renewables in the energy mix is associated with high electricity prices.
  • Solar and wind are intermittent energy sources. They generally don’t produce energy continuously throughout the day and need to be backed up by other forms of energy in periods when they can’t produce electricity. The backup power is typically fossil fuels.
  • Although solar and wind don’t have direct greenhouse gas emissions, production and disposal of wind turbines and solar panels isn’t necessarily all that clean. Production of turbines and panels is energy intensive and requires extensive mining for materials. There are also challenges in connection with disposing of panels and turbines after end-of-life.

I won’t go into detail about these challenges, but they seem like valid concerns.

There are very smart people on both sides of the renewables debate, so how does one determine what’s right? It’s certainly not easy.

I think many of those who are most optimistic are looking more towards the future than the past. They see how solar energy prices have fallen and expect prices to continue to fall. They see better technologies on the horizon. They see potential solutions to difficult challenges.

On the other hand, the pessimists are looking more at renewable energy’s past performance, which may not be that great. They may be more aware than the optimists of the challenges that need to be solved. They don’t necessarily believe the challenges can’t be solved, but they may consider the likely cost to be higher than the benefit.

I’m generally an optimist and believe the challenges can be overcome. I’m far from certain, though, and it would be interesting to see a discussion about renewables in the comments.

Ramez Naam is also an optimist and focuses on the potential solutions:

[D]on’t bet on the forecasters, but on the innovators, they’re the ones actually making it happen[.]

Naam said this after explaining how fast Tesla had improved battery technology. In 2013, the US Energy Information Administration predicted that batteries would be about a third cheaper by the year 2048. Instead, they got 4 times cheaper in just 5 years!

Before Naam talked about the improvements in battery technology, he also showed how the amount of energy we get from solar has increased and compared it with forecasts from the International Energy Agency (IEA). It’s a bit funny to see, because while the usage of solar energy has soared, the IEA’s forecasts have been that solar energy usage will hardly increase at all:

Source: https://www.carbonbrief.org/profound-shifts-underway-in-energy-system-says-iea-world-energy-outlook (based on IEA World Energy Outlook 2019).

Hausfather and Ritchie have also discussed the above graph. Maybe not surprisingly, they write that the IEA has been criticized for being too conservative when it comes to renewable energy.

A few weeks ago, however, IEA published this year’s World Energy Outlook report. And this time they’re actually optimistic about solar:

Solar becomes the new king of electricity…

Renewables grow rapidly in all our scenarios, with solar at the centre of this new constellation of electricity generation technologies. […] With sharp cost reductions over the past decade, solar PV is consistently cheaper than new coal- or gas-fired power plants in most countries, and solar projects now offer some of the lowest cost electricity ever seen.

They’re probably talking about the price as experienced by the power utility companies. A low price for solar means more solar plants are likely to be built. Due to subsidies, consumers won’t necessarily benefit in the form of lower prices, though. I’ll briefly return to this topic very soon.

Even if IEA is more optimistic about solar energy this year than they’ve been before, they’re still a lot less optimistic than Ray Kurzweil about solar, even in their lowest-emission scenario. Considering IEA’s history of underestimating solar, it wouldn’t be surprising if their projections are still on the low side.

Another exciting alternative to fossil fuels is nuclear power — fission and fusion.

Fission-based nuclear power has gotten a bad reputation due to a few serious incidents (including Chernobyl and Fukushima), but is today a safe way to produce electricity. CO2 emissions from nuclear fission are very low, but nuclear power is relatively expensive. This is partly due to strict regulations and the fact that building a fission power plant takes a long time. New technological breakthroughs, such as modular nuclear power plants or thorium reactors, could make fission-based nuclear power more profitable and common.

Fusion has always seemed to be 20 or 30 years away, but there’s been good progress in fusion research lately. In addition to the gigantic ITER project, which is a collaboration between several countries, there are now several smaller, private companies working towards the commercialization of fusion energy. If we’re lucky, maybe we’ll start getting electricity from fusion power plants around the middle of the 2030s?

Should renewable energy take over? What about the poor?

So, I don’t think it’s unlikely that renewable energy will take over, but I didn’t explicitly say it was a good thing. Like the question of whether renewable energy will take over, the question of whether renewable energy should take over is difficult to answer. It depends…

In the long term, I think it’s inevitable that fossil fuels will, to a large degree, be replaced by alternative energy sources. The question then becomes: Should we take political action to force more renewables into the energy mix today?

If we need to subsidize renewable energy considerably for it to be used, this means we end up paying more for electricity — one way or another. Either directly through electricity fees or renewables surcharges, or indirectly through higher taxes in general. Germany and Denmark have both invested heavily in renewables, and their electricity prices (for household consumers) are the highest in Europe. They also have the highest share of taxes and levies in the overall electricity price.

Higher electricity prices naturally affect the poor the most. This means that subsidizing renewable energy can make life worse for poor people.

It’s also the poorest — especially poor people in developing countries — who will be most affected by climate change. So shouldn’t we reduce our CO2 emissions for their sake?

The problem is that a large reduction in CO2 emissions will only have a small impact on the global temperature several decades from now. This means that by reducing the world’s CO2 emissions, we are not helping the world’s poorest today, but we may be helping them a little bit — far into the future.

A much better strategy for assisting the poor is to help them become richer right here and now. The richer people are, the easier it is for them to withstand climate change and extreme weather, since the risk of dying in a natural disaster decreases with wealth. So one way to help is to allow poor countries to use the cheapest form of energy, even if the result is higher CO2 emissions in the short term.

But renewable energy — especially solar energy — is getting cheaper at a fast pace, and if subsidies are needed today, they may not be for long, at least not in fairly sunny places. Since the sun doesn’t shine at night and we don’t yet have good enough storage solutions, solar energy cannot presently cover 100% of a country’s energy needs – you need hydropower, fossil fuels and/or nuclear power in addition.

My conclusion is that renewable energy should take over if and when it becomes cheaper than the alternatives. Preferably, one shouldn’t subsidize the usage of renewable energy, but one can subsidize alternative energy research, with the goal of making it cheaper than fossil fuels. If this happens, then every country would switch to renewables (as Bjørn Lomborg often points out). And then, switching to renewables wouldn’t hurt other areas of the economy and people’s lives.

More CO2 has made the world greener

The concentration of CO2 in the atmosphere has increased from about 280 parts per million (ppm) in pre-industrial times to about 420 ppm today. (420 ppm is 0.042%.) Today’s CO2 level is still lower than the level at which trees and plants grow best, which is why it’s common to raise the CO2 concentration in greenhouses.

The increased atmospheric CO2 concentration has already had a positive effect on vegetation growth. The first satellite measurements began in 1979. In 2014, after only 35 years of satellite measurements, the amount of green vegetation on the planet had increased by 14%, according to this study. NASA has written an article about the study, where they state:

The greening represents an increase in leaves on plants and trees equivalent in area to two times the continental United States.

The whole increase can’t be attributed to the effect of CO2, but most of it can — around 70%, according to the study.

Global warming policy measures

Bjørn Lomborg is founder and director of Copenhagen Consensus Center (CCC). CCC is a think tank which invites leading economists to prioritize potential solutions to global issues. They do so using cost-benefit analysis.

Bjørn Lomborg emphasizes the importance of not wasting huge amounts of money and resources on measures that don’t work or do more harm than good.

Lomborg and CCC take the IPCC as the authority on global warming. They (or at least Bjørn Lomborg) assume that the pessimistic RCP8.5 scenario is the most likely business-as-usual scenario. And in this scenario, the temperature will, according to IPCC, be almost 4 degrees higher in 2100 than today (4.1 degrees above the 1986-2005 average).

Source: Chapter 12 (from working group I) of IPCCs previous assessment report (FAQ 12.1 Figure 1)

According to IPCC, the cost of this higher temperature will be a reduction of global GDP of about 3% in 2100 37) (see from 32:20 in the video above). This means that if the world’s GDP would have risen by 450% by the year 2100 absent the negative consequences of climate change, it will instead rise by only 436%. The lower rise in GDP is due to more climate-related damages. As Bjørn Lomborg says, this cost is a problem, not the end of the world.

In 2009, CCC assessed several potential solutions to global warming. The Paris Agreement and various taxes on CO2 were considered very bad solutions with high cost and little benefit.

Earlier in the article, I said it’s better to subsidize research on renewable energy than to subsidize its usage. Lomborg also recommends this, but in combination with a low, slowly increasing, tax on CO2. The goal of subsidizing renewable energy research is to make renewable energy so cheap that it outcompetes fossil fuels.

If the temperature, in the business-as-usual scenario, rises less than IPCC expects for RCP8.5 (as seems likely), then even the low CO2 tax that Lomborg recommends may be too high. It thus risks doing more harm than good. When we further take into account the positive effects of CO2 on plant growth, it isn’t obvious that human CO2 emissions will have a net negative effect in the next few decades.

And in the longer term, our technology will be so advanced that any temperature rise can, if needed, be reversed. CCC judged two such potential solutions as very good: Marine Cloud Whitening and Stratospheric Aerosol Insertion. 38) Both solutions cause the atmosphere to reflect more sunlight, thus reducing temperature.

Carbon Storage R&D and Air Capture R&D were also considered as very good and good solutions, respectively. Air Capture is about removing CO2 from the atmosphere. This, according to K. Eric Drexler (by some called “the father of nanotechnology”), was too expensive in 2013, but should be affordable in a few decades:

[T]o have the 21st century have a planet that resembles what we’ve had in the previous human history will require taking the CO2 levels down, and that is an enormous project. One can calculate the energy required – it’s huge, the area of photovoltaics required to generate that energy is enormous, the costs are out of range of what can be handled by the world today.

But the prospects with a better means of making things, more efficient, more capable, are to be able to do a project of that scale, at low cost, taking molecular devices, removing molecules from the atmosphere. Photovoltaics produced at low cost to power those machines can draw down CO2 and fix the greenhouse gas problem in a moderate length of time once we pass the threshold of having those technologies […] We now have in hand tools for beginning to build with atomic precision, and we can see pathways […] to a truly transformative technology.

The disadvantage of removing CO2 from the atmosphere is that we lose the positive effect CO2 has on plant growth. Getting the atmosphere to reflect more sunlight — especially over the hottest areas — may be a better solution. Lomborg seems to agree:

If [you] want to protect yourself against runaway global warming of some sorts, the only way is to focus on geoengineering, and […] we should not be doing this now, partly because global warming is just not nearly enough of a problem, and also because we need to investigate a lot more what could be the bad impacts of doing geoengineering.

But we know that white clouds reflect more sunlight and hence cool the planet slightly. One way of making white clouds is by having a little more sea salt over the oceans stirred up. Remember, most clouds over the oceans get produced by stirred-up sea salt — basically wave-action putting sea salt up in the lower atmosphere, and those very tiny salt crystals act as nuclei for the clouds to condense around. The more nuclei there are, the whiter the cloud becomes, and so what we could do is simply put out a lot of ships that would basically [stir] up a little bit of seawater — an entirely natural process — and build more white clouds.

Estimates show that the total cost of avoiding all global warming for the 21st century would be in the order of $10 billion. […] This is probably somewhere between 3 and 4 orders of magnitude cheaper — typically, we talk about $10 to $100 trillion of trying to fix global warming. This could fix it for one thousandth or one ten thousandth of that cost. So, surely we should be looking into it, if, for no other reason, because a billionaire at some point in the next couple of decades could just say, “Hey, I’m just going to do this for the world,” and conceivably actually do it. And then, of course, we’d like to know if there’s a really bad thing that would happen from doing that. But this is what could actually avoid any sort of catastrophic outcomes[.]

– Bjorn Lomborg on the Costs and Benefits of Attacking Climate Change (8:35)

Bad climate solutions can potentially have very bad consequences.

Earlier this year, Lomborg published a new book: False Alarm – How Climate Change Panic Costs Us Trillions, Hurts the Poor, and Fails to Fix the Planet. In his book, Lomborg points to two of IPCC’s new scenarios, SSP1 and SSP5. SSP1 has the tagline Sustainability – Taking the Green Road. SSP5 has the tagline Fossil-fueled Development – Taking the Highway. Both scenarios are relatively optimistic about economic growth.

According to Riahi et al. (2017), to which Lomborg refers, world per capita GDP will, by the year 2100, have increased by 600% for SSP1. For SSP5, it will have increased by as much as 1040%.

Screenshot from Lomborg’s False Alarm. GDP per capita increases to $182,000 for SSP5 and to $106,000 for SSP1.

Lomborg then subtracts the costs associated with climate change for the two scenarios. For SSP1, the cost is 2.5% of GDP. For SSP5, the cost is 5.7% of GDP:

Screenshot from Lomborg’s False Alarm. Even when subtracting the costs associated with global warming, people are still much richer in SSP5 than SSP1 in 2100.

Even after subtracting the costs of climate change in the two scenarios, the average GDP per capita is still significantly higher in SSP5 (the scenario with no restrictions on the usage of fossil fuels) than in SSP1. In SSP5, GDP per capita grows to $172,000, versus $103,000 in SSP1. The fossil-fueled scenario thus leads to much greater prosperity for most people. This is especially true for people in developing countries, who then won’t be nearly as vulnerable as today when natural disasters happen. (And natural disasters and extreme weather will, of course, continue to occur regardless of the global temperature.)
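Here’s a quick check of the False Alarm figures quoted above:

    # Per-capita GDP in 2100 before climate costs (from the screenshots above)
    gdp_ssp5 = 182_000
    gdp_ssp1 = 106_000

    # Subtract the climate-related costs: 5.7% of GDP in SSP5, 2.5% in SSP1
    print(round(gdp_ssp5 * (1 - 0.057)))  # about 172,000
    print(round(gdp_ssp1 * (1 - 0.025)))  # about 103,000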

In other words, it’s extremely important that we don’t unnecessarily limit poor countries’ opportunities for economic growth. And this is much more important than limiting our CO2 emissions. The argument applies to an even greater extent if climate sensitivity is lower than assumed by Lomborg and IPCC.

Norwegian media’s white lies and deliberate deception

NRK is a tax-funded Norwegian TV broadcaster. They have a program series called Folkeopplysningen. Folkeopplysningen means something like enlightenment of the people or public education. In 2018, they showed an episode they called “Klimakrisa” (the climate crisis) where they took sides with the alarmists. I’ve enjoyed watching many of Folkeopplysningen’s episodes. Not this one, though.

They referred both to John Cook’s 97% consensus study (which I’ve mentioned earlier) and a hockey stick temperature graph. The graph wasn’t Mann’s, but there were similarities. The reason for showing the hockey stick graph was no doubt to convince the viewer that the current rate of warming is unprecedented in the last 22,000 years. Since they didn’t use Mann’s graph, I’ll say a little about the graph they did use.

Mann’s hockey stick graph from 1999 goes back 1000 years. Folkeopplysningen’s graph, on the other hand, shows the Earth’s average temperature for the last 22,000 years. And while Mann’s graph applies to the Northern Hemisphere, Folkeopplysningen‘s graph applies to both hemispheres.

Here’s an attempt to translate some of what Andreas Wahl, the host of the program, said:

Wahl, speaking to camera: And we start 20,000 years before the current era (BCE). At that time it was more than 4 degrees colder, and as you can see, Norway and Oslo are covered by a thick layer of ice. And the temperature; it is… stable. A tiny rise here, which causes the ice to begin to melt, releasing more CO2, and the temperature rise accelerates. And it goes gently, millennium after millennium.

Wahl, speaking in the background (the on-screen Wahl is muted): The temperature in the past has had several local and short-term fluctuations that don’t appear here. But this graph shows the global trend over time.

Wahl, speaking to camera: Only here do humans start farming. A gentle rise, stabilizing just above the red line here. Continuing… The Pyramids, Stonehenge, the world’s first virgin birth. The Middle Ages; slightly higher temperatures in Europe, but not high enough to affect the global average temperature, which remains stable through the Middle Ages, the Renaissance, the Enlightenment. Until we land here, in the industrial revolution, where we dig up coal and drill wells, where we invent planes, cars and oil platforms and grow to seven billion people. Then this happens. Here we are today, and if we continue much as we do now, we envision this scenario. This scenario is demanding, but realistic, and then we don’t achieve the 2 degrees target. This, on the other hand, is our dream scenario, but it’s extremely demanding. Wherever this ends up, to say that these are natural, and not man-made, changes is rather absurd.

Screenshot from Folkeopplysningen’s episode about “the climate crisis” (11:33).

Folkeopplysningen has copied the graph (including the three scenarios for the future) from this XKCD page. The sources Folkeopplysningen provides are the same as those provided by XKCD. In addition, Folkeopplysningen states XKCD as a source:

  • Shakun et al. (2012)
  • Marcott et al. (2013)
  • Annan and Hargreaves (2013)
  • Hadcrut4
  • IPCC

The IPCC is probably included as a source for the three future temperature scenarios.

HadCRUT4 is recent temperature data (from 1850) used by the IPCC. The Met Office Hadley Centre and CRU collaborate to maintain the dataset, hence the first six letters of the name. The T stands for “Temperature” and the number 4 is the version number (4 is the latest version).

Annan and Hargreaves (2013) estimated Earth’s average temperature at the point when the ice was at its thickest during the last ice age. According to the study, the temperature was 4.0 ± 0.8 degrees lower at that time (19-23,000 years ago) than in pre-industrial times (with 95% certainty). This, they write, is warmer than earlier studies had concluded. (The earlier studies had used a more limited data set.)

Shakun et al. (2012)‘s main conclusion is that atmospheric CO2 increased before temperature at the end of the last ice age. The study also presents a temperature reconstruction that goes further back in time than Marcott et al. (2013).

Marcott et al. (2013) reconstructed global average temperature for the last 11,300 years. According to the study, the reconstruction has a time resolution of a few hundred years, which means it can’t reveal large short-term temperature fluctuations. In a FAQ published on RealClimate two weeks after the study itself was published (commented on by McIntyre here), the authors write:

We showed that no temperature variability is preserved in our reconstruction at cycles shorter than 300 years, 50% is preserved at 1000-year time scales, and nearly all is preserved at 2000-year periods and longer.

So if there have been large but short-term temperature fluctuations during the last 11,300 years, they won’t show up in the graph. Jeremy Shakun, lead author of Shakun et al. (2012), was also co-author of the Marcott study. Shakun was interviewed about Marcott et al. (2013) and was asked about potential short-term temperature fluctuations. His answer:

No, to be fair, I don’t think we can say for sure there isn’t a little 50-year warm blip in there that was much warmer than today. That could be hiding in the data out there. We don’t have the resolution for that, because we have an ocean core data point every 200 years, plus that mud’s all mixed around, so when you really get down to it, you might never see that blip.
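Here’s a toy illustration of the resolution argument. It’s just a boxcar-smoothing sketch with made-up numbers, not the study’s actual method (the smoothing in real proxy records comes from dating uncertainty and sparse sampling):

    import numpy as np

    # A hypothetical 50-year, 1-degree warm blip in an otherwise flat 11,300-year record
    temps = np.zeros(11300)
    temps[5000:5050] = 1.0

    # Smooth to a roughly 300-year resolution and see how much of the blip survives
    smoothed = np.convolve(temps, np.ones(300) / 300, mode="same")
    print(temps.max(), round(smoothed.max(), 2))  # 1.0 versus about 0.17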

The title of the XKCD page is A Timeline of Earth’s Average Temperature Since the Last Ice Age Glaciation. The subtitle is When People Say “The Climate Has Changed Before” These are the Kinds of Changes They’re Talking About.

It’s quite obvious that Andreas Wahl and Folkeopplysningen want to convey the same message as XKCD — they are trying to make viewers believe that the rate of warming since 1950 is unprecedented and that similarly rapid temperature changes haven’t occurred previously in at least 20,000+ years. But the sources stated by Folkeopplysningen don’t justify such a conclusion.

Folkeopplysningen is not unique among Norwegian media in the way they communicate about the climate. Folkeopplysningen didn’t lie outright, but the way they presented the science was imprecise. This is true both with regard to the consensus question and to the hockey stick graph that they showed. The science is presented in such a way that viewers are left with an incorrect impression. Only very observant viewers can be expected to notice Folkeopplysningen‘s imprecise use of language.

On the alleged consensus, Andreas Wahl said (translated from Norwegian):

In 2013, a study came out, that went through all published climate science to determine what share of the science confirms man-made climate change. And the result? 97%.

This, again, is imprecise — what does it mean to confirm man-made climate change? Does it mean that scientists believe humans contribute at least somewhat to climate change, or does it mean we’re the main cause? The study referred to by Folkeopplysningen (Cook et al. (2013)) only showed that among the approximately 12,000 research papers reviewed, about one third of the papers expressed an opinion about the role of humans, and among those ca 4,000 papers, 97% expressed the opinion that humans contribute to climate change or global warming — either a little or a lot. But when Andreas Wahl says what he says, it’s hard for a neutral viewer to understand that this is what he means.

In my view, Folkeopplysningen‘s special way of communicating scientific research can’t be interpreted as anything other than deliberate deception.

Faktisk.no (faktisk means actually or in fact) is a Norwegian fact-checker which is also an official fact-checker for Facebook. Faktisk is owned by some of the biggest media companies in Norway, including NRK.

Faktisk did a fact-check related to the 97% consensus claim. The article’s title (translated to English) is Yes, the vast majority of the world’s climate scientists agree that humans affect the climate. That humans affect the climate is not a very interesting claim. Even most skeptics would agree to that.

The fact-check reviews an article from resett.no, a Norwegian alternative media company. (Resett means reset.) The title of the reviewed article (again, translated to English) is How the 97% Myth Arose — and the World’s Most Resilient Fake News. There are some misrepresentations in the article, and Faktisk addressed one of them and rated the article as false. I think it’s a good thing Resett is trying to refute the 97% myth, but they need to be more accurate in their criticism. Otherwise, they’re no better than the other side.

The claim that was rated as false was (translated from Norwegian): The vast majority of the world’s climate scientists, 66%, don’t conclude whether humans influence the climate.

The claim relates to Cook’s 97% consensus study. I agree with Faktisk that the statement is false. The 66% figure is the share of studies reviewed that were assessed as not expressing an opinion on the human contribution to climate change or global warming. 66% of the studies isn’t necessarily the same as 66% of the scientists, and most of the scientists probably had an opinion, although it wasn’t expressed in the study’s abstract 39) (which was the only part of the papers that Cook et al. assessed).

Although I agree with Faktisk that Resett’s claim is false, I think it would be more interesting if they did a fact-check on another claim — the claim that 97% of the world’s climate scientists agree that human activity is the main cause of global warming. Because that’s something many (most?) Norwegians actually believe.

Faktisk actually also mentions the author of The Hockey Stick Illusion, Andrew Montford, in their fact-check:

[Kjell Erik Eilertsen refers to] a report from the British organization Global Warming Policy Foundation (GWPF), authored by Andrew Montford. It states that the Cook report’s methodology is one-sided, and that it only describes agreement on the obvious. [Translated from Norwegian]

The report they refer to is entitled Fraud, Bias and Public Relations — The 97% ‘consensus’ and its critics. I can definitely recommend reading it — Cook’s study also has an exciting story behind it, with lies, leaked data and leaked internal communication.

Faktisk doesn’t attempt to address Montford’s criticism of Cook – instead they just write:

Montford is an accountant with a bachelor’s degree in chemistry. He hasn’t published a single peer-reviewed scientific article. On the other hand, he is a well-known climate skeptic, blogger and author of the book “The Hockey Stick Illusion” about “the corrupt science”.

GWPF was established in 2009 as a charitable foundation by the climate-skeptical British politician Nigel Lawson. In 2015, they were investigated by the British Authority for Charitable Foundations. They concluded that the GWPF didn’t publish independent information suitable for teaching, but agitated politically. The organization was later split up. [Translated from Norwegian]

According to Wikipedia, the Global Warming Policy Foundation (GWPF) was split into two parts: the existing Global Warming Policy Foundation, which was to remain a charitable foundation, and the Global Warming Policy Forum, which was allowed to engage in political lobbying. (Faktisk wrote that the investigation happened in 2015, but according to Wikipedia, it happened in 2014 and possibly 2013.) My impression is that the quality of the reports published by GWPF is very high.

It’s only natural that Faktisk doesn’t want to argue that the Cook report only describes agreement on obvious things. This is because, to a large extent, Faktisk does the same thing themselves — recall their fact check title: “Yes, the vast majority of the world’s climate scientists agree that humans affect the climate.” However, they do alternate between saying that humans affect the climate and that human activity is the main cause of climate change. 40) 41)

If Faktisk’s journalists have actually read Montford’s relatively crushing report, it’s hard to understand how they can still defend Cook’s study. One way Faktisk defends the study is by re-stating the study’s rationale for why it is okay to set aside the nearly 8,000 studies in which no opinion was expressed about the human contribution to global warming.

I agree that human activity contributes to climate change, and it’s quite possible that more than 97% of climate scientists think so, too. But I also agree with Andrew Montford that this is a matter of course and not very interesting. If one wants to determine how many scientists believe human activity is the main cause of global warming since 1950, Cook et al. (2013) cannot provide the answer, and I strongly recommend that the media stop referring to that study. (Instead, see the study discussed in footnote 41.)

Improvement in the IPCC and among climate scientists?

The Climategate emails revealed some serious problems within the field of climate science prior to 2010, but is it as bad today? Fortunately, there seem to have been some improvements:

1. In a 2019 interview, Ross McKitrick said:

Now that whole constituency that wants certainty and wants catastrophe and wants the big scary message, it’s beginning to detach itself from the IPCC, because the message in the IPCC reports just isn’t keeping up with where the exaggeration folks want to go. 42)

2. Valérie Masson-Delmotte, co-chair of IPCC’s Working Group I since 2015, agrees with McIntyre’s critique of Mann’s studies. She told this to McIntyre in 2006, but she wanted McIntyre to keep her name secret. Recently (probably in 2019) McIntyre asked if he could use her name, and she allowed it. McIntyre said this in the SoundCloud interview that I embedded earlier (1:00:40).

3. You may recall that I’ve quoted Tom Wigley, the former CRU director, a few times. Wigley has recommended Michael Shellenberger’s new book Apocalypse Never — Why Environmental Alarmism Hurts Us All. Shellenberger is a climate activist who’s no longer an alarmist. He argues, among other things, that climate change doesn’t lead to worse natural disasters and that humans are not causing a sixth mass extinction. (The article I linked to here was first published on forbes.com, but was soon removed from the Forbes website 43)). Shellenberger is pro nuclear power, but skeptical about solar and wind, largely because of their ecological consequences: they require large areas of land and kill birds. In the book, he describes how producers of fossil fuels and renewable energy oppose nuclear power. (Personally, I think he’s too one-sidedly negative about renewable energy.)

In conversation with Shellenberger, Wigley said that climate change does not threaten our civilization, that it’s wrong to exaggerate to get people’s attention, and:

All these young people have been misinformed. And partly it’s Greta Thunberg’s fault. Not deliberately. But she’s wrong.

In his recommendation of the book, Wigley wrote that Apocalypse Never “may be the most important book on the environment ever written”.

4. Judith Curry is a climate scientist who became a climate skeptic after Climategate. In a 2014 blog post, she listed some positive changes that had resulted from Climategate and the hockey stick graph revelations, including:

Transparency has improved substantially. Journals and funding agencies now expect data to be made publicly available, along with metadata. The code for most climate models is now publicly available. As far as I know, there are no outstanding [Freedom of Information Act] requests for data (other than possibly some of Mann’s [Hockey Stick] data and documentation). Climategate shed a public light on the lack of transparency in climate science, which was deemed intolerable by pretty much everyone (except for some people who ‘owned’ climate data sets).

Understanding, documenting and communicating uncertainty has continued to grow in importance, and is the focus of much more scholarly attention. With regards to the IPCC, I feel that [Working Group 2] in [Assessment Report 5] did a substantially better job with uncertainty and confidence levels (I was not impressed with what [Working Group 1] did).

Life for a scientist that is skeptical of ‘consensus’ climate science or critical of the IPCC is definitely easier post-Climategate.

As a result of Climategate, there is little tolerance for the editorial gatekeeping ways of trying to keep skeptical papers from being published.

(In 2019, however, she wrote that it’s become even more difficult to get skeptical papers published in the largest and most influential journals. 44))

Despite the progress, she argues that the IPCC should be shut down.

Conclusion (Summary for Policymakers)

There’s little reason to worry about climate change — at least as a result of CO2 emissions from human activity. CO2 emissions are on a much lower path than the media leads us to believe. There’s also a good chance that the climate sensitivity is low. If so, Earth’s average temperature may not rise more than about 1°C this century (as a result of human greenhouse gas emissions).

More CO2 causes trees and plants to grow more, so it isn’t entirely obvious that our CO2 emissions will have a net negative impact in the next few decades. And in the longer term, our technology will be so advanced that the negative effects can be easily managed or reversed.

It’s important that we don’t waste large amounts of money and resources on policy measures that do more harm than good. Regarding renewables, it’s better to subsidize renewables research than its usage. If the goal is zero emissions, unsubsidized alternative energy needs to become cheap enough to out-compete fossil fuels.

For poor countries, economic growth is more important than reduced CO2 emissions. The richer you are, the greater the likelihood of surviving extreme weather and natural disasters. The effect of reduced CO2 emissions, on the other hand, won’t be measurable for decades.

There has been a lot of dishonesty among some climate scientists, but, fortunately, some things have also improved after Climategate. The media, though, isn’t one of those things. I wish for the media to become more honest when writing about the climate, and I wish they would stop with their climate scares. Less scary news could mean better mental health for many young people who are today afraid of the future.


Footnotes:

1) All emails from the second round of Climategate can (or could previously) be downloaded from here. If the link is down, Wayback Machine has them archived.

The online book I linked to in the main text has a slightly sarcastic tone. A book with a more neutral tone — which may also be better — is “Climategate: The CRUtape Letters”, but I haven’t found an online version of it.

2) To determine how many studies were categorized by Cook et al. (2013) as expressing the view that humans have caused at least 50% of the warming since about 1950, we can start with the raw data from Cook et al. (2013). We can then write a small Python program, as I have done here, where I’ve also copied the raw data into datafile.txt:

The code above is just an image (since I couldn’t avoid the code automatically receiving focus when it was embedded). Clicking the image takes you to a page where the code is (or can be) executed.

The point is to count up all the studies categorized as endorsement level 1. These are the studies that, according to Cook et al. (2013), expressed the opinion that humans are the main cause of global warming.

The endorsement level for a study is the last number on lines containing information about a study (in datafile.txt). Line 19 in datafile.txt, which describes the format of the lines below, informs us about this:

Year,Title,Journal,Authors,Category,Endorsement

When you navigate to the page with the code from the image above, the code will be run automatically. You will then see the result on the right (if your screen is wide enough — otherwise, press the play button). It should give the following result:

{‘1’: 64, ‘2’: 922, ‘3’: 2910, ‘4’: 7970, ‘5’: 54, ‘6’: 15, ‘7’: 9}

This means 64 studies were categorized as endorsement level 1. The endorsement categories can also be found in the raw data file:

Endorsement
1,Explicitly endorses and quantifies AGW as 50+%
2,Explicitly endorses but does not quantify or minimise
3,Implicitly endorses AGW without minimising it
4,No Position
5,Implicitly minimizes/rejects AGW
6,Explicitly minimizes/rejects AGW but does not quantify
7,Explicitly minimizes/rejects AGW as less than 50%

AGW is short for Anthropogenic (that is, man-made) Global Warming.
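Since the code is embedded only as an image, here’s a minimal sketch of an equivalent counting script. It assumes datafile.txt contains the raw data described above, with the endorsement level as the last comma-separated field on each data line:

    from collections import Counter

    counts = Counter()
    with open("datafile.txt", encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split(",")
            # Data lines have the form Year,Title,Journal,Authors,Category,Endorsement;
            # preamble and legend lines have fewer fields or don't end in a digit from 1 to 7.
            if len(parts) >= 6 and parts[-1] in {"1", "2", "3", "4", "5", "6", "7"}:
                counts[parts[-1]] += 1

    print(dict(sorted(counts.items())))
    # Should reproduce the result shown above, e.g. 64 studies at endorsement level 1.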

3) Without the use of principal components, a possible way to reconstruct the temperature is to take a relatively simple average of all the chronologies. But you then risk missing important patterns in the temperature data, such as the temperature hypothetically increasing a lot in many places in the 20th century. In the simple average, this won’t necessarily be visible because the temperature may have decreased in other places. Principal component analysis can also help to elicit such underlying patterns.

4) The full title of MBH98 is Global-scale temperature patterns and climate forcing over the past six centuries. It’s not especially easy to read. You’re hereby warned.

5) The full title of MBH99 is Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties, and Limitations.

6) Another thing that seemed strange was that a simple average of all the proxy series in MBH98 showed that the temperature only varied around a constant level — there was no particular trend — no sharp temperature rise in the 20th century. The temperature reconstruction in MBH98 nevertheless showed that there had been a temperature rise in the 20th century, and that these high temperatures were unique for the last 600 years:

Screenshot from a presentation McIntyre and McKitrick gave to an expert panel at the National Academy of Sciences (NAS) in 2006. Top: A simple average of all the proxy series in MBH98. Bottom: The final temperature reconstruction in MBH98.

There could, in theory, be valid and good reasons for the difference, but the result was suspicious and gave motivation for further investigations.

7) Mann’s response to McIntyre and McKitrick was published in the form of two short articles by freelance journalist David Appell on his website (archived by Wayback Machine here and here). Mann’s response didn’t quite make sense. In addition to saying that the wrong data had been used (which was probably correct), Mann criticized McIntyre and McKitrick for requesting the data in Excel format (which they hadn’t done) and for not using all 159 proxies — despite the fact that Mann’s study stated that there were 112 proxies, not 159. It was an easy task for McIntyre and McKitrick to counter Mann’s criticism, and you can see their response here.

A few days later, Mann published a formal response, in which he explained that McIntyre and McKitrick’s calculation of principal components was incorrect. McIntyre and McKitrick hadn’t used the stepwise procedure that Mann had used. However, this procedure wasn’t described in his study. In McIntyre and McKitrick’s subsequent response to Mann, they document that it’s still not possible to recreate Mann’s stepwise procedure, even with access to the new FTP website.

8) Mann had written:

Here is an email I sent [to McIntyre] a few weeks ago in response to an inquiry. It appears, by the way, that he has been trying to break into our machine[.] Obviously, this character is looking for any little thing he can get ahold of.

McIntyre briefly commented on Mann’s email in the comment section of a blog post on ClimateAudit.

9) Or down, but Mann’s program turned such graphs upside down, so that curves pointing downwards in the 20th century were interpreted as pointing upwards. From The Hockey Stick Illusion: “Meanwhile, any [series] with twentieth century downticks were given large negative weightings, effectively flipping them over and lining them up with upticks.”

10) The other area is Gaspé in south east Canada. MBH98 had used a controversial proxy from cedar trees from this area:

  • Like for the trees in California, the Gaspé tree ring widths had a distinct hockey stick shape with an uptick in the 20th century. Also, like the trees in California, the sharp increase in tree ring width didn’t match measured temperatures in the area.
  • From the year 1404 to 1447, the Gaspé chronology consisted of only 1-2 trees.
  • Mann had extrapolated the chronology back to the year 1400, so that it could be used for the earliest period in MBH98. The Gaspé chronology was the only proxy series in the study that had been extrapolated in this way.
  • The Gaspé chronology was included twice in MBH98, once as a stand-alone chronology, and once as part of the first principal component (PC1) for North America (NOAMER).

(See McIntyre and McKitrick’s 2005 Energy & Environment article.)

11) Ross McKitrick has written:

If the flawed bristlecone pine series are removed, the hockey stick disappears regardless of how the [principal components] are calculated and regardless of how many are included. The hockey stick shape is not global, it is a local phenomenon associated with eccentric proxies. Mann discovered this long ago and never reported it.

The reason McKitrick could say that Mann knew about it, is that there was a folder with a very suspicious name on the FTP website where the data was located: BACKTO_1400-CENSORED. In that folder, Mann had made calculations without the trees from California. The trees in question are called strip-bark bristlecone pines.

12) McIntyre had a different theory, though. The cross-section of the trunks of these trees wasn’t a perfect circle — there were large variations in tree ring width depending on which side the tree rings were measured from. McIntyre believed the asymmetry could be due to branches that had broken in the 19th century, possibly due to heavy snow. The asymmetry may also have made it more challenging to measure the tree rings.

Linah Ababneh re-sampled the trees in 2006 and did not find the same large increase in tree ring widths in the 20th century as was found in the earlier sampling. McIntyre talks about this in a video here. Several theories attempting to explain the seemingly drastic increase in tree ring widths were discussed in McIntyre and McKitrick’s 2005 article in Energy & Environment.

13) The verification statistics tell us something about how much we can trust the temperature reconstruction. As I understand it, and more specifically for MBH98, the verification statistics say how well the reconstructed temperature graph matches thermometer data in the study’s verification period and in earlier periods. The verification period was 1856-1901; 1902-1980 was the calibration period.

The calibration period determines the relationship between ring widths and temperature. You then want to see how well this relationship holds up in a different period where you also have both temperature and proxy data. This other period is the verification period. For MBH98, the correlation between temperature and proxy data in the verification period was R2 = 0.2, which is a rather poor correlation.
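To make this concrete, here is a minimal sketch (my own illustration with made-up numbers, not MBH98’s actual code) of how a verification R2 can be computed: square the correlation between the reconstruction and the instrumental series over the verification years.

```python
import numpy as np

def verification_r2(observed, reconstructed):
    """Squared Pearson correlation between the instrumental record and the
    reconstruction over the verification period (1856-1901 in MBH98)."""
    r = np.corrcoef(observed, reconstructed)[0, 1]
    return r ** 2

# Hypothetical example: made-up annual values for the 46 years 1856-1901
rng = np.random.default_rng(1)
observed = rng.normal(size=46)
reconstructed = 0.4 * observed + rng.normal(size=46)   # tracks the data only loosely
print(round(verification_r2(observed, reconstructed), 2))
```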

I found it difficult to understand what R2 means for earlier time periods, but I think McIntyre explained it in a somewhat understandable way in a comment to a blog post about the meaning of R2:

[T]he 1400 step used a subset of proxies available in the later steps. It produced a time series for the period 1400-1980 that was scaled (calibrated) against instrumental (an early HadCRU) for 1901-1980. Statistics comparing the 1856-1901 segment of the reconstruction time series to 1856-1901 observations are the verification statistics.

My interpretation of this: The proxies that go all the way back to the period that you want to calculate R2 for are used to create a temperature graph that goes all the way from that time period until today. Proxies that don’t go back that far are ignored in the calculation. R2 for the desired time period is how well the curve matches temperature measurements in the verification period (1856-1901).

14) RE is short for Reduction of Error and was a widely used verification statistic among climate scientists. It wasn’t used much at all in other scientific fields. In MBH98, the term β was used instead of RE, but it’s the same thing.

In MBH98, Mann wrote that their results were statistically significant if RE was greater than 0.0. But using so-called Monte Carlo analysis, in which they tested Mann’s algorithm with random data series (red noise), McIntyre and McKitrick found that the threshold value for RE was 0.59. Mann’s graph didn’t satisfy this for the earliest time period (starting in 1400), so their results weren’t statistically significant for that period. See A Brief Retrospective on the Hockey Stick by Ross McKitrick, Section 2 — Our Critique of the Method.

(R2 was low for the entire period before 1750, indicating a lack of statistical significance for the entire period from 1400 to 1750.)

Mann had also used Monte Carlo analysis to find the threshold value for RE. However, due to the error in his principal components algorithm, the threshold value that MBH98 found was incorrect. This made it look as though their temperature reconstruction was better than it actually was.
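Here is a minimal sketch of my own (an illustration of the general idea, not McIntyre and McKitrick’s actual procedure) of how such a red-noise benchmark for RE can be estimated: generate many random AR(1) series, “calibrate” each against the instrumental data in the calibration period, compute RE in the verification period, and take a high percentile of the resulting RE values as the threshold a real reconstruction must beat.

```python
import numpy as np

rng = np.random.default_rng(0)

def reduction_of_error(obs_ver, rec_ver, obs_cal_mean):
    """RE = 1 - SSE / (sum of squared deviations of the verification
    observations from the calibration-period mean)."""
    sse = np.sum((obs_ver - rec_ver) ** 2)
    ss0 = np.sum((obs_ver - obs_cal_mean) ** 2)
    return 1.0 - sse / ss0

def ar1(n, phi=0.9):
    """Red noise: a simple AR(1) series."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

# Hypothetical instrumental record: verification 1856-1901 (46 yrs), calibration 1902-1980 (79 yrs)
obs = ar1(125)
obs_ver, obs_cal = obs[:46], obs[46:]

re_values = []
for _ in range(1000):
    proxy = ar1(125)
    # "Calibrate" the random proxy against temperature by least squares in the calibration period
    slope, intercept = np.polyfit(proxy[46:], obs_cal, 1)
    rec_ver = slope * proxy[:46] + intercept
    re_values.append(reduction_of_error(obs_ver, rec_ver, obs_cal.mean()))

# The 99th percentile of RE obtained from pure noise is the benchmark a real
# reconstruction should beat before being called statistically significant.
print("RE threshold (99th percentile):", round(np.percentile(re_values, 99), 2))
```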

15) R2 or r2 is much more commonly used than RE among scientists in general, and it was also commonly used by climate scientists. An advantage of R2 over RE is that you don’t need to do Monte Carlo analysis to be able to interpret the R2 value. See A Brief Retrospective on the Hockey Stick by Ross McKitrick, Section 2 — Our Critique of the Method.

16) From the Energy & Environment article:

For steps prior to 1820, MBH98 did not report verification statistics other than the RE statistic. Unlike the above case, we cannot prove on the present record that Mann et al. had calculated these other statistics, but we consider it quite likely that these statistics were calculated and not reported. (In this case, we believe that diligent referees, even under the limited scope and mandate of journal peer review, should have requested the reporting of this information.)

17) If behind a paywall, read it here instead. The title is Global Warring In Climate Debate, The ‘Hockey Stick’ Leads to a Face-Off.

18) The temperature reconstruction from MBH99 is shown in a “spaghetti diagram” together with temperature reconstructions from several other studies in Chapter 6 (from Working Group I) on “palaeoclimate”.

19) Jon Stewart joked about the “hide the decline” email on The Daily Show. I couldn’t find the video, but here you can see Stephen McIntyre retell Stewart’s joke.

20) The video doesn’t show all of McIntyre’s slides, but the full set of slides is available here.

21) In 2013, McIntyre wrote that until then there had still not been much progress in terms of what was published in the peer-reviewed literature:

The IPCC assessment has also been compromised by gatekeeping by fellow-traveler journal editors, who have routinely rejected skeptic articles on the discrepancy between models and observations or pointing out the weaknesses of articles now relied upon by IPCC. Despite exposure of these practices in Climategate, little has changed. Had the skeptic articles been published (as they ought to have been), the resulting debate would have been more robust and IPCC would have had more to draw on [in] its present assessment dilemma.

22) Judith Curry has put it like this:

Simply, scientists are human and subject to biases. Further, they have personal and professional stakes in the outcomes of research – their professional reputation and funding is on the line. Assuming that individual scientists have a diversity of perspectives and different biases, then the checks and balances in the scientific process including peer review will eventually see through the biases of individual scientists. However, when biases become entrenched in the institutions that support science – the professional societies, scientific journals, universities and funding agencies – then that subfield of science may be led astray for decades and make little progress.

23) We saw that it was difficult for Stephen McIntyre to get access to the underlying data and methods from Mann’s studies. The same was true of many other studies. Skeptics eventually requested data under the Freedom of Information Act, but this, too, proved difficult. The book Climategate: The CRUtape Letters covers this in more detail, particularly the skeptics’ requests for data on the weather stations used in the calculation of global average temperature, so that the calculations could be verified.

24) Brandon Shollenberger (not to be confused with Michael Shellenberger) has written several blog posts documenting that John Cook (and SkepticalScience) has been dishonest.

25) But the verification statistics for MBH98 were so poor that MBH98 couldn’t really say much about what the temperature was so far back in time.

26) McIntyre criticized climate scientists for using a tree ring chronology from the Yamal Peninsula in northern Russia. The chronology had a hockey stick shape, but was based on very little data (few trees) in the later years. McIntyre suggested merging data from Yamal with data from nearby areas (including Polar Urals). In 2013, CRU (and Briffa) began using a combined Yamal chronology that was very similar to McIntyre’s proposal. The new chronology didn’t have a hockey stick shape. (See also Andrew Montford’s post on The Yamal deception.)

27) SkepticalScience writes:

Scientific skepticism is healthy. Scientists should always challenge themselves to improve their understanding. Yet this isn’t what happens with climate change denial. Skeptics vigorously criticise any evidence that supports man-made global warming and yet embrace any argument, op-ed, blog or study that purports to refute global warming. This website gets skeptical about global warming skepticism. Do their arguments have any scientific basis? What does the peer reviewed scientific literature say?

They do have a point that many skeptics are too unskeptical of arguments from skeptics. So I recommend being a little skeptical of arguments from both sides — both from alarmists and from skeptics.

28) The IPCC believes it’s very likely that the feedback effects of water vapor and albedo (how much sunlight is reflected from the Earth) are positive, i.e., that these effects contribute to a higher climate sensitivity. The feedback effect from clouds is more uncertain, but probably positive, according to the IPCC:

The water vapour/lapse rate, albedo and cloud feedbacks are the principal determinants of equilibrium climate sensitivity. All of these feedbacks are assessed to be positive, but with different levels of likelihood assigned ranging from likely to extremely likely. Therefore, there is high confidence that the net feedback is positive and the black body response of the climate to a forcing will therefore be amplified. Cloud feedbacks continue to be the largest uncertainty. The net feedback from water vapour and lapse rate changes together is extremely likely positive and approximately doubles the black body response [meaning that the climate sensitivity, ECS, doubles from about 1°C to about 2°C].
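The amplification being described is often summarized with the standard feedback formula below (a textbook-style sketch of my own, not taken from the IPCC report; the numbers in the comments are the rough values from the bracketed note above):

```latex
% \Delta T_{\mathrm{Planck}}: the no-feedback (black body) response to a doubling of CO2,
%                             roughly 1 degree C
% f: the net feedback factor (water vapour/lapse rate, albedo, clouds, ...)
\Delta T = \frac{\Delta T_{\mathrm{Planck}}}{1 - f}
% With f \approx 0.5 the response doubles, matching "approximately doubles the
% black body response" (from about 1 degree C to about 2 degrees C).
```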

29) IPCC agrees and has written:

For scenarios of increasing [radiative forcing], TCR is a more informative indicator of future climate change than ECS.

30) Zeke Hausfather, a climate scientist who works with climate models (among other things), explains:

“Sensitivity” is something that emerges from the physical and biogeochemical simulations within climate models; it is not something that is explicitly set by modellers.

31) The IPCC writes:

The Coupled Model Intercomparison Project Phase 5 (CMIP5) model spread in equilibrium climate sensitivity ranges from 2.1°C to 4.7°C[.]

32) Knutti et al. write:

Our overall assessment of ECS and TCR is broadly consistent with the IPCC’s, but concerns arise about estimates of ECS from the historical period that assume constant feedbacks, raising serious questions to what extent ECS values less than 2 °C are consistent with current physical understanding of climate feedbacks.

Many other climate scientists also disagree with Lewis that we should trust the observation-based studies more. An article on CarbonBrief comments on a new study (Sherwood et al. (2020)), which concludes that a low climate sensitivity is unlikely.

33) See the Appendix of the long version of Lewis and Crok’s climate sensitivity report.

34) In the paper they write:

The RCPs provide a starting point for new and wide-ranging research. However, it is important to recognize their uses and limits. They are neither forecasts nor policy recommendations, but were chosen to map a broad range of climate outcomes. The RCPs cannot be treated as a set with consistent internal logic. For example, RCP8.5 cannot be used as a no-climate-policy reference scenario for the other RCPs because RCP8.5’s socioeconomic, technology and biophysical assumptions differ from those of the other RCPs.

And Chapter 12 (from Working Group I) of IPCC’s previous assessment report states:

It has not, in general, been possible to assign likelihoods to individual forcing scenarios.

35) According to Judith Curry:

Skeptics generally support nuclear energy and natural gas, but are dubious of rapid expansion of wind and solar and biofuels.

36) It’s been argued that solar (and wind) energy receives unfair advantages in competition with other energy sources, and that these advantages amount to hidden subsidies that raise the price consumers have to pay for electricity.

37) “5-95% percentile range 0.5-8.2%”

That the IPCC expects climate change to have relatively low costs can also be seen in Chapter 10 (from Working Group II) of their previous assessment report:

For most economic sectors, the impact of climate change will be small relative to the impacts of other drivers (medium evidence, high agreement).

38) The question that the economists were tasked with: “If the global community wants to spend up to, say $250 billion per year over the next 10 years to diminish the adverse effects of climate change, and to do most good for the world, which solutions would yield the greatest net benefits?”

39) As part of Cook et al. (2013), a survey was also e-mailed to the authors of the reviewed studies, asking them to categorize (or rate) their own study (or studies). When rated by the authors themselves, a higher proportion of the studies (64.5%) expressed an opinion about the role of humans. And, as when rated by Cook et al., 97% of the studies that expressed an opinion agreed that humans contribute to climate change.

Unfortunately, Cook et al. (2013) didn’t reveal how many of the authors rated their own study as agreeing that human activity was the main cause of global warming. But Dana Nuccitelli, co-author of Cook et al. (2013) and contributor at SkepticalScience, gave the answer in the comments section of a blog post. Out of 2143 studies, 228 studies were rated by their authors as Category 1 — that humans are the main cause:

The self-rating column looks like it has 228 Category 1 results. There were a further 18 where 1 author rated a paper as a 1, but a second author rated it as a 2.
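A quick back-of-the-envelope calculation (my own arithmetic, using only the numbers above) puts this in perspective:

```python
# Numbers quoted above: 228 Category 1 self-ratings out of 2143 self-rated studies
category_1 = 228
total_self_rated = 2143

print(f"{category_1 / total_self_rated:.1%} of the self-rated studies were placed in "
      f"Category 1 ('humans are the main cause')")
# That is roughly 10.6%, a very different figure from the 97% headline, which counts
# agreement that humans *contribute*, among the studies expressing an opinion.
```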

40) In some places, Faktisk writes that scientists think humans merely contribute to climate change. In other places, they write that scientists think humans are the main cause. This can be confusing for the reader. In support of the “main cause” claim, Faktisk quoted NASA (though they only provided a Norwegian translation of NASA’s text):

Multiple studies published in peer-reviewed scientific journals show that 97 percent or more of actively publishing climate scientists agree: Climate-warming trends over the past century are extremely likely due to human activities.

The source provided by NASA is a meta-study from 2016 entitled Consensus on consensus: a synthesis of consensus estimates on human-caused global warming. The lead author is John Cook, so I’ll call it Cook et al. (2016). Several of Cook’s co-authors, including Naomi Oreskes, Peter Doran and William Anderegg, are themselves authors of other consensus studies discussed in the meta-study.

NASA’s claim is an exaggeration of what the meta-study says. Cook et al. (2016) concludes:

We have shown that the scientific consensus on [Anthropogenic Global Warming] is robust, with a range of 90%–100% depending on the exact question, timing and sampling methodology.

This is also not entirely accurate. The study defines the consensus position as follows:

The consensus position is articulated by the Intergovernmental Panel on Climate Change (IPCC) statement that ‘human influence has been the dominant cause of the observed warming since the mid-20th century'[.]

Cook et al. (2016) also provided the definition of consensus for the various studies. And for several of the studies the consensus definition does not align well with IPCC’s definition. We have already seen that Cook et al. (2013) doesn’t use the IPCC definition. For Doran and Zimmerman (2009), the consensus definition is “Human activity is a significant contributing factor in changing mean global temperatures”. For Stenhouse et al. (2014) the consensus definition is “Humans are a contributing cause of global warming over the past 150 years”. For Carlton et al. (2015) the question was “Do you think human activity is a significant contributing factor in changing mean global temperatures?”

For each study (where possible), Cook et al. (2016) provided numbers both for all respondents/authors and for a sub-set: publishing climate scientists. Their analysis shows that among scientists with many climate science publications, the share of scientists who agree with the consensus definition is higher than for scientists with fewer climate science publications:

This may well — at least to some extent — be due to the difficulty for skeptics to get their papers published in scientific climate journals.

In the above image, “C13” is Cook et al. (2013), “DZ1”, “DZ2” and “DZ3” are Doran and Zimmermann (2009), “S141”, “S142” and “S143” are Stenhouse et al. (2014), and “C151” and “C152” are Carlton et al. (2015). These studies all had a weaker consensus definition than the IPCC’s. “A10200” is a subset of Anderegg et al. (2010) — the 200 authors with the greatest number of published climate-related papers (there were 1372 respondents in total). 66% of all 1372 respondents agreed with Anderegg’s consensus definition, but this result hasn’t been plotted in the image. Other studies have also been left out of the plot.

41) Faktisk writes (translated from Norwegian):

A 2016 survey among 1868 scientists showed that, among those having published at least ten peer-reviewed scientific papers, 90 percent agreed that human greenhouse gas emissions is the most important driver of climate change.

They link to a study by Verheggen and others, where John Cook is one of the co-authors. The study’s title is Scientists’ Views about Attribution of Global Warming. The study has a publication date of 22 July 2014, and according to the study, the survey was conducted in 2012 (not 2016).

If the goal is to determine the share of scientists who think humans are the main cause of global warming, then this is a much better study than Cook et al. (2013).

Verheggen et al. (2014) e-mailed a 35-question survey to 7555 scientists. 1868 responded. Question 1 was “What fraction of global warming since the mid-20th century can be attributed to human-induced increases in atmospheric [greenhouse gas] concentrations?” The answers were distributed as follows (blue dots show the percentages for all 1868 respondents):

66% of the 1868 respondents thought that humans contribute more than 50% to global warming through greenhouse gas emissions. But some thought the question was difficult to answer. Of those who gave a quantitative answer, 84% thought the human contribution was more than 50%.
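As a rough check (my own arithmetic from the two percentages above), the same numbers also tell us approximately how many respondents gave a quantitative answer at all:

```python
# Percentages reported above for question 1 in Verheggen et al. (2014)
share_of_all = 0.66            # "more than 50% human contribution", share of all 1868 respondents
share_of_quantitative = 0.84   # the same group, as a share of those giving a quantitative answer

implied_quantitative = share_of_all / share_of_quantitative
print(f"~{implied_quantitative:.0%} of respondents gave a quantitative answer, "
      f"so ~{1 - implied_quantitative:.0%} did not")
```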

The first part of question 3 was similar to question 1, but the options were more qualitative than quantitative: “How would you characterize the contribution of the following factors to the reported global warming of ~0.8 °C since preindustrial times: [greenhouse gases], aerosols, land use, sun, internal variability, spurious warming?” So the first part of question 3 was about greenhouse gases, and the answers were distributed as follows (unfortunately the figure doesn’t show the percentages for all 1868 respondents, but since the four categories are roughly equal in size, a simple average will give a good approximation):

There were significantly fewer who were undecided on this question than on question 1.

Those who answered “Strong warming” are naturally counted as supporting the consensus definition that human greenhouse gas emissions have contributed most of the temperature increase since 1950. According to the study, some of those who answered “Moderate warming” should also be counted as supporting it, namely those who had not chosen “Strong warming” for any of the other sub-questions in question 3. This doesn’t seem unreasonable, but it isn’t entirely obvious that it’s justified, either.

In any event, counted in this way, 83% of the 1868 respondents supported the consensus definition, or 86% of those who were not undetermined.

The number used by faktisk.no is 90%. Here’s how it appears in Verheggen et al.:

Excluding undetermined answers, 90% of respondents, with more than 10 self-declared climate-related peer-reviewed publications, agreed with dominant anthropogenic causation for recent global warming. This amounts to just under half of all respondents.

Again, I’m a little skeptical of the strong focus on the scientists who have published the greatest number of peer-reviewed studies, given how difficult it was for skeptics to get published in the peer-reviewed literature. A positive thing about this study is that it includes some skeptics who have only published outside of peer-reviewed journals (“gray literature”).

Although Verheggen et al. (2014) in many ways appears to be a good study, they’ve misrepresented the results of other studies. They write:

However, Oreskes, Anderegg et al., and Cook et al. reported a 97% agreement about human-induced warming, from the peer-reviewed literature and their sample of actively publishing climate scientists […]. Literature surveys, generally, find a stronger consensus than opinion surveys. This is related to the stronger consensus among often-published — and arguably the most expert — climate scientists.

However, as we’ve seen, Cook et al. (2013) has a completely different consensus definition than Verheggen et al. (2014). While the consensus definition in Verheggen et al. (2014) focused on human activity as the main cause, the definition in Cook et al. (2013) was only that humans contribute to global warming.

Later, though, they write:

Different surveys typically use slightly different criteria to determine their survey sample and to define the consensus position, hampering a direct comparison. It is possible that our definition of “agreement” sets a higher standard than, for example […] Doran and Kendall-Zimmermann’s survey question about whether human activity is “a significant contributing factor”.

It isn’t just possible, it’s absolutely certain.

42) However, the IPCC’s Summaries for Policymakers are often more alarming than the reports they summarize. Richard Tol, a Dutch economist who has contributed extensively to the IPCC’s assessment reports, said the following about the chapter he was a coordinating lead author for in the IPCC’s previous assessment report:

That [many of the more dramatic impacts of climate change are really symptoms of mismanagement and poverty and can be controlled if we had better governance and more development] was actually the key message of the first draft of the Summary for Policymakers. Later drafts … and the problem of course is that the IPCC is partly a scientific organisation and partly a political organisation and as a political organisation, its job is to justify greenhouse gas emission reduction. And this message does not justify greenhouse gas emission reduction. So that message was shifted from what I think is a relatively accurate assessment of recent developments in literature, to the traditional doom and gloom, the Four Horsemen of the Apocalypse and they were all there in the headlines; Pestilence, Death, Famine and War were all there. And the IPCC shifted its Summary for Policymakers towards the traditional doom and gloom [message].

43) Climate Feedback, a fact-checker for Facebook, has criticized Shellenberger’s article. Shellenberger has responded, and Mallen Baker has commented on the conflict on YouTube.

44) Curry writes:

The gate-keeping by elite journals has gotten worse [in my opinion], although the profusion of new journals makes it possible for anyone to get pretty much anything published somewhere.

She also mentions two other things that have gotten worse in recent years:

– Politically correct and ‘woke’ universities have become hostile places for climate scientists that are not sufficiently ‘politically correct’

– Professional societies have damaged their integrity by publishing policy statements advocating emissions reductions and marginalizing research that is not consistent with the ‘party line’
