The Scientific Consensus on Climate Change:
How is it measured and what it means.
Ray Weymann/Central Coast Climate Science
It is frequently said that “97 percent of climate scientists agree that the climate is changing, due mostly to human activities,” or words to that effect. I recently received email from a friend asking what kind of surveys were done to determine this. This essay is my response to that question.
Before getting into those details, though, some preliminary comments:
I commonly encounter two responses when people first hear about the “strong scientific consensus” on climate change and the role of human activities in driving this change.
The first response is something like:
“So what? Scientists don’t vote. Science isn’t done by consensus.”
It is true that scientists do not take a vote to settle uncertain matters. What they actually do, though, is compile evidence, and interpret and discuss it through workshops and peer-reviewed articles in professional journals. When a heavy majority have reached a strong consensus about some issue after this process, that issue stops being one that attracts further research efforts. Instead, research efforts turn toward resolving other issues about which little is known or about which there may be substantial controversy.
A classic example from my own field of astrophysics and cosmology was the debate over whether the universe was in a “steady state,” an idea championed by cosmologist Fred Hoyle, or whether it was evolving from an initial “big bang,” whose early proponents included astrophysicist George Gamow.
That debate raged through the 1950s, but then came the 1964 discovery of the “cosmic background radiation,” which Gamow had predicted. This was followed by unequivocal evidence that galaxies formed long ago look very different from those formed recently. Although a few diehards clung to the steady state hypothesis for a few years afterward, one never sees research today debating this issue. The “big bang” really did occur.
So, just because there is a strong consensus on the basic statement that we are in the midst of a changing climate that is being driven by human activities, that doesn’t make it automatically true. What it does mean is that the evidence accumulated from previous research has convinced the heavy majority of researchers of its truth.
For those of us not specialists in this field, there is thus good reason to give great weight to this consensus in the same way that most of us do not smoke and discourage children from doing so. We do this not because we ourselves have done research on the adverse health effects of smoking, but because we are aware of the very strong consensus among medical experts on these adverse health impacts.
A second comment I frequently hear is:
“In the middle ages, there was a strong consensus that the Earth was the center of the Universe. Then along came Galileo, a lone dissenting voice who was ultimately proven correct. So much for your strong consensus!”
Those who make this or similar arguments miss the fact that that particular consensus was authoritarian in nature, not the result of the evidence-gathering process I described above. The consensus before Galileo about the Earth’s place in the universe was based not on evidence but on theology.
In fact, what Galileo did was precisely what scientists do now: He made careful observations, drew conclusions from them and published them. When others made similar confirming observations or read about this evidence, then the Galilean point of view became a real scientific consensus.
A final preliminary remark: We are discussing here surveys, not petitions. This is an important distinction. In a carefully done survey, one seeks opinions from a sample as free from bias as possible. “Do you prefer Pepsi or Coke?” is a survey. “Sign this petition if you prefer Pepsi over Coke” is not.
I mention this now because people often bring to my attention the fact that “thirty thousand scientists signed a petition” saying that “there is no convincing scientific evidence that human release of carbon dioxide will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere”. This refers to the “Oregon Petition” whose signers included only a tiny fraction of scientists doing research in climate science.
For a discussion of the “Oregon Petition” see:
With those preliminaries out of the way, what information do we have on the degree of consensus among scientists about climate change? There have been several surveys, and here is a graphic showing the results of seven of them:
The seven surveys shown above (and they are not the only ones) are of two kinds:
1) Questionnaires sent to groups of scientists asking for their opinions on climate change.
In this first group is the work of Doran and Zimmerman (2009), Stenhouse et al. (2014), Verheggen et al. (2014), and Carlton et al. (2015).
2) Surveys of the published peer-reviewed literature or other published information, from which the views of the authors on this topic can be inferred.
In this second group is the work of Oreskes (2004) and Anderegg et al. (2010).
The work of Cook et al. (2013) is a kind of hybrid because it was initially a literature survey, but was then followed up by questions to the authors asking them to self-rate the positions their papers took on human caused climate change.
To go through all seven of these papers in detail would make this already-long essay much too lengthy, so I will examine in detail just two of these. The remaining five proceed very similarly and reach the same basic conclusions.
The survey of Doran and Zimmerman
I have chosen to discuss this survey for two reasons:
First, it illustrates a result found by several of these surveys: The closer the group of scientists sampled comes to that group who are actual climate scientists and who are actively publishing in peer-reviewed journals on climate change, the higher the degree of consensus on human-caused climate change.
Second, the results of this survey have been subject to criticism that reflects a lack of knowledge of basic statistics.
The following two questions were sent to 10,257 scientists identified in a database as being “Earth Scientists”:
1) “When compared with pre-1800s levels do you think mean global temperatures have generally risen, fallen, or remained relatively constant?”
2) “Do you think human activity is a significant contributing factor in changing mean global temperatures?”
Of course, if a respondent did not think mean global temperatures had risen, then question (2) is moot. So the interesting number is the percentage, within each group of Earth scientists, who answered “risen” to question (1) and then “yes” to question (2).
Of the 10,257 Earth scientists to whom the questions were sent, 3,146 responded, a response rate of about 31 percent, which is fairly typical. The 3,146 respondents (whose identities were not revealed to Doran and Zimmerman) were asked to identify the subfield of Earth science they belonged to (e.g., geochemistry, hydrology, etc.), as well as how often, and on what topics, they published papers in peer-reviewed journals.
For the entire group of 3,146 respondents, 82 percent answered “yes” to question (2). As this group was narrowed toward Earth scientists who classify themselves as climate scientists and who frequently publish peer-reviewed papers, the subgroups shrank in size; in the final, most specialized category there were only 77 respondents. But 75 of those 77 answered “yes,” a “yes” rate of 97.4 percent.
I frequently hear this last result criticized on the grounds that a sample of only 77 respondents is too small to be meaningful. Could it be that the “true” result would be about 50 percent if a much larger sample from the same group had responded?
The statistics of this result are just the same as asking: Suppose you flipped a coin 77 times and it came up heads 75 times. What are the chances that if the same coin were flipped one trillion times instead of just 77 that the result would be about 50 percent heads and 50 percent tails? That is, could it be that the true probability of a tail flip is 50 percent, and it was just an unusual string of flips that produced 75 heads and only 2 tails in the experiment?
If you have taken a course in algebra or statistics and remember it, this is a straightforward question to answer. The answer is that the probability that a fair coin would give such a result is absurdly small. (And we can add in the even smaller probabilities of one tail or none.) In other words, it is beyond any doubt that the true probability is not anything like 50 percent.
To make this point very strongly, I have shown the details of this calculation in the Appendix to this essay. There I also discuss a slightly more interesting calculation: what is the true percentage of “yes” responses for which there is just a 5 percent chance that 75 or more “yes” answers out of the 77 arose by chance? (Five percent is a typical “confidence level” often applied in statistical tests.)
The answer is that we can have high confidence that the true percentage of “yes” answers is at least 92 percent despite the small sample size of 77 respondents from publishing climate scientists.
The Cook et al. 2013 analysis of published papers
Cook and collaborators surveyed nearly 12,000 papers in the peer-reviewed literature, found by keyword searches for “global climate change” or “global warming” in a database of scientific papers. They then had two volunteers independently read only the title and abstract of each paper and rate it according to guidelines provided to them.
These guidelines asked for classifications on whether an opinion (or no opinion) was expressed in the abstract about whether climate change was or was not occurring due to human activities. In the infrequent cases in which the two volunteers disagreed, a third person resolved the disagreement.
As a follow up to this survey they then contacted the authors of the papers whose abstracts had been rated, and asked the authors to provide their own evaluation based upon the full paper. The only significant difference between the evaluation by the volunteer readers and the self-evaluation by the authors was that the abstracts frequently expressed no explicit opinion on human-caused climate change, whereas the authors felt that the papers implied endorsement of human-caused climate change.
The result was very similar to the Doran and Zimmerman result and the final sentence of the Cook paper reads: “Among papers expressing a position on AGW [human-caused global warming], an overwhelming percentage (97.2% based on self-ratings, 97.1% based on abstract ratings) endorses the scientific consensus on AGW.”
Cook and his co-authors considered possible biases in their technique and conclude that none of those considered had any significant impact on their result.
However, skeptics have raised with me one possible bias that these authors did not mention: could it be that papers rejecting the consensus simply could not get published in peer-reviewed journals because of bias on the part of the editors and reviewers? I think the instances in which this occurs are very rare.
There are, after all, published papers which do reject the consensus. Moreover, there are a great many journals and often a paper that is rejected by one journal will then be sent to others. But because the evidence for human-caused climate change really is compelling, it is increasingly hard to make a scientifically credible case for a dissenting view.
Most importantly, however, scientists, and especially the editors of scientific journals, are keenly aware that the reputation of their journals and of the professional societies with which they are affiliated depends strongly upon the integrity of the peer-review process. (Please see my Essay #1, “The Peer Review Tradition”: http://www.centralcoastclimatescience.org/essays.html.)
I have not described the other five papers whose consensus results are described in Figure 1. The methodology is quite similar to one or the other of the above two papers, except that the groups whose opinions are being surveyed vary–some are more general, some are more specialized.
A common thread, though, is that the consensus is strongest among those whose expertise is in climate science and who are active researchers as demonstrated by their recent publication records.
Why does documenting this consensus matter?
Why have the authors of these and other papers gone to the trouble of examining the degree of consensus on this issue?
Because several studies have shown that the degree of public support for action to control greenhouse gases is, not surprisingly, strongly dependent upon acceptance of the scientific consensus on the reality of human-caused climate change and its mostly negative consequences. And this acceptance, in turn, is strongly dependent upon the public’s recognition of the high degree of consensus among the community of climate scientists.
In a sort of back-handed confirmation of the preceding paragraph, the fossil fuel industry has made a major effort to promote the notion that there is no consensus on this issue among the scientific community. This is documented in the book “Merchants of Doubt” by Oreskes and Conway (http://www.merchantsofdoubt.org/). The same strategy (even involving some of the same doubt-promoters) was used by the tobacco industry to oppose action to curb cigarette smoking.
Regrettably, one would have to say that this strategy by the fossil fuel industry, in collaboration with sympathetic politicians, has been quite successful, as shown in the following figure.
The actual public acceptance about climate change was slightly higher than this as of March 2016, but the November 2016 election may change this situation.
Appendix: The binomial probability distribution and the Doran and Zimmerman statistics
The “binomial probability distribution” applies to a series of independent trials (like a series of coin flips) whose only outcomes are “binary”: yes or no, heads or tails, black socks or white socks. It is assumed that there is a “true” probability of each outcome, which may be anywhere from 0.0 to 1.0.
For example, if we had an enormous drawer filled with 3 trillion white socks and one trillion black socks (well mixed!), then we could say that upon randomly reaching into the drawer and pulling out a sock, the probability of getting a white sock would be 3/4 = 0.75 and 1/4 = 0.25 for a black sock.
If we pulled out 5 socks, what is the probability of getting 3 white ones and 2 black ones? The binomial probability distribution formula tells us how to do that calculation, but in this simple example we can do simple reasoning:
Suppose the first three were white and then the 4th and 5th were black. The probability of exactly this sequence occurring is:
0.75 × 0.75 × 0.75 × 0.25 × 0.25 = 0.0263671875
But we could also end up with WBBWW as well and the probability of that is exactly the same; it is just the order of the W and B that are different. In fact, if you look at all the different combinations of 3 whites and 2 blacks you will find out there are 10. So the probability of getting 3 W and 2 B is 10 times the number above, or 0.26367…
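As a check on the sock arithmetic above, the same numbers can be reproduced with a few lines of Python (math.comb is in the standard library as of Python 3.8):

```python
from math import comb

# One specific sequence of 3 whites then 2 blacks, e.g. WWWBB:
p_one_order = 0.75**3 * 0.25**2

# There are C(5, 3) = 10 distinct orderings of 3 whites and 2 blacks,
# each with the same probability, so the total is 10 times larger.
p_total = comb(5, 3) * p_one_order

print(p_one_order)  # 0.0263671875
print(p_total)      # 0.263671875
```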
When the number of socks you pull out of the drawer gets large (like 77), doing this counting “in your head” becomes too difficult, but there is a simple formula for the number of ways of choosing k items out of n:
C(n,k) = n!/[k!(n−k)!], where the “!” does not express surprise but denotes the “factorial.” For example, 5! = 5×4×3×2×1 = 120.
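The C(n,k) formula is easy to try out directly; here is a short Python sketch using the standard library’s factorial:

```python
from math import factorial

def C(n, k):
    """Number of ways to choose k items out of n: n! / (k! * (n-k)!)."""
    return factorial(n) // (factorial(k) * factorial(n - k))

print(C(5, 3))    # 10: the orderings of 3 whites and 2 blacks
print(C(77, 75))  # 2926: the ways to place 75 heads among 77 flips
```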
If you want a simple explanation for where the C(n,k) formula comes from see https://betterexplained.com/articles/easy-permutations-and-combinations/
Now, finally, we can compute the likelihood of getting, just by chance, 75 heads and 2 tails if the true probability of heads was 0.5, (and therefore the same for tails).
The probability of this is C(77,75) × (0.5)^77, which turns out to be about 0.000 000 000 000 000 000 019 363, a pretty small number! It is preferable to ask what the probability is, under these same assumptions, of getting at least 75 heads. This changes the answer only very slightly, to 0.000 000 000 000 000 000 019 877.
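These tiny numbers are easy to reproduce; a minimal Python sketch of the binomial calculation:

```python
from math import comb

n = 77    # number of coin flips
p = 0.5   # probability of heads for a fair coin

def prob_exactly(k):
    # Binomial probability of exactly k heads in n flips
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_75 = prob_exactly(75)
p_at_least_75 = sum(prob_exactly(k) for k in range(75, n + 1))

print(p_75)           # about 1.94e-20
print(p_at_least_75)  # about 1.99e-20
```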
So, despite the “small” number of 77 respondents in the Doran and Zimmerman survey who are publishing climate scientists, the conclusion is this: the likelihood that the “true” percentage of publishing climate scientists who would say yes to question (2) is roughly 50 percent is vanishingly small.
A more interesting calculation is to ask what the true percentage would have to be for there to be a 5 percent chance that 75 or more yeses out of the 77 responses arose by chance. Without showing the details of the calculation, it turns out that this true percentage is about 92 percent.
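The details of that calculation amount to a simple root-finding problem: find the “true” yes-probability p at which the chance of seeing 75 or more yeses out of 77 is exactly 5 percent. A Python sketch using bisection (no special libraries required):

```python
from math import comb

N, K, ALPHA = 77, 75, 0.05

def tail(p):
    # P(at least K "yes" answers out of N) if each respondent
    # says "yes" independently with probability p
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(K, N + 1))

# tail(p) grows as p grows, so bisect for the p where it equals ALPHA.
lo, hi = 0.5, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if tail(mid) < ALPHA:
        lo = mid
    else:
        hi = mid

print(lo)  # about 0.92, the lower bound quoted above
```

The 97.4 percent observed in the survey is thus consistent, at the usual 5 percent level, with a true “yes” rate no lower than about 92 percent.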
Here are links to the seven surveys cited in this essay for those who wish to read them:
Oreskes 2004: http://science.sciencemag.org/content/306/5702/1686
Doran and Zimmerman 2009: http://tinyurl.com/zmmc962
Anderegg et al. 2010: http://www.pnas.org/content/107/27/12107.full
Cook et al. 2013: http://iopscience.iop.org/article/10.1088/1748-9326/8/2/024024/pdf
Verheggen et al. 2014: http://pubs.acs.org/doi/abs/10.1021/es501998e
Stenhouse et al. 2014 (A survey of professional members of the American Meteorological Society): http://pubs.acs.org/doi/abs/10.1021/es501998e
Carlton et al. 2015: http://iopscience.iop.org/article/10.1088/1748-9326/10/9/094025/pdf
Dr. Ray Weymann is a retired astrophysicist with over 40 years of experience in teaching and research. He received his undergraduate degree in science from Caltech and his PhD from Princeton. He has published research in many areas of astrophysics, including the transfer of energy through the Sun and other astronomical objects whose physical processes are the same as those governing the Earth’s climate. He is an elected member of the American Academy of Arts and Sciences as well as the National Academy of Sciences.
Since moving to Atascadero in 2003 he has used his background in physics and astrophysics to educate the public and students about climate change and has given numerous lectures and short courses on Climate Change throughout San Luis Obispo County. He was a co-founder of the Climate Science Rapid Response Team, a “matchmaking” service directing inquiries from journalists about climate change to appropriate members of a large roster of experts in all aspects of climate science.
This article originally appeared on Dr. Weymann’s website, Central Coast Climate Science. It is republished here with permission.