WARNING! LONG, BORING, SCIENTIFIC ENTRY FOLLOWS.  READ AT YOUR OWN RISK!

It would be hard to think of an instance where the separation of true knowledge from mere prognostication is of more import than in the debate surrounding anthropogenic global warming.  With the recent revelations of the – shall we say – creativity within the research staff of East Anglia’s Climatic Research Unit, a pause seems in order.  Let’s review briefly what is known with reasonable certainty about global warming:

  1. Emissions of carbon dioxide increased over the 20th century, rising slowly from about 1890, with nearly all of the increase occurring after 1940.
  2. Some analyses of records of sea surface temperature and land surface temperature suggest that global temperatures have increased on the order of 0.5 degrees Celsius in the 20th century (however, see below).
  3. The same records indicate that no global-scale warming has occurred since about 1998.
  4. Sea levels are estimated to have risen anywhere from zero to 5 inches over the 20th century.

That is the sum total of the “settled science” – to use the phrase currently in vogue among true believers.  All of the other predictions used to justify global emissions curbs derive from computer models, which apply the laws of thermodynamics and fluid mechanics to assumed changes in atmospheric composition, and from the ensuing speculation about the worst-case scenarios that could result for humankind.  Despite the strenuous assertions of the Goreites in the media and academe, there are severe problems with the computer models and even with the observed temperature records.

First, the notion of “settled science” is itself unscientific; such phraseology has always been deployed as a semantic and political device to minimize dissent.  No sober researcher without an agenda would ever use the term.  One needs no reminder that the body of “settled science” once included Aristotle’s views on motion, Ptolemy’s model of the universe, the phlogiston model of chemistry, the concept of the invisible aether through which radio waves traveled, the belief that light emanated from the eyeballs to illuminate things, and even Newton’s Laws (refined and to a certain degree replaced by Einstein’s).

Now, to the problems.  Computer models are useful tools, but they embody the assumptions given to them by the human programmer and are therefore inherently biased toward those assumptions.  For this reason, the assumptions must be solidly founded on correct theory and reliable observed data, and the models must be accurately calibrated and verified against long-term records.  This is not the case with today’s long-term climate models, because:

  • The magnitude of the warming effect of carbon dioxide is not known with reasonable certainty, and therefore the computer models cannot be adequately calibrated.  Some researchers believe that CO2 is actually a relatively minor greenhouse gas with much less effect than is currently programmed into most models.
  • Most researchers agree that the most important greenhouse gas is water vapor.  Yet not enough is known about the role of water vapor in amplifying or mitigating the warming influence of the other greenhouse gases.
  • The effect of the sunspot cycle on global temperatures has not been adequately defined and factored into the models.  Some studies indicate that the sunspot cycle correlates more strongly with global temperatures than do increases in greenhouse gas concentrations.

Yes, the computer models predict warming.  But if the models are simply reproducing incorrect or poorly founded assumptions of the programmers, an “output” from the model in itself means little.  The fact that “the computer model says so” is often used to bolster and even conceal underlying poor science.
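
It is worth seeing how directly a model’s “prediction” can restate its inputs.  Below is a minimal sketch in Python – purely illustrative, and not taken from any actual climate model – of the standard zero-dimensional energy-balance relation, in which equilibrium warming is the product of a radiative forcing and a climate sensitivity.  The logarithmic CO2 forcing formula (F = 5.35·ln(C/C0) W/m²) is a published approximation; the sensitivity values, the scenario, and the function name are invented for the example.

    # Minimal zero-dimensional energy-balance sketch (illustrative only;
    # not any real climate model).  Equilibrium warming for a CO2 change:
    #     dT = sensitivity * F,  with forcing F = 5.35 * ln(C/C0) in W/m^2.
    # The 5.35 coefficient is the standard simplified CO2 forcing formula;
    # the sensitivity (K per W/m^2) is an assumed input, not a measurement.
    import math

    def equilibrium_warming(c_final_ppm, c_initial_ppm, sensitivity):
        """Equilibrium temperature change (K) for an assumed sensitivity."""
        forcing = 5.35 * math.log(c_final_ppm / c_initial_ppm)  # W/m^2
        return sensitivity * forcing

    # One and the same emissions scenario (a doubling, 280 -> 560 ppm),
    # run with three different assumed sensitivities:
    for lam in (0.3, 0.8, 1.2):
        print(f"sensitivity {lam:.1f} -> warming {equilibrium_warming(560, 280, lam):.1f} K")

The identical scenario “predicts” about 1.1, 3.0, or 4.4 degrees of warming depending solely on the assumed sensitivity.  A model whose key parameter is not independently constrained is, in this respect, an elaborate restatement of its programmer’s assumption.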

In addition to the shaky computer modeling, the land surface temperature records used to bolster the observed 20th-century warming suffer from several defects:

  • Stations which were originally located in a rural area or on the outskirts of a city have gradually been encompassed by asphalt, brick, and concrete.  It is well known that stations in urban areas tend to read warmer than their surroundings because of heat stored and re-radiated by man-made structures.  A station which is “urbanizing” should therefore be expected to show a gradually increasing warm bias.
  • It has now been revealed that much of the data at stations in less-developed countries such as China cannot be adequately accounted for, and that many of these stations were actually moved large distances over the period of record.
  • It now appears that the density of weather stations has been reduced over the period of record, and that the eliminated stations were disproportionately in colder and/or high-altitude areas.  Where raw station averages are used, dropping the cold stations raises the computed mean even if no individual station warms, producing an increasing warm bias (a simple numerical sketch follows this list).
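
The dropout effect in that last item is easy to demonstrate with a hypothetical example.  The sketch below (in Python; the stations and their temperatures are invented) holds every station at a perfectly constant temperature and simply averages whichever stations report in a given year, as a naive compilation might.  When the two cold mountain stations leave the network after year 10, the computed mean jumps.

    # Hypothetical illustration of station-dropout bias.  Five invented
    # stations hold perfectly constant temperatures; the two cold,
    # high-altitude stations stop reporting after year 10.  A naive
    # average over whichever stations report shows a spurious warming step.
    stations = {
        "valley_a":   15.0,
        "valley_b":   14.0,
        "coastal":    12.0,
        "mountain_a":  2.0,   # eliminated after year 10
        "mountain_b":  0.0,   # eliminated after year 10
    }

    def reporting(year):
        """Stations still in the network in a given year."""
        if year <= 10:
            return stations
        return {k: v for k, v in stations.items() if not k.startswith("mountain")}

    for year in (1, 10, 11, 20):
        active = reporting(year)
        print(f"year {year:2d}: naive mean = {sum(active.values()) / len(active):.1f} C")
    # Years 1-10 average 8.6 C; years 11 on average 13.7 C – a 5-degree
    # "warming" with zero change at any station.  Anomaly-based methods
    # exist to guard against this, but only if applied consistently.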

A review of the last 12 to 15 years of published research on climate change reveals a bias toward anthropogenic global warming – a conscious disregard of adverse data and analyses – that is simply too obvious not to notice.  The recent appearance of chinks in the armor of the UN climate change juggernaut has caused respectable scientists to step back, take a breath, and contemplate that there may be nothing “settled” about the science of global warming.  Nothing, at least, that warrants the sort of panic being drummed up by people like Al Gore, and the forcible economic redistribution that is the passion of Mr. Gore and other climateoholics.
