Part 2
6. IPCC bias
7. CO2 and H2O
8. Oceans and geotectonics
9. Solar influences
10. Hype, hysteria, heresy
11. Sources
The Intergovernmental Panel on Climate Change (IPCC) was founded in 1988 to assess ‘the scientific, technical and socioeconomic information relevant for the understanding of the risk of human-induced climate change’. Although it is depicted as an authoritative body that produces balanced reports, the reality is very different. Some 2500 scientists are said to be involved in the IPCC process, but the vast majority have no direct influence on the conclusions expressed in its reports, which are controlled by a small clique of scientists wedded to the doctrine that human CO2 emissions are causing dangerous global warming.
The Summary for Policymakers (SPM) that accompanies each IPCC report is traditionally a simplistic, rather alarmist document designed to persuade politicians and the public that urgent action is needed to curb greenhouse gas emissions. Government-appointed bureaucrats revise the SPM line by line; in the case of the 2007 SPM, just 52 scientists were involved in approving the changes proposed by delegates from 115 countries. SPMs are highly selective summaries of the voluminous science reports, which are typically over 800 pages long. Although the science reports are very informative, they are biased, selective, misleading and unreliable on many subjects. A major problem is that chapter authors are frequently direct participants in the controversies and disputes that they have to summarize. Experience shows that they tend to showcase their own articles and those of their associates.
The IPCC’s Third Assessment Report (TAR), which appeared in 2001, was noteworthy for its use of spurious scientific papers to back up its claim of ‘new and stronger evidence’ of anthropogenic global warming. One of these was the infamous ‘hockey stick’ paper, which is now known to contain basic statistical flaws. The IPCC also cited a paper claiming that pre-1940 warming was of human origin and caused by greenhouse gases. This work, too, contained fundamental errors in its statistical analysis (Singer, 2008), and the IPCC now attributes only post-1975 warming mainly to anthropogenic greenhouse gases.
The Fourth Assessment Report (AR4) was published in 2007. The Summary for Policymakers was released in February 2007, three months before the bulk of the Working Group I (WG I) report The Physical Science Basis that it was supposedly summarizing. This was said to be necessary so that the main report could be brought into line with the conclusions in the Summary for Policymakers. This is certainly a novel approach to doing ‘science’! As already illustrated in section 3, AR4 is marred by errors and misrepresentations, and ignores scientific data that would upset its major conclusion – that warming over the past few decades is ‘unprecedented’ and is mainly caused by anthropogenic greenhouse gases.
To illustrate the IPCC’s bias, Roger Pielke Sr. & Dallas Staley have listed dozens of peer-reviewed scientific papers whose message conflicts with the one the IPCC wishes to put across and which are ignored in AR4.
Some of the scientists the IPCC enlists as ‘expert reviewers’ question man-made global warming, but the critical comments they submit are typically dismissed. For instance, Vincent Gray submitted 1878 comments on AR4, 16% of the total, but nearly all were rejected. Some nonconforming experts have either declined to become involved in the IPCC process or have later withdrawn from it. The IPCC sometimes deselects scientists known to disagree with its alarmist viewpoint.
For instance, Paul Reiter, nominated by the US to assist the IPCC with the 2007 report because he is the world’s foremost expert on malaria, was passed over in favour of two non-experts who had published only one peer-reviewed paper on malaria between them, but had each published alarmist reports in conjunction with the green movement. When Reiter asked why these two were chosen over him, the IPCC lied to the effect that he had not been nominated. (Monckton, 2007c)
The WG I report forms the basis for the reports of working groups II and III: Impacts, Adaptation and Vulnerability and Mitigation of Climate Change. The number of scientists involved in WG I was just over 600, less than a quarter of the total. For the first time ever, the UN has released on the internet the comments of reviewers who assessed the drafts of the WG I report and the IPCC editors’ responses. This was almost certainly a result of intense pressure applied by Steve McIntyre and his allies using the Freedom of Information Act. Tom Harris & John McLean (2008) write:
An examination of reviewers’ comments on the last draft of the WG I report before final report assembly (i.e. the ‘Second Order Revision’ or SOR) completely debunks the illusion of hundreds of experts diligently poring over all the chapters of the report and providing extensive feedback to the editing teams. ...
A total of 308 reviewers commented on the SOR, but only 32 reviewers commented on more than three chapters and only five reviewers commented on all 11 chapters of the report. Only about half the reviewers commented on more than one chapter. It is logical that reviewers would generally limit their comments to their areas of expertise but it’s a far cry from the idea of thousands of scientists agreeing to anything.
Compounding this is the fact that IPCC editors could, and often did, ignore reviewers’ comments. ... Reviewers had to justify their requested changes but the responding editors appear to have been under no such obligation. ...
The attitude of the editors seemed to be that simple corrections were accepted, requests for improved clarity tolerated but the assertions and interpretations that appear in the text were to be defended against any challenge.
An average of 67 reviewers examined each chapter of the WG I report; no chapter was examined by more than 100 reviewers and one by as few as 34. It is commonly claimed that thousands of IPCC scientists agree with the key statement (in chapter 9) that ‘Greenhouse gas forcing has very likely caused most of the observed global warming over the last 50 years’. In reality, only 62 scientists – 55 of whom had serious vested interests – reviewed the chapter concerned, and almost 60% of their comments were rejected by IPCC editors. Only five reviewers explicitly endorsed the chapter – four had vested interests and the fifth made only a single comment on the entire 11-chapter report. Over two-thirds of all the authors of chapter 9 had coauthored papers with one another, and 40% of the papers cited appeared under the name of at least one chapter author (McLean, 2008). This helps to explain why the chapter makes little mention of contrary scientific opinions.
The IPCC’s handling of the hockey-stick controversy in AR4 speaks volumes. Ross McKitrick (2007b) describes how the IPCC dealt with his and Steve McIntyre’s work:
Despite having published five journal articles on the hockey stick controversy by the time the IPCC report was being drafted, the IPCC initially ignored all but our first paper. They falsely claimed that we had offered up a novel climate reconstruction that had failed model validation tests, and that we had been unable to replicate Mann’s work because we omitted a key part of his data set. They also claimed that our results were rebutted in an unpublished paper by Wahl and Ammann, who had (they said) successfully replicated Mann’s results. As we pointed out in our replies, none of this was true. We had repeatedly denied that we were presenting a new reconstruction, instead we were attempting to replicate Mann’s reconstruction based on his stated methods and data. We showed that it was not possible to get his results using his stated data and methods. The IPCC failed to mention that we had proved to Nature’s satisfaction that the original disclosure of data and methods was, indeed, inaccurate, and a Corrigendum had been ordered. Based on the amended disclosure of data and methods, the results of Wahl and Ammann were identical to ours, not to Mann’s, and, like us, Wahl and Ammann had found that Mann’s claims of finding statistical significance could not be replicated. But the draft version of the Wahl and Ammann paper submitted to the IPCC omitted the latter findings, which were included in the version they had submitted to a journal for publication. And their paper had not been published, which should have ruled out its usage by the IPCC in any case. ...
The final, published text of the IPCC report thoroughly misrepresents the hockey stick debate, ignores published evidence against Mann’s original results that had been upheld by two independent expert panels, relies on unpublished claims in the Wahl and Ammann paper while ignoring their replication of our results, etc. This whole section of the AR4 is indefensible and stands as a lasting testament to the bias of its authors, and the willingness of the IPCC process to indulge such biases. (pp. 8-9)
The shocking and sordid story of Wahl & Ammann’s deceitful attempt to shore up the hockey stick is told by Bishop Hill (2008). As he says, it is ‘a remarkable indictment of the corruption and cynicism that is rife among climate scientists’. And the IPCC played along because they were desperate for anything that would save them from having to ditch the hockey stick altogether.
As noted in section 3, many scientists whose work is uncritically used in IPCC reports have failed to archive their data and methodology so that their work can be replicated by others. Even worse, the journals in question not only allow this to happen, but have subsequently defended the lack of disclosure. The IPCC, too, has brought no pressure to bear on scientists who flout best practice and whose articles it cites in its reports.
Although reviewers’ comments and IPCC responses are now partially available, IPCC scientists have refused to release documents that would shed light on the internal discussions that led to its responses to reviewers’ comments and to the final text of the report (climateaudit.org/?p=3194; climateaudit.org/?p=3193).
One of the IPCC review editors for the paleoclimate chapter of AR4 is John Mitchell, Chief Scientist of the UK Met Office. David Holland submitted requests under the Freedom of Information Act (FOIA) for all documents and correspondence relating to Mitchell’s duties as review editor. The Met Office first replied that Mitchell had destroyed all of his email correspondence – even though the IPCC requires the retention of documents for five years. Then it made the ridiculous claim that Mitchell acted as review editor in a ‘personal capacity’ – even though he made use of Met Office resources and the Met Office boasts that it is ‘the single most influential scientific contributor’ to AR4, and has provided lead authors and review editors (climateaudit.org/?p=3208).
The Climatic Research Unit (CRU) was asked to disclose the identities of the ground stations used in a key paper, authored by its head, Phil Jones, and his coworkers, cited by the IPCC, and to provide the raw data. After prolonged FOIA actions covering about two years, all it provided was a still incomplete list of stations. In a response to a request submitted to the NOAA regarding the same article, Steve McIntyre was eventually informed that some of the data he had requested no longer existed as it had been adjusted many times (climateaudit.org/?p=1471, climateaudit.org/?p=3255)!
Clearly many climate scientists do not want independent scientists looking over their shoulders and checking their work. But their well-documented obstructive tactics merely undermine their credibility and are bound to backfire in the long run. The best approach would be for all scientists to be totally open and honest; if the science is sound it will withstand scrutiny, and if it is not, it should be corrected. Instead, IPCC scientists often behave like high priests defending a religious doctrine that they know to be true, and treat anyone who dares to challenge their faith as heretics and scoundrels.
David Holland (2007) sums up the situation:
Rather than the consensus of thousands of scientists, the IPCC conclusions represent the passionate belief of a small number of scientists whose funding and research careers depend heavily upon continuing alarm. The belief is then shared by a much larger number of environmentally and politically motivated individuals, organisations and also businesses that have evolved to service the emission reductions that the IPCC calls for. The vested interests of these groups are powerful sources of bias. (p. 981)
False certainties
The terms ‘uncertain’ and ‘uncertainties’ appear more than 1300 times in the 987-page full report of WG I (The Physical Science Basis). The 74-page Technical Summary of WG I alone identifies 54 ‘key uncertainties’ in scientific knowledge of climate change. As Green et al. (2007) say: ‘These acknowledged uncertainties often concern key points that bear directly on an assessment of the likely magnitude of future climate change and therefore have great relevance to policymakers in terms of policy choices and implementation time scales.’ Yet the IPCC (2007) still manages to claim that ‘most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations’.
AR4 defines ‘very likely’ as meaning a probability of over 90%. This sounds very ‘rigorous’, but the probability figure has not been determined statistically – it is merely an indication of the level of confidence the lead authors have in their own beliefs and models! There is no compelling evidence that humans are the main cause of the warming since 1976 (from 1947 to 1976 the earth was cooling). IPCC review editor John Mitchell wrote in 2007: ‘It is only possible to attribute 20th century warming to human interference using numerical models of the climate system.’ All the IPCC is saying is that its climate models, which are programmed with the assumption that CO2 is a major driver of climate, cannot reproduce the warming of the past 30 years without including a substantial ‘human’ influence. Fred Singer (2008, p. 2) points out that the IPCC’s conclusion ‘seems to be based on the peculiar claim that science understands well enough the natural drivers of climate change to rule them out as the cause of the modern warming. Therefore, by elimination, recent climate changes must be human-induced.’ As Roy Spencer (2008c, p. 82), says, the difficulty of confidently attributing current warming to specific causes ‘makes the claim that global warming is due to humans more of a belief system than a scientific observation’.
The IPCC (2007) included the following figure in the AR4 Summary for Policymakers (fig. SPM-2). It expresses the IPCC’s belief that since 1750 anthropogenic forcing of the climate has been 13 times greater than solar forcing. The factors listed in this figure cannot explain earlier warming and cooling cycles during the Holocene, when significant human influences were clearly not at work.
Fig. 6.1 The IPCC explains this figure as follows: ‘Global-average radiative forcing (RF) estimates and ranges in 2005 for anthropogenic carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O) and other important agents and mechanisms, together with the typical geographical extent (spatial scale) of the forcing and the assessed level of scientific understanding (LOSU). The net anthropogenic radiative forcing and its range are also shown. ... Additional forcing factors not included here are considered to have a very low LOSU. Volcanic aerosols contribute an additional natural forcing but are not included in this figure due to their episodic nature.’
Clearly, the fact that a particular climate factor has a low level of understanding does not necessarily mean that its influence is negligible. Note that the above figure ignores the main greenhouse gas: water vapour. Vincent Gray (2002, p. 61) explains: ‘The most important greenhouse gas, water vapour, and the clouds that result from it, have been relegated to the status of a feedback, where the large uncertainties in their estimation can be concealed.’ Many scientists would dispute the tiny role the IPCC assigns to solar forcing, which is based only on changes in total solar irradiance and disregards the effects of the solar wind, solar magnetism, and changes in ultraviolet radiation. AR4 states that the estimated solar forcing has been revised downwards despite ‘substantial uncertainty in the timing and amplitude of solar variations on time scales of several decades to centuries’. Ongoing research is being conducted into the impact of cosmic rays, mediated by the solar wind, on the formation of clouds, which control the amount of sunlight reaching the earth’s surface. All of the late 20th-century warming could have been caused by a slight change in cloudiness. Geotectonic events that release heat from the earth’s interior and affect ocean circulation patterns are ignored altogether. As for greenhouse gases, the IPCC’s claim to have a ‘high’ level of understanding of their impact on temperature cannot be taken seriously (see next section). Patrick Frank (2007) says that uncertainties in the energetic responses of the earth’s climate systems are over 10 times larger than the entire effect attributed to increased CO2.
Changes in the ‘level of scientific understanding’ rating assigned to different climate forcings during the drafting of AR4 clearly indicate that some of the assessments are arbitrary. In the review comments on the Second Order Draft, one reviewer pointed out that the level of scientific understanding for pre-satellite-era solar forcing had jumped from ‘very low’ in the TAR to ‘medium’ in the draft of AR4, and suggested that this should either be explained or corrected. In response to this one objection, the IPCC author decided to reduce the rating to ‘low’ – suggesting that there was no reason to scale it up so far in the first place. In the first draft of AR4, the scientific understanding of 6 out of 15 climate forcing categories was rated as ‘very low’. Ross McKitrick (2007b) writes:
In response to reviewer comments, the second draft scaled down its certainty ratings so that 7½ out of 15 were Very Low (contrails includes two subcategories, one Low and one Very Low). In other words, half the categories of major climatic forcings were subject to the lowest possible rating for scientific certainty. I did not find any review comments on the second draft saying this overstated the uncertainty, yet in the final, published report only 4 of 15 Very Low ratings are shown (with two categories deleted). And in the Summary for Policy Makers Figure SPM-2, none of the forcings in the Very Low categories appear, creating the impression of greater certainty than was indicated in Table 2.11 at the close of scientific review.
Given the enormous uncertainties in understanding a system as complex and interconnected as the climate, the IPCC’s confident assertions about the predominant role of man-made greenhouse gases are no more than an expression of faith.
Modelling mania
The main foundation of IPCC (pseudo)science is computer climate models, or GCMs. GCM originally stood for ‘general circulation model’, though it is now often glossed as ‘global climate model’ – ‘greenhouse catastrophe model’ might be a more accurate name! The IPCC claims that models cannot account for the warming of the late 20th century without assigning a significant role to man-made greenhouse gases. But this may simply mean that the models have grossly exaggerated the warming expected from rising greenhouse gases while greatly underestimating solar and other natural forcings and neglecting negative feedbacks from clouds and water vapour.
The IPCC’s Fourth Assessment Report (AR4) presents model-based predictions of dramatic and harmful increases in average world temperatures over the 21st century. Some IPCC scientists insist that the IPCC does not make any climate forecasts, but only ‘projections’ based on different emission scenarios. Nevertheless, the word ‘forecast’ and its derivatives occur 37 times, and ‘predict’ and its derivatives 90 times in the relevant chapter of the report. The IPCC provides what might be called ‘conditional forecasts’.
Two experts on scientific forecasting, Kesten Green & Scott Armstrong (2007), have audited the IPCC’s projections, and conclude that ‘because the forecasting processes ... overlook scientific evidence on forecasting, the IPCC forecasts of climate change are not scientific’ and do not provide a sound basis for developing public policy.
We audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report to assess the extent to which they complied with forecasting principles. We found enough information to make judgments on 89 out of a total of 140 forecasting principles. The forecasting procedures that were described violated 72 principles. Many of the violations were, by themselves, critical. (p. 997)
‘Make sure forecasts are independent of politics’ is an example of a principle that is clearly violated by the IPCC process.
The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts’ predictions are not useful in situations involving uncertainty and complexity. We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder. (p. 997)
While advocates of complex climate models claim that they are based on ‘well established laws of physics’, there is clearly much more to the models than the laws of physics otherwise they would all produce the same output, which patently they do not. And there would be no need for confidence estimates for model forecasts, which there most certainly are. Climate models are, in effect, mathematical ways for the experts to express their opinions. (p. 1002)
AR4 lead author Jim Renwick has admitted: ‘Climate prediction is hard, half of the variability in the climate system is not predictable, and so we don’t expect to do terrifically well.’ His expert view is that current climate models are unable to predict future climate any better than chance.
Climate is a coupled, nonlinear, chaotic system, and therefore the long-term prediction of future climate states is not possible. Given that weather forecasts are rarely accurate for more than a couple of days, it is incredible that there is so much blind faith in IPCC temperature forecasts for the next hundred years, and that multi-trillion dollar measures are being taken on the basis of them. Roger Pielke says: ‘Weather is very difficult to predict; climate involves weather plus all these other components of the climate system, ice, oceans, vegetation, soil etc. Why should we think we can do better with climate prediction than with weather prediction?’ (climatesci.org).
In the IPCC’s draft 1995 report, there was a chapter headed ‘Validation of climate models’. One of the reviewers, Vincent Gray, pointed out to the IPCC that none of the models had ever been validated. As a result, the title was changed, and the word ‘validation’ has not been used since. Instead, they speak of ‘evaluating’ models, by trying to fit them to selected climate sequences. But since many of the ‘parameters’ used in the models are highly uncertain, it is often possible to fit a model to some climate sequence by suitably tweaking their values. As Gray says, this used to be called ‘fudging’.
Kiehl (2007) gives a good example. Climate models differ by a factor of 2 to 3 in the values they use for climate sensitivity (usually 1.5 to 4.5°C for a doubling of CO2), yet they all simulate a global warming of 0.5 to 0.7°C over the 20th century to within 25% accuracy. They manage to do this because the range of anthropogenic forcing also varies by a factor of just over 2, largely due to the threefold range of uncertainty in the aerosol forcing.
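A back-of-the-envelope sketch of the compensation Kiehl describes: under a simple equilibrium relation, a high-sensitivity model paired with a small net forcing (strong aerosol cooling) hindcasts the same 20th-century warming as a low-sensitivity model paired with a large net forcing. The relation and the numbers below are illustrative, not taken from any actual model.

```python
F_2X = 3.7   # W/m^2 forcing for a CO2 doubling (commonly cited value)

def warming(sensitivity_per_doubling, net_forcing):
    """Equilibrium warming for a given net forcing, ignoring ocean lag."""
    return sensitivity_per_doubling * net_forcing / F_2X

# High sensitivity + strong aerosol cooling (small net forcing):
print(warming(4.5, 0.5))   # ~0.61 C
# One-third the sensitivity + weak aerosol cooling (large net forcing):
print(warming(1.5, 1.5))   # ~0.61 C -- an identical hindcast
```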
In each successive report, the IPCC has adjusted its temperature ‘predictions’ in the light of the latest observations. In 1995, for example, it significantly reduced its 1990 prediction. This enables its forecasts to remain closer to reality than they otherwise would. But as Roger Pielke Jr. (2008) points out: ‘This is a bit like predicting today’s weather at 6PM.’ Climatologists do not give much publicity to how the model-based predictions they made several decades ago have fared. The graph below shows why.
Fig. 6.2 The red, orange and yellow curves are the temperatures predicted by James Hansen in 1988 for three different scenarios. Scenario C assumes a drastic reduction in CO2 emissions in the 1990s, which has obviously not happened. Scenarios A and B are for different rates of increase in CO2. The actual increase corresponds more or less to scenario B. Yet global temperatures are even undershooting curve C at present. (rankexploits.com)
Climate models did not predict the timing or size of the El Niño which caused the temperature spike in 1998. Nor did they predict the lack of temperature rise during the past decade despite the continuing increase in CO2 levels, or the cooling of the oceans since 2003. Individual climate models are typically unable to reproduce the observed mean surface temperature to better than ±3°C; in the polar regions the average error is 3 to 5°C, with all models overestimating mean Antarctic temperatures by at least 5° (McKitrick, 2007a). For the next two decades, models predict a 0.2°C temperature rise per decade, which will later become slightly higher according to most models. For the end of this century, the IPCC provides best estimates (for different scenarios) ranging from 0.6 to 4.0°C. The most likely value is said to be about 3°C.
Fig. 6.3 The IPCC’s (2007, fig. SPM-5) projected increases in global average temperature for different CO2 emission scenarios: B1 assumes atmospheric CO2 will level off at 600 parts per million (ppm); A1B assumes growth to 850 ppm; A2 assumes growth to 1250 ppm; ‘Year 2000’ assumes no growth beyond the present 390 ppm. The solid lines are multi-model global averages of surface warming (relative to 1980-99) for the various scenarios. The shading ‘denotes the plus/minus one standard deviation range of individual model annual averages’. This means that 68% of the time (one standard deviation), the projections of a particular model will fall within the shaded regions. Note that this tells us nothing at all about the reliability of the temperature predictions.
The IPCC’s projection that temperature could rise by as much as 6°C by 2100 is based on extreme and unrealistic scenarios. Two of its scenarios assume that population will reach 15 billion by 2100, though most demographers say population will peak at 10 billion in the middle of the century and then decline. The IPCC grossly exaggerates the long-term increase in emissions from poor countries, and assumes that poor nations will catch up with or even surpass wealthy nations in per-capita income by the end of the century. Even according to the most conservative storylines used by the IPCC, per-capita GDP in the US in 2100 would be surpassed by Estonia, Latvia, Lithuania, North Korea, Malaysia, Singapore, Hong Kong, Libya, Algeria, Tunisia, and Argentina (Singer, 2008).
Patrick Frank (2007) writes: ‘General Circulation Models have dozens of parameters and possibly a million variables, and all of them have some sort of error or uncertainty. A proper assessment of their physical reliability would include propagating all the parameter uncertainties through the GCMs, and then reporting the total uncertainty.’ This has apparently never been done, even though it is standard practice in the physical sciences. This means that ‘the same people who express alarm about future warming disregard their own profound ignorance’.
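A minimal sketch of what such propagation could look like for a single uncertain parameter, pushed through a toy forcing-response relation rather than a real GCM; all numbers here are illustrative.

```python
import random

random.seed(0)
F_2X = 3.7           # W/m^2 per CO2 doubling
FORCING_2100 = 3.7   # assumed net forcing by 2100 (one doubling)

def project(sensitivity):
    # Trivial stand-in for a model run.
    return sensitivity * FORCING_2100 / F_2X

# Treat climate sensitivity as uncertain: 3.0 +/- 1.5 C (illustrative).
samples = [project(random.gauss(3.0, 1.5)) for _ in range(100_000)]
mean = sum(samples) / len(samples)
sd = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
print(f"projection: {mean:.1f} +/- {sd:.1f} C")   # ~3.0 +/- 1.5 C
# With dozens of uncertain parameters, the propagated spread widens further.
```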
Climate models are particularly poor at modelling clouds and precipitation – which may be the most important thermostat in the earth’s atmosphere (see next section). Satellite observations show that precipitation increases by 6.5% for each 1°C rise in temperature, whereas models predict an increase of only 1-3%.
The graph below compares observed cloudiness at different latitudes with the cloudiness expected by 10 climate models. All the models show significant misses at all latitudes, including the tropics, where clouds can have a large impact on climate. The resulting standard average cloudiness error is ±10.1%, which produces a ±2.8 W/m2 uncertainty in model projections. This uncertainty equals about ±100% of the excess forcing attributed to all the human-generated greenhouse gases presently in the atmosphere. After 100 years, the uncertainty is 20 to 40 times greater than the temperature increase being predicted. In fact, even after just a few years, ‘a GCM global temperature prediction is no more reliable than a random guess. That means the effect of greenhouse gasses on Earth climate is unpredictable, and therefore undetectable. And therefore moot’ (Frank, 2007).
Fig. 6.4 Each grey line shows a model’s hindcast of the percentage cloud cover, averaged by latitude. The black line shows the observed cloud cover.
The UK Hadley Centre used a GCM to generate an artificial climate, and then tested how well the model predicted the same climate it had generated. It performed poorly, because tiny uncertainties in starting conditions rapidly expanded and drove the GCM into incoherence. The conclusion was that ‘annual mean global mean temperatures are potentially predictable 1 year in advance and that longer time averages are also marginally predictable 5 and 10 years in advance’ (Frank, 2007). Yet the IPCC predicts 100 years ahead! In another test, 15 GCMs employed by the IPCC were used to predict future El Niño/Southern Oscillations (ENSO) in the event of a doubling of CO2. Seven predicted no significant change, five predicted a weaker ENSO, and three predicted a stronger ENSO. As Patrick Frank (2007) says: ‘This result is exactly equivalent to “don’t know.” ’
Current models are unable to account for climate change at regional level. Koutsoyiannis et al. (2008) compared temperature and precipitation records, at least 100 years long, from eight stations worldwide with the output of various climate models. They conclude that models perform poorly, even on a climatic (30-year) scale, and that local model projections are not credible. Furthermore, models used in the latest IPCC report do not perform any better than those used in the previous report. The models underestimate multiyear fluctuations, with the result that ‘model predictions are much poorer than an elementary prediction based on the time average’ – a damning indictment of the current state of climate modelling. The authors also say that the common argument that models perform better at larger spatial scales is no more than an unsupported conjecture.
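The kind of test Koutsoyiannis et al. apply can be illustrated in a few lines: score a model hindcast against the ‘elementary prediction’ that every year simply equals the station’s long-term average. The series below are invented purely for illustration.

```python
def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

obs   = [14.2, 14.5, 13.9, 14.8, 14.1, 14.6, 13.8, 14.4]   # station record
model = [13.1, 15.2, 14.9, 13.5, 15.0, 13.2, 14.9, 13.4]   # model hindcast

baseline = [sum(obs) / len(obs)] * len(obs)   # 'time average' prediction

print(rmse(model, obs))      # ~1.08
print(rmse(baseline, obs))   # ~0.33 -- the trivial baseline wins,
                             # i.e. the model adds no local skill
```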
CO2 properties
It is a sign of the prevailing climate of ignorance that the public has been led to believe that carbon dioxide is a ‘pollutant’. There are certainly plenty of harmful pollutants in the atmosphere (e.g. particulates, and nitrogen and sulphur oxides), and local and regional measures are needed to combat such emissions. But while carbon particles (soot) are a pollutant, CO2 is not – it is a benign, nontoxic gas that is food for plants, a vital ingredient of photosynthesis and therefore of the food chain, and essential to life on earth. It has been called the ‘molecule of life’.
As a greenhouse gas, CO2 traps heat by absorbing and reemitting infrared radiation emitted from or reflected by the earth’s surface. Other greenhouse gases include methane and nitrous oxide, but by far the most important one is water vapour. The pre-industrial concentration of CO2 in the atmosphere is said to have been about 280 parts per million by volume (ppmv), whereas the current concentration is 385 ppm – i.e. just under four hundredths of 1% of the atmosphere. By contrast, the concentration of water vapour in the air ranges from 0 to 5% by volume, with an average value of about 1%. Water vapour and clouds are responsible for about 90% of the greenhouse effect. This is not just due to water vapour’s higher concentration, but also because it is far more efficient than CO2 at absorbing infrared (IR) radiation, as the following diagram shows.
Fig. 7.1 The top panel shows the incoming solar radiation in red, and outgoing radiation in blue, the rest being absorbed or scattered. The lower panels show the wavelengths of the radiation absorbed by the main greenhouse gases. CO2 absorbs infrared radiation in only three narrow bands of frequencies, and only the one corresponding to a wavelength of 15 micrometres (µm) has much significance. Even if the atmosphere consisted of nothing but CO2, it could still absorb no more than 8%, and perhaps as little as 4%, of the heat radiating from the earth.
Where the grey shading extends to the top of a panel, it indicates that the energy at that wavelength is fully absorbed. This means that adding more of the gas in question will not absorb any more energy as that wavelength is fully saturated. Parts of the CO2 spectrum are already fully saturated, and adding more CO2 will result in ever diminishing effects as more of the available wavelengths become saturated. (globalwarmingart.com)
As atmospheric CO2 increases, plants grow faster, and are also able to grow under drier conditions since leaves transpire less, i.e. lose less water. Commercial growers deliberately increase CO2 levels in agricultural greenhouses to between 700 and 1000 ppm to raise productivity and improve the water efficiency of food crops. The increase in atmospheric CO2 since the beginning of industrialization is said to have increased the average plant growth rate by about 15%. Experiments indicate that if the present atmospheric CO2 concentration were to increase by about 300 ppm, the productivity of earth’s herbaceous plants would rise by around 30% while the productivity of its woody plants would rise by around 50% (Idso & Idso, 2007).
Increases in both water temperature and atmospheric CO2 enhance marine biological productivity, leading to increased production of dimethylsulfide and various iodocarbons. These are known to be instrumental in creating more, brighter, and longer-lasting clouds, which reflect more incoming solar radiation back to space (Idso & Idso, 2007). This negative (mitigating) feedback is one of the factors contributing to the self-regulation of the earth’s climate and biosphere.
CO2 and temperature
CO2’s current atmospheric concentration is close to its lowest level of the past half-billion years. Back in the Eocene, when many of our plant families evolved, the concentration was five times higher than today. About 450 million years ago CO2 concentrations were more than 10 times present levels, yet the earth was in the throes of a severe ice age.
Fig. 7.2 (geocraft.com)
When it was discovered that the ice-core record showed a close match between temperature and CO2 during the last ice age, most climatologists immediately jumped to the conclusion that this confirmed the theory that CO2 drives temperatures. But closer analysis soon revealed that in at least the last four interglacial cycles, temperatures rose 800 or more years before increases in atmospheric CO2 and methane (CH4). The IPCC (2007) says: ‘Variations in CO2 over the last 420 kyr [420,000 years] broadly followed Antarctic temperature, typically by several centuries to a millennium.’ This is because rising temperatures cause the oceans in particular to release more of the CO2 and methane dissolved in them. AGWers assert that the rising CO2 concentration then amplifies warming, but the data provide no evidence for significant amplification (Glassman, 2006a). The fact that large surges in CO2 did not cause a runaway greenhouse effect confirms that significant negative feedbacks are at work.
Fig. 7.3 CO2 and temperature measured from the Vostok ice, Antarctica. (Petit et al., 1999)
In recent times, the correlation between the official CO2 record and global temperature has been rather poor. To explain the discrepancies, AGW proponents simply invoke whatever additional warming or cooling factors are required. 40% of the 20th-century temperature increase occurred before 1940, during which period hydrocarbon use was almost unchanged. Hydrocarbon use then rose by 330% between 1940 and 1972, yet temperatures fell slightly. The sixfold increase in hydrocarbon use since 1940 has had no significant effect on temperature trends. This implies that cooling from negative physical and biological feedbacks counteracts the effect of CO2 increases, resulting in negligible changes in global temperature (Robinson et al., 2007).
The carbon cycle
Fig. 7.4 The global carbon cycle for the 1990s, according to the IPCC (2007, fig. 7.3). The diagram shows the main annual fluxes in gigatons of carbon per year (GtC/yr), with pre-industrial ‘natural’ fluxes in black and ‘anthropogenic’ fluxes in red. The numbers are rough estimates; the IPCC says: ‘Gross fluxes generally have uncertainties of more than ±20% but fractional amounts have been retained to achieve overall balance ...’
According to the above figure (which involves a lot of guesswork), the total amount of carbon in the atmosphere is 762 GtC, compared with 38,271 GtC in the oceans and surface sediment, and 2541 GtC in the biosphere. The oceans reportedly outgas 90.6 GtC into the atmosphere and absorb 92.2 GtC every year, whereas the biosphere adds 121.2 GtC to the atmosphere and absorbs 124.6 GtC every year. Annual human emissions (6.4 GtC) are just 3% of the total carbon entering the atmosphere every year. The official line is that about half of these emissions are absorbed by the ocean and biosphere, while the rest (3.2 GtC) accumulates in the atmosphere, accounting for virtually the entire increase in atmospheric CO2.
In other words, the IPCC claims that 100% of natural CO2 emissions are absorbed, compared with only 50% of anthropogenic CO2. This scenario is highly implausible because human and natural CO2 quickly become indistinguishably mixed in the atmosphere and ocean (contrary to the impression given in fig. 7.4). Jeffrey Glassman (2006c) says that until it can be shown that the solubility of CO2 in water depends on the carbon isotope or some other as yet unknown property differing between natural and man-made CO2, the logical conclusion is that the same percentage of all CO2 emissions is reabsorbed and the same percentage contributes to any increase in the atmospheric concentration. Although the IPCC’s figures imply that both oceans and biosphere are net sinks rather than sources of CO2, and are absorbing half of humanity’s emissions, the error margin of ±20% means that this is very far from certain.
For anthropogenic CO2 to accumulate in the atmosphere, it would need to have a long residence time. There is abundant evidence, however, that CO2 has a very short residence time. Estimates of the atmospheric CO2 half-time based on experimental measurements published between 1957 and 1992 range from 2 to 25 years, with a mean of 7.5 years (Robinson et al., 2007; Segalstad, 1997). There is no experimental evidence to support computer-model estimates of a CO2 atmospheric residence time of 50 to 500 years. In fact, using the IPCC formula and data, the average residence time of all CO2 is between about 1.5 and 5 years. A slight change in the ratios of different carbon isotopes (14C:13C:12C) over time is not sufficient to prove the IPCC scenario of a dangerous build-up of human CO2 (Glassman, 2006a).
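The arithmetic behind these figures is easy to check. Below is a short sketch using the stocks and fluxes from fig. 7.4; note that the stock/flux ratio is only one common definition of residence time and may not be exactly the calculation Glassman refers to.

```python
# Check of the figures quoted above, using the IPCC carbon-cycle
# numbers from fig. 7.4 (stocks in GtC, fluxes in GtC/yr).
atmosphere = 762.0        # carbon held in the atmosphere, GtC
ocean_out  = 90.6         # ocean outgassing to the atmosphere, GtC/yr
bio_out    = 121.2        # biosphere emissions to the atmosphere, GtC/yr
human      = 6.4          # anthropogenic emissions, GtC/yr

total_in = ocean_out + bio_out + human
print(human / total_in)   # ~0.029 -> the '3%' human share

# Residence time as atmospheric stock divided by total removal flux
# (ocean uptake 92.2 + biosphere uptake 124.6 GtC/yr):
print(atmosphere / (92.2 + 124.6))   # ~3.5 years, within the 1.5-5 range
```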
During the last ice age, the growth in atmospheric CO2 concentration was an effect of warming, not its cause. Likewise, the warming of the last few hundred years must be contributing to the present increase in atmospheric CO2, though the relative importance of this factor compared with human emissions has yet to be accurately determined. Glassman (2007) argues that the CO2 absorbed by the oceans over most of the globe is fed into the ocean ‘conveyor belt’, or thermohaline circulation – the global circulation pattern driven by density differences related to heat and salinity. As surface waters move from the equator towards the poles, they give up their heat and absorb increasing amounts of CO2. In the cold, polar regions, they sink and then return as deep ocean currents, resurfacing about a millennium later in hot, tropical zones, especially the Eastern Equatorial Pacific, where the dissolved CO2 is outgassed. As Glassman (2006a) puts it: ‘A massive river of CO2 is flowing around the globe, but just not yet through the GCMs.’
Fig. 7.5 This figure from AR4 (fig. 7.8) is based on measurements collected since 1956. Areas of significant CO2 outgassing are shown in red.
Climate sensitivity and nature’s thermostat
Climate sensitivity is defined as the average increase in the earth’s temperature expected from doubling the amount of CO2 in the atmosphere – from 0.028% in the pre-industrial era to 0.056% (expected around 2100). If we ignore feedback mechanisms, and compute how much additional energy in the form of infrared rays emitted by or reflected from the earth’s surface will be absorbed by the CO2, it is generally agreed that climate sensitivity is about 1.1°C (Gregory, 2008; Milloy, 2007). But the inclusion of feedbacks greatly complicates the picture.
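For reference, here is a minimal version of the no-feedback calculation, using the widely cited logarithmic forcing approximation ΔF = 5.35 ln(C/C0) W/m² (Myhre et al., 1998) and a no-feedback (Planck) response of roughly 0.3°C per W/m². Both values are standard approximations and are not taken from the sources cited above.

```python
import math

# Forcing from a doubling of CO2 under the logarithmic approximation:
dF = 5.35 * math.log(2)        # ~3.7 W/m^2

# Planck response with all feedbacks ignored (approximate):
planck_response = 0.30         # degrees C per W/m^2

print(dF * planck_response)    # ~1.1 C, the no-feedback sensitivity
```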
The IPCC (2007) gives a value of 2° to 4.5°C, with a ‘best estimate’ of 3°C (compared to 3.5°C in its 2001 report). This is because it assumes the existence of large positive (amplifying) feedbacks: the increased warmth from CO2 leads to more atmospheric water vapour and clouds, which cause further warming, etc. Nowhere in the entire IPCC report is there a detailed, technical exposition of how the ‘best estimate’ of 3°C greenhouse warming is calculated. Nor is there a peer-reviewed article anywhere in the world that does so. Spencer Weart recently admitted that this is because the climate system is too complex. In other words, there is no detailed theoretical underpinning for AGW alarmism!
IPCC computer models predict that 20th-century temperatures should have increased by 1.6 to 3.74°C, while the observed 20th-century temperature increase was only about 0.6°C. So even if all the warming over the past century were attributed to man-made greenhouse gases, it is only about 1/3 to 1/6 of what models project. The IPCC assumes that the models are correct, but that some unknown process has cancelled most of the warming. An alternative explanation is that models are greatly overestimating the climate’s response to greenhouse gases.
Stephen Schwartz (2007) estimated climate sensitivity (including feedbacks) to be 1.1 ± 0.5°C. This estimate relies on the surface temperature record, which is contaminated by the urban heat island effect; it attributes the 20th-century temperature change to CO2 (modified by aerosols) and assumes that the sun had no effect. So it is probably an overestimate. Other attempts to derive climate sensitivity from observations give even lower values, such as 0.5°C (Roy Spencer, 2008b) and 0.4°C (Sherwood Idso).
The IPCC admits that cloud feedbacks remain ‘the largest source of uncertainty’. As Richard Lindzen (2005) says, ‘the models simply fail to get clouds right. We know this because in official model intercomparisons, all models fail miserably to replicate observed distributions of cloud cover.’ It is important to distinguish between low-altitude and high-altitude clouds: low clouds have a predominantly cooling effect due to their shading of sunlight, whereas thin, high-altitude cirrus clouds have a net warming effect, because they trap more infrared energy than they reflect solar energy. All leading climate models predict that as the atmosphere warms, cirrus clouds should increase. It is this positive feedback that supposedly amplifies any warming caused by man-made greenhouse gases, resulting in a high value for climate sensitivity. There is mounting evidence, however, that cirrus clouds actually decrease, and that clouds and water vapour provide a significant negative feedback, resulting in a very low climate sensitivity.
Climate models assume that relative humidity remains more or less constant, whereas the data show that it declined 21% from 1950 to 2007 at 9 km altitude, allowing more heat to escape to space (Gregory, 2008). Furthermore, Roy Spencer et al. (2007) have published satellite evidence for a previously unknown natural cooling mechanism in the tropics; it was originally hypothesized by Richard Lindzen, who called it the ‘infrared iris’ (Lindzen et al., 2001). Spencer’s team found that
when the tropical atmosphere heats up due to enhanced rainfall activity, the rain systems there produce less cirrus cloudiness, allowing more infrared energy to escape to space. The combination of enhanced solar reflection and infrared cooling by the rain systems was so strong that, if such a mechanism is acting upon the warming tendency from increasing carbon dioxide, it will reduce man-made global warming by the end of this century to a small fraction of a degree. (Spencer, 2008a)
When low cloud cover is observed to decrease with warming, this is usually interpreted to mean that warming is the cause of less cloud cover, resulting in amplified warming, i.e. a positive feedback. But in another study Spencer and his colleagues show that the decrease in low clouds is the cause, rather than the effect, of the warming.
While this might sound like too simple a mistake to make, it is surprisingly difficult to separate cause and effect in the climate system. And it turns out that any such non-feedback process that causes a temperature change will always look like positive feedback. Something as simple as daily random cloud variations can cause long-term temperature variability that looks like positive feedback, even if in reality there is negative feedback operating. (Spencer, 2008a)
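Spencer’s point can be reproduced in a few lines. The sketch below, assuming a simple energy-balance model with made-up parameters (this is not Spencer’s actual analysis), drives temperature with random ‘cloud’ forcing and then diagnoses feedback in the usual way, by regressing the radiative flux anomaly on temperature. The diagnosed feedback comes out much weaker, i.e. more positive-looking, than the value built into the model.

```python
import random

random.seed(1)
LAM = 3.0    # true feedback parameter, W/m^2 per K (strong negative feedback)
CP  = 30.0   # effective heat capacity per time step (arbitrary units)

T, temps, fluxes, forcing = 0.0, [], [], 0.0
for _ in range(5000):
    forcing = 0.9 * forcing + random.gauss(0, 1)   # red-noise cloud forcing
    T += (forcing - LAM * T) / CP                  # energy-balance step
    temps.append(T)
    fluxes.append(LAM * T - forcing)               # outgoing flux anomaly

mt = sum(temps) / len(temps)
mf = sum(fluxes) / len(fluxes)
num = sum((t - mt) * (f - mf) for t, f in zip(temps, fluxes))
den = sum((t - mt) ** 2 for t in temps)
print(num / den)   # well below the true 3.0: forcing masquerades as feedback
```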
Fig. 7.6 Atmospheric air is continuously recycled through precipitation systems, which then directly or indirectly control the water vapour and cloud properties, and thus the earth’s natural greenhouse effect. (weatherquestions.com)
Spencer’s key point is that precipitation systems act as a thermostat, reducing the earth’s greenhouse effect and causing enhanced cooling when temperatures get too high, and causing warming when temperatures get too low.
Al Gore likes to say that mankind puts 70 million tons of carbon dioxide into the atmosphere every day. What he probably doesn’t know is that mother nature puts 24,000 times that amount of our main greenhouse gas – water vapor – into the atmosphere every day, and removes about the same amount every day. While this does not ‘prove’ that global warming is not manmade, it shows that precipitation systems control the Earth’s natural greenhouse effect, over 90% of which is due to water vapor and clouds. ...
The Earth’s total greenhouse effect is not some passive quantity that can be easily modified by mankind adding a little carbon dioxide – it is instead being constantly limited by precipitation systems, which remove water vapor and adjust cloud amounts to keep the total greenhouse effect consistent with the amount of available sunlight. ...
[I]t is the inadequate handling of precipitation systems – specifically, how they adjust atmospheric moisture contents during changes in temperature – that is the reason for climate model predictions of excessive warming from increasing greenhouse gas emissions. (2008a)
Tropospheric hotspot
Most climate models calculate that greenhouse warming should cause the troposphere (the lowest layer of the atmosphere) in the tropics to warm two to three times faster than the surface, especially at an altitude of about 8 to 12 km. The latest balloon (radiosonde) and satellite observations contradict this.
Fig. 7.7 Greenhouse-model-predicted temperature trends versus latitude and altitude. Note the ‘hotspot’ in the tropical mid-troposphere (above about 10 km). (science-sepp.blogspot.com)
Fig. 7.8 Observed temperature trends versus latitude and altitude. Note the absence of the ‘hotspot’ in the tropical mid-troposphere. (science-sepp.blogspot.com)
The above figures are from a 2006 report by the US Climate Change Science Program (CCSP), which highlights this ‘potentially serious inconsistency’ between observations and model predictions. It says that ‘the models that show best agreement with the observations are those that have the lowest (and probably unrealistic) amounts of warming’. In other words, the models that yield the best agreement with the observations are dismissed as ‘probably unrealistic’ because they fail to match modellers’ preconceptions of notable greenhouse warming! Consequently, an enormous effort is now under way to find errors in the observations that would reduce the disagreement with the models.
Based on a study of 22 models, Douglass et al. (2007) found that in atmospheric layers near 5 km altitude, the modelled trend is 2 to 4 times higher than observed, and, above 8 km, the modelled trend goes up while the observed trend goes down. They conclude that the human greenhouse contribution to current warming is not significant when compared with natural causes.
Fig. 7.9 Temperature time trends (degrees per decade) against pressure (altitude) for 22 averaged models (shown in red) and 10 observational data sets (blue and green lines). Only at the surface do the mean of the models and the mean of observations agree, within the uncertainties. (Douglass et al., 2007)
According to AR4, the linear warming trend at the surface has been 0.13 ± 0.03°C per decade for the last 50 years. The tropospheric temperature trend is now 0.05 ± 0.07°C per decade (Spencer et al., 2007). This means that the lower limit of the surface trend (0.10°C) is fractionally less than the upper limit of the tropospheric trend (0.12°). The IPCC (2007) claims that this ‘largely reconciles’ the discrepancy noted in the previous report. But this is very feeble because, as Singer (2008, p. 7) says: ‘If one takes GH [greenhouse] model results seriously, then the GH fingerprint would suggest the true surface trend should be only 30 to 50 percent of the observed balloon/satellite trends in the troposphere.’ Another way out would be for the IPCC to admit that the surface trend is contaminated by urban heat and that the real trend could be distinctly lower than the upper range of the tropospheric trend. But that would seriously undermine the IPCC’s warnings about catastrophic anthropogenic greenhouse warming as it would indicate that its estimate of climate sensitivity is far too high.
Greenhouse warming predicts that the stratosphere (the atmospheric layer above the troposphere) should be cooling. But ozone depletion would cause the same effect. The stratosphere has indeed cooled, but since ozone concentration has declined as well, the cooling cannot be counted as an unambiguous greenhouse fingerprint.
Fig. 7.10 (hadobs.metoffice.com)
Measurement bias
The standard view is that the lifetime of CO2 in the air is decades to centuries and that consequently atmospheric CO2 is well mixed. Measurements of atmospheric CO2 using a nondispersive infrared sensor (NDIR) have been made at Mauna Loa in Hawaii since 1958. They are believed to represent the global, ‘background’ concentration – which is currently put at about 385 ppm. The Mauna Loa data are said to be confirmed by data from a handful of other (mainly marine) measuring sites.
Fig. 7.11 Monthly mean atmospheric carbon dioxide at Mauna Loa Observatory. The black curve represents the seasonally corrected data. (esrl.noaa.gov)
The IPCC (2007) claims that the current CO2 concentration is ‘very likely’ much higher than at any time in at least 650,000 years. But it forgets to mention that the data in the Vostok record are spaced at intervals averaging 1500 years, which means that there is a more than 95% probability that a period of higher concentrations similar to the last 50 years would not have been sampled even if it had occurred. In actual fact, Siple Dome ice-core data do show that CO2 levels reached 390 ppm within the last 140,000 years – but nowadays such findings are automatically dismissed as artefacts (nsidc.org).
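The sampling argument is simple arithmetic, as the sketch below shows.

```python
spike_duration = 50     # years, like the recent CO2 rise
sample_spacing = 1500   # average years between Vostok data points

# Probability that a given 50-year spike falls between sampled points:
print(1 - spike_duration / sample_spacing)   # ~0.967, i.e. more than 95%
```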
Determinations of CO2 have to meet strict conditions to be acceptable as measurements of the ‘background’ concentration, e.g. wind must be from the south and the measured values must not vary significantly for a period of six hours. As a result, only a small proportion of the data are selected, and the rest are rejected. In addition, the raw data are subject to extensive ‘adjustments’ and ‘calibrations’, both within and between monitoring stations, with Mauna Loa having ‘gold standard’ status. Vincent Gray (2008b, p. 27) writes: ‘All this suppression of information seems to have been made to cover up the fact that the concentration of carbon dioxide in the atmosphere is very far from being “well-mixed”.’
That atmospheric CO2 is not as well mixed as is commonly assumed is confirmed by data gathered by the Atmospheric Infrared Sounder (AIRS) instrument onboard NASA’s Aqua satellite. The map below shows the CO2 concentration in the mid-troposphere (8 km altitude) for July 2003. The variations in concentration at that altitude are supposed to be tiny, but the chart shows that they actually range from about 365 to 382 ppm, and it is quite likely that variations at the surface are even higher. The regional patterns of atmospheric sources and sinks are still apparent in mid-troposphere CO2 concentrations despite the high degree of mixing that is supposed to occur. The AIRS team recognizes that this is at variance with mainstream thinking, and have therefore spent many years validating their results. They plan to release all their data soon (wattsupwiththat.wordpress.com).
Fig. 7.12 This image flatly contradicts the mainstream claim that ‘background CO2 levels can be found over all oceans and over land at 1000 m and higher altitudes’ (Engelbeen, 2008). (photojournal.jpl.nasa.gov)
Before 1958, atmospheric CO2 concentration was measured using standard chemical methods. Ernst-Georg Beck (2007) has assembled over 90,000 such measurements carried out by top scientists, including two Nobel Prize winners, between 1812 and 1961, mostly in the northern hemisphere. He chose the most carefully done assays for the graph below. The curve shows large variations, including a major increase roughly coinciding with a rise in ocean temperatures from 1920 to 1940.
Fig. 7.13
The significance and accuracy of these measurements are hotly disputed. IPCC scientists accept only a handful of measurements that are in keeping with the chosen pre-industrial average CO2 concentration of 280 ppm – which is regarded as the ‘background’ level at that time. However, the measured level today is known to diverge significantly – by up to 150 ppm (meteo.lcd.lu) – from the ‘background’ level depending on local sources of CO2, wind direction, wind speed, etc.
Jeffrey Glassman (2006a,c) notes that despite its claims about CO2 being well mixed, the IPCC admits that latitudinal CO2 gradients over the globe are an order of magnitude greater than longitudinal CO2 gradients. Such gradients, he says, ‘must exist because of the highly concentrated outgassing of CO2 from equatorial waters, and the balancing concentrated polar uptake’. This means that CO2 concentration depends on where it is measured. Note that whereas Vostok is located inside one of the polar CO2 sinks, Mauna Loa is located in the plume of the massive CO2 outgassing from the Eastern Equatorial Pacific (see fig. 7.5 above).
Gray (2008b, p. 27) writes:
Beck (2007) showed that the concentration of carbon dioxide in the atmosphere can vary between 280 ppmv and 400 ppmv, or even more, depending on the time, place and wind direction but is highly variable. The few recent measurements over land indicate that the concentration of carbon dioxide in the atmosphere tends to be higher over industrial and urban areas and lower over forests and pastures. In this way and by a process of ‘smoothing’, ‘seasonally adjusting’ and ‘annual averaging’, the illusion is created that carbon dioxide concentration in the atmosphere is a constant, so that the ‘radiative forcing’ effect can be calculated from its increase by use of a non linear empirical equation.
Joel Kauffman (2007) notes that the increases in CO2 level over the past 100 years, as shown in Beck’s graph, appear to follow rather than precede the slight surface warmings. He says that, given that up to 50 times as much CO2 is dissolved in the oceans as there is in the atmosphere, and 20% more CO2 dissolves in water at 15°C than at 20°C, steady concentrations of CO2 in the air before about 1900, as claimed by mainstream climatologists, are unlikely. When ocean temperatures warm, less CO2 can be retained in the upper 3000 metres and it is exhaled into the atmosphere. He notes that the warming of the North Atlantic by 1° or more could explain why the chemical assays registered a large increase in atmospheric CO2, from 295 ppm in 1885 to 440 ppm in 1944. Ocean cooling of about 0.6° from 1940 to 1970 brought CO2 levels down for a while to 325 ppm from 1955-65. The prevailing winds in the North Atlantic, at least from 40-70° north, blow over Europe and were the source of air used in most of the measurements.
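Kauffman’s solubility figure can be roughly cross-checked against tabulated Henry’s-law data for CO2 (NIST lists kH of about 0.035 mol/(kg·atm) at 25°C with a van ’t Hoff temperature coefficient of about 2400 K). The extrapolation below is approximate and assumes these textbook values.

```python
import math

KH_298 = 0.035   # Henry's constant for CO2 at 298.15 K, mol/(kg*atm)
C_VH   = 2400.0  # van 't Hoff temperature coefficient, K

def kH(T):
    # Van 't Hoff extrapolation of Henry's constant to temperature T (K).
    return KH_298 * math.exp(C_VH * (1 / T - 1 / 298.15))

# Solubility at 15 C relative to 20 C:
print(kH(288.15) / kH(293.15))   # ~1.15: colder water holds ~15% more CO2,
                                 # the same order as the ~20% figure quoted
```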
Another method of determining past CO2 levels is to measure the concentration in air bubbles trapped in ice. The figure below shows the measurements obtained from an ice core from Siple Dome in West Antarctica. The age of each point in the core is accurately known from the core’s ‘year rings’. Because this curve did not join up with the Mauna Loa curve, it was arbitrarily shifted to the right by 83 years. The combined curve could be called the CO2 ‘hockey stick’. It is said to be confirmed by data from other ice cores, but given the lack of disclosure, it is not known whether the data were ‘recalibrated’ in the same way as the Siple Dome data.
Fig. 7.14 A fraudulent way of combining Siple and Mauna Loa CO2 records. (Jaworowski, 2004)
Given the 11° swings in temperature during past glacials and interglacials, it seems unlikely that the CO2 concentration really remained uniformly low right up until the industrial revolution. Zbigniew Jaworowski (2004, 2007) argues that determinations of CO2 in polar ice cores are unreliable because air inclusions in ice are not a closed system; more than 20 physico-chemical processes, mostly related to the presence of liquid water, alter the original chemical composition of the air inclusions. In addition, drilling the cores is a brutal and polluting procedure, drastically disturbing the ice samples. The air from ice at Summit, Greenland, deposited during the last 200 years gave CO2 values ranging from 243 to 641 ppm – reflecting artefacts caused by sampling or natural processes in the ice sheet, rather than variations of CO2 concentrations in the atmosphere.
Another problem is data selection. In 1985 pre-industrial CO2 concentrations of 330-500 ppm were reported from a Byrd, Antarctica, ice core, whereas in 1988 only values of 290 ppm or less were reported for the same core, in agreement with the global warming hypothesis. In a 1986 paper in Nature 40% of the CO2 readings, 39% of methane readings, and 40% of the nitrous oxide readings from a Law Dome, Antarctica, core were rejected because they were higher or lower than the ‘politically correct’ values (Kauffman, 2007).
Jaworowski notes that other proxies demonstrate that during the past 10,000 years CO2 levels were generally higher than 300 ppm, fluctuating up to 348 ppm, and contradict the idea that Holocene CO2 concentrations remained at 270 ppm to 280 ppm until the industrial revolution. He concludes:
The CO2 ice core data are artifacts caused by processes in the ice sheets and in the ice cores, and have concentration values about 30 to 50 percent lower than in the original atmosphere. Ice is an improper matrix for such chemical studies, and even the most excellent analytical methods cannot be of help when the matrix and samples are wrong. ...
These studies are beset with a unilateral interpretation and manipulation of data, and with an arbitrary rejection of both the high greenhouse gas readings from the pre-industrial ice, and the low readings from the contemporary samples. (2007, p. 19)
Mitigation madness
The Kyoto Protocol, which was adopted in 1997 and came into force in 2005, requires signatories to reduce total greenhouse gas emissions by an average of 5.2% compared with 1990 levels by 2012. In 1997 the US Senate unanimously passed a resolution, 95-0, stating that they would not ratify Kyoto because of the huge negative impact it would have on the US economy and the fact that it excludes the large and most rapidly developing economies of India and China. China has already surpassed the US as the world’s biggest emitter of carbon. Between 1997 and 2004, CO2 emissions increased by an average of 21.1% in countries that had ratified Kyoto, and by an average of 10.0% in countries that had not. 75% of Kyoto ratifiers had a CO2 growth higher than the US, where the increase was 6.6% (atoc.colorado.edu). Only two of the 15 EU member states that initially signed Kyoto are currently on track to meet their reduction targets.
In 1997, Al Gore’s scientific adviser, Tom Wigley, calculated that if the US were to ratify Kyoto and if all the signatories were to meet their targets, it would prevent only 0.07°C of warming by 2050. Given the flaws in the IPCC’s science, even this tiny figure is likely to be too high. The report of the Nongovernmental International Panel on Climate Change (Singer, 2007, pp. 26-7) states:
Our findings, if sustained, point to natural causes and a moderate warming trend with beneficial effects for humanity and wildlife. This has obvious policy implications: Schemes proposed for controlling CO2 emissions, including the Kyoto Protocol, proposals in the U.S. for federal and state actions, and proposals for a successor international treaty to Kyoto, are unnecessary, would be ineffective if implemented, and would waste resources that can better be applied to genuine societal problems. Even if a substantial part of global warming were due to greenhouse gases – and it is not – any control efforts currently contemplated would give only feeble results. For example, the Kyoto Protocol – even if punctiliously observed by all participating nations – would decrease calculated future temperatures by only 0.02 degrees C by 2050, an undetectable amount.
As Richard Lindzen (2005) observes, both proponents and opponents of AGW agree that Kyoto would have no discernible impact on climate. Yet: ‘This clearly is of no importance to the thousands of negotiators, diplomats, regulators, general purpose bureaucrats and advocates whose livelihood is tied to climate alarmism.’ The estimated cost of Kyoto is about 180 billion dollars a year, which would degrade living standards worldwide while having no measurable effect on global temperature.
The 2006 Stern Review on the Economics of Climate Change broke new ground in sensationalist fear-mongering. Most climate economists believe that the cost of taking no mitigation measures would rise to about 3% of gross domestic product (GDP) a year by 2100. But by making a chain of pessimistic and unrealistic assumptions, Nicholas Stern managed to conclude that doing nothing will wipe 20% off GDP ‘now and forever’, whereas stabilizing CO2 at 550 ppm would cost only 1% of GDP, or $450 billion, a year. What he fails to mention anywhere in his 700-page report is that this would only reduce the projected temperature rise by 2100 from 2.53°C to 2.42°C, i.e. by a mere 0.11°C (and probably far less, given the inflated CO2 sensitivity assumed by the IPCC). As Björn Lomborg (2006) says, ‘One can understand the reluctance of the Stern review to advertise such a puny effect.’ So even in the unlikely event that all the world’s 192 nations would flawlessly implement Stern’s multi-trillion-dollar, century-long policy proposal, it would still be necessary to pay the cost of adapting to climate change. In other words, the mitigation measures would be money down the drain. And for a fraction of the cost – just $75 billion per year – the UN estimates that we could provide clean drinking water, sanitation, basic healthcare and education for all, right now.
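A back-of-envelope check, using only the figures quoted above, makes the mismatch explicit:

```python
# All inputs are the figures cited in the text, not independent estimates.
cost_per_year = 450e9        # dollars per year, said to equal 1% of GDP
years = 2100 - 2007          # roughly a century of mitigation spending
dT_avoided = 2.53 - 2.42     # degrees C of warming avoided by 2100

print(f"Implied global GDP: ${cost_per_year / 0.01 / 1e12:.0f} trillion/year")
print(f"Warming avoided by 2100: {dT_avoided:.2f} C")
cost_per_hundredth = cost_per_year * years / (dT_avoided / 0.01)
print(f"Cost per 0.01 C avoided: ${cost_per_hundredth / 1e12:.1f} trillion")
```

On these numbers, each hundredth of a degree of avoided warming costs several trillion dollars.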
There is every reason to take measures to reduce the release of pollutants (such as particulate matter, carbon monoxide, nitrogen oxides and sulphur oxides) in flue gases, and to develop alternative sources of energy. But calls for draconian measures to reduce worldwide CO2 emissions by 80% or more are utterly unrealistic. It is hypocrisy for privileged westerners to tell poorer countries that they should not try to raise themselves out of poverty by developing their economies in the same way that the West has done. And the idea that slashing greenhouse gas emissions might stop the climate from changing is sheer idiocy.
The current CO2 reduction mania has led to carbon trading. The idea is that if company A can reduce emissions more cheaply than company B, then B can pay A to make reductions for both of them. Since big polluters are able to buy cheap offset credits from abroad, no real reduction in CO2 emissions results, but carbon emission trading companies can make huge profits. Consequently, some warmers support the scheme while others don’t. Critics say that the main effect of the EU’s emissions trading scheme has been to transfer some €30 billion from consumers to the power companies.
Before it went bankrupt in 2001, ENRON had been one of the most intense lobbyists for Kyoto, as it hoped to become a trading firm dealing in carbon emission rights.
These rights are likely to amount to over a trillion dollars, and the commissions will run into many billions. Hedge funds are actively examining the possibilities. It is probably no accident that Gore, himself, is associated with such activities. The sale of indulgences is already in full swing with organizations selling offsets to one’s carbon footprint while sometimes acknowledging that the offsets are irrelevant. (Lindzen, 2008a)
Another idiotic measure to curb ‘global warming’ is to turn corn into ethanol to fuel our vehicles. Filling up one large vehicle fuel tank with 100% ethanol uses enough corn to feed one person for a year. According to an April 2008 World Bank report, biofuels have caused world food prices to increase by 75%. Jean Ziegler, the United Nations Special Rapporteur on the Right to Food, has called for a five-year moratorium on biofuel production, saying that the growing practice of converting food crops into biofuel is ‘a crime against humanity’, which is creating food shortages and price jumps that are forcing millions of poor people to go hungry. The biofuels craze is also fuelling the destruction of ever more habitats and freshwater resources (http://en.wikipedia.org/wiki/Ethanol_fuel; Sherwood & Idso, 2008).
Bob Carter (2006) says: ‘Attempting to “stop climate change” is an extravagant and costly exercise of utter futility. Rational climate policies must be based on adaptation to climate change as it occurs, irrespective of its causation.’ In an Open Letter to UN Secretary-General Ban Ki-Moon in December 2007, 100 prominent scientists from around the world underlined the need to adapt to inevitable, predominantly natural climate change, stating that ‘it is irrational to apply the “precautionary principle” because many scientists recognize that both climatic coolings and warmings are realistic possibilities over the medium-term future’.
Oceans
The oceans cover 71% of the earth’s surface and play a critical role in storing and transporting heat and carbon. There is still much to be learned about horizontal and vertical ocean circulation patterns and oceans’ temperature variability and oscillations.
The Pacific is dominated by the El Niño/Southern Oscillation (ENSO) cycle and is modulated by the Pacific Decadal Oscillation (PDO), a 20- to 30-year warming and cooling of the north-central Pacific Ocean. North Pacific sea surface temperatures (SSTs) show warm phases during 1925-1946 and 1977-2005. North Atlantic SSTs show a 66-year variation, known as the Atlantic Multidecadal Oscillation (AMO), with warm phases at roughly 1860-1880 and 1930-1960, and cool phases during 1905-1925 and 1970-1990. The cycle returned to a warm phase in the mid-1990s (McKitrick, 2007).
Fig. 8.1 The alternating warm and cool phases of the PDO coincided with major temperature shifts in the 20th century – warming after 1905, cooling after 1946, warming again after 1977. It shifted back to a cool phase in 2008. (jisao.washington.edu/pdo)
The warm phase of the PDO leads to more El Niños and general warmth, and the cold phase to more La Niñas and widespread coolness. The warm mode of the AMO also produces general warmth, especially across northern hemisphere land masses. When the two effects are combined, it is possible to explain much of the temperature variance of the past 110 years. Major volcanic activity can enhance or offset these tendencies at times. In 1998 temperature skyrocketed after a strong El Niño, but this did not happen after the strong El Niño of 1982-83, because the El Chichón volcano (which erupted in 1982) belched enough aerosols into the atmosphere to cancel the heating effect.
Fig. 8.2 A negative trend in the ENSO (more and stronger La Niñas) from 1945 to 1975 and a positive trend from 1975 (more and stronger El Niños) correlate better with global temperature changes than the official CO2 record. (cdc.noaa.gov)
Fig. 8.3 Correlation between US temperature record and ocean cycles (PDO + AMO). The correlation strength (R2) is 0.85 (a perfect correlation would give a value of 1). (D’Aleo, 2007)
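For readers unfamiliar with the statistic, R2 is simply the square of the Pearson correlation coefficient between the two series. A minimal sketch of the calculation (using made-up placeholder data, not D’Aleo’s actual series):

```python
import numpy as np

# Hypothetical stand-ins for an ocean-cycle index (PDO + AMO) and a
# temperature record; the real calculation uses the observed series.
rng = np.random.default_rng(0)
ocean_index = rng.normal(size=100)
temperature = 0.5 * ocean_index + 0.2 * rng.normal(size=100)

r = np.corrcoef(ocean_index, temperature)[0, 1]   # Pearson correlation
print(f"R^2 = {r**2:.2f}")   # 1.0 would indicate a perfect linear fit
```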
Over the past 100 years ocean climate cycles such as the PDO, the North Atlantic Oscillation, ENSO, and the North Pacific Oscillation synchronized several times. Tsonis et al. (2007) found that the interconnections between these cycles can explain all global temperature tendency changes in the 20th century. This includes the temperature oscillation of 65-70 years that can be found in reliable temperature records from the USA, China, and many individual long-term stations (see fig. 3.8). Vincent Gray writes: ‘We are currently at the top of the cycle, and today’s warmish temperatures were last seen in 1950 and 1885. Cooler temperatures occurred in 1915 and 1980, with the next cool year being in 2040.’
Compo & Sardeshmukh (2008) argue that most of the recent worldwide land warming has occurred in response to a worldwide warming of the oceans rather than as a direct response to increasing greenhouse gases (GHGs) over land. Ocean warming, they say, has increased atmospheric humidity, altered the atmospheric vertical motion and associated cloud fields, and perturbed radiative fluxes at the continental surface. Their study is based on atmospheric model simulations for the past 50 years, using observed ocean temperature changes, but ignoring greenhouse gases and aerosols. At the very least, this study shows how adaptable and flexible climate models can be. The authors point out that the models used by the IPCC substantially underestimate natural oceanic variability. They note that ‘the degree to which the oceans themselves have recently warmed due to increased GHG, other anthropogenic, natural solar and volcanic forcings, or internal multi-decadal climate variations is a matter of active investigation’. Roger Pielke Sr. argues that this study shows the need to adopt a regional perspective of climate variability and change, instead of the IPCC’s approach of focusing on changes in global average surface temperature.
The IPCC regards ocean oscillations as natural ‘internal oscillations in the climate system’, and does not attribute them to human causes such as greenhouse gases. They cannot be forecast by conventional climate models. Theodor Landscheidt (1999, 2001, 2003) argued that El Niño, the Pacific Decadal Oscillation, and the North Atlantic Oscillation are probably linked to cycles in solar activity, associated with the sun’s motion around the centre-of-mass of the solar system. A number of other researchers have also linked ocean oscillations with the sunspot cycle (Mackey, 2007). The oscillations have also been linked to geotectonic events, which may in turn be subject to solar influences (see below).
Geotectonics
The emission of dust and gases during volcanic eruptions usually produces regional or global cooling. Worldwide cooling occurred near the volcanically active periods of the early 1980s and 1990s. There have been no major volcanic eruptions in recent years, which makes the decline in temperatures since 2001 all the more noteworthy.
Fig. 8.4
Fig. 8.5 The eruption of Mt Pinatubo in 1991.
The release of heat from within the earth during underwater geotectonic events – e.g. volcanic or seismic activity, or lava flows from ocean-floor fracture zones – is another important factor influencing climate. Surveys of the ocean floor are revealing a large number of active undersea volcanoes. Unlike land volcanoes, sea volcanoes are not explosive because of the containing pressure of the deep ocean; the hot magma simply oozes out and flows over the sea bed. Submarine volcanoes are familiar around Hawaii and Iceland, producing visible steam clouds and invisible CO2. Global surveys of sea surface temperatures show hotspots of varying intensity along the lines of major geotectonic shear zones and lines of seismicity. Extensive ocean hotspots with temperatures elevated by over 1°C and lasting for months have been mapped. Exploration of the Arctic Ocean is revealing major areas of magmatic and hydrothermal activity that were previously unknown (Kauffman, 2007; Endersbee, 2006).
Fig. 8.6 Diagram showing the true proportions of the earth and its atmosphere (Endersbee, 2006). The crust, the atmosphere and the oceans are very thin compared with the overall size of the earth. The thickness of the continental crust relative to the earth’s radius is only one third of the relative thickness of the shell of a hen’s egg. The oceanic crust is generally believed to be even thinner.
The ionosphere is a relatively thin layer of diffuse plasma (ionized, electrically charged gas) above the stratosphere, and the Moho – the interface between the crystalline crust and the earth’s mantle – may also be a very thin layer of plasma as it shows no resistance to seismic shear waves, and behaves like a gas. Lance Endersbee (2006) argues that these two layers of plasma can be compared to the electrically charged plates of a giant capacitor, dominating the electromagnetic behaviour of the atmosphere and crust: ‘Lightning and thunderstorms can be understood as internal discharges within this giant capacitor. Thus it is reasonable to postulate that the variable electromagnetic energy flows from the Sun have a powerful influence on the geotectonic, geothermal and geomagnetic behaviour of the Earth, and thereby, influencing climate.’ An alternative view is that the sun emits etheric radiation that triggers the local production of photons (electromagnetic radiation) after interacting with charged particles in the atmosphere and crust (see The global warming scare, section 8).
With the exception of volcanic eruptions on land, the IPCC gives no serious consideration to how the earth’s internal dynamics may affect atmospheric phenomena. Daniel Walker was the first to note that increased tectonic activity (seismicity, magma upwelling and hydrothermal venting) along portions of the East Pacific Rise raised water temperatures and preceded (by up to six months) each El Niño event studied since 1964 (Bhat, 2006). Bruce Leybourne et al. (2006) show that submarine earthquake swarms at 10-33 km depths are correlated with subsequent sea surface temperature anomalies, probably due to increased heat emission from seafloor volcanic extrusions and/or associated hydrothermal venting. They speculate that volcanism is triggered by electrical bursts from the core-mantle boundary induced by solar coupling to the internal geodynamo. Bottom ocean currents tend to redistribute this heat in unpredictable patterns, causing general regional warming or, in some cases, thermal plumes such as the Pacific El Niño temperature signatures.
Leybourne et al. argue that higher seismic activity and episodic hydrothermal plumes could explain the entire ocean basin warming since 1955, affecting weather patterns, hurricane formation, tornadoes, and ocean/atmospheric circulation. Thermal plumes above the Peru Trench off the coast of South America in June 1997 signalled the beginning of the 1997/98 El Niño, and are correlated with earthquakes in this area seven months earlier. Adriatic, Aegean, and North African (Algerian) earthquake events appear to be associated with the anomalous European heat wave in the summer of 2003.
Leybourne & Adams (2001) draw attention to the fact that the three major global oscillation systems – the Southern Oscillation (SO), North Atlantic Oscillation (NAO), and North Pacific Oscillation (NPO) – are controlled by high-pressure/low-pressure cells underlain by magma-upwelling or -downwelling vortex structures. The see-saw of sea-level pressure between HP and LP cells determines large-scale regional weather patterns within the area of influence of each of these oscillation systems.
The El Niño/Southern Oscillation (ENSO) has its HP cell over the Easter Island and Juan Fernandez ‘microplates’ on the East Pacific Rise, and its LP cell over the Banda Sea (Indonesia) in the Western Pacific. The Easter and Juan Fernandez vortices are 300-400 km in diameter, and the Banda vortex, the largest on the earth’s surface, is 1100-1200 km. The former are magma-downwelling vortices and rotate anti-clockwise, while the Banda vortex is a magma-upwelling vortex. The NPO, which controls North American weather patterns, has its high-pressure cell over Lake Baikal – the deepest lake along a continental rift zone – and the island arcs and ocean trench systems in the northwestern Pacific Ocean, and its low-pressure cell over the Mid-Pacific Mountains and Hawaiian volcanic islands. The NAO, which controls European and Siberian weather patterns, has its high-pressure cell located near the Azores on the Mid-Atlantic Ridge while its low-pressure cell lies over Iceland – one of the major global upwelling vortices.
Fig. 8.7 1. CPM = Central Pacific Megatrend; 2. EPR = East Pacific Rise; 3. EM = Easter Microplate; 4. JFM = Juan Fernandez Microplate; 5. BM = Banda Microplate; 6. AAD = Australian-Antarctic Discordance; 7. LB = Lake Baikal. (geostreamconsulting.com)
The alternative geological model known as surge tectonics is based on abundant geological and geophysical evidence that all the major features of the earth’s surface (rifts, foldbelts, metamorphic belts, etc.) are underlain by shallow (less than 80 km) magma chambers and channels, known as surge channels or geostreams. It explains that vortex structures are formed when surge channels converge from different directions. Similar dynamic processes take place in the atmosphere (e.g. hurricanes) and oceans (e.g. the intertropical convergence zone). Some of the tectonic vortices have been likened to hurricanes embedded in the earth’s crust.
Leybourne et al. propose that seismic events and related shape-shifts in tectonic vortices cause microgravity fluctuations and consequent changes in atmospheric pressure. The main vortices are linked by surge channels; for instance, the Central Pacific Megatrend connects the East Pacific Rise vortices across the Pacific basin to the Banda Sea vortex. The interlinking surge channels may provide a conduit for the transfer of microgravity oscillations between these regions.
Exploration of the geotectonic dimension of climate dynamics is still in its infancy, but appears to offer great promise.
Solar influences
The radiation emitted by the sun varies in cycles of different lengths. The best-known is the approx. 11-year sunspot cycle. The sun’s magnetic field reverses polarity during each cycle, and therefore returns to the same state every 22 years (known as the Hale cycle). Longer-term cycles include the approx. 87-year Gleissberg cycle and the approx. 210-year Suess cycle. Some scientists contend that all known cyclical solar phenomena are linked to the sun’s irregular orbit around the centre of mass (barycentre) of the solar system, caused by the pull of the various planets (Mackey, 2007).
The number of sunspots waxes and wanes during each 11-year cycle; the higher the number of spots, the greater the sun’s irradiance. Sunspots are compact, dark features where radiation is locally depleted, but they tend to be accompanied by faculae, which are extended bright features where radiation is locally enhanced. During the peak of the 11-year solar cycle, the expansion of faculae outweighs the darkening from increased sunspot activity. From sunspot minimum to sunspot maximum there is a net increase in total solar irradiance of about 0.1%, with the value varying for different frequencies of radiation.
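For scale, the ~0.1% solar-cycle swing can be converted into a globally averaged forcing using the standard geometric factor of 1/4 (a sphere intercepts one quarter of the flux falling on its cross-section) and an assumed planetary albedo of about 0.3; this is a generic textbook estimate, not a figure from any source cited here:

```python
# Converting the ~0.1% TSI swing into a globally averaged radiative forcing.
TSI = 1361.0                  # W/m^2, approximate total solar irradiance
delta_tsi = 0.001 * TSI       # the ~0.1% min-to-max swing
albedo = 0.3                  # assumed planetary albedo
forcing = delta_tsi * (1 - albedo) / 4
print(f"Solar-cycle forcing swing: ~{forcing:.2f} W/m^2")   # ~0.24 W/m^2
```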
Fig. 9.1 In this image from an active solar period in March 2001, colours are shifted to highlight the contrast between sunspots (black and dark red) and the faculae that surround them (bright yellow). (NASA/Goddard Space Flight Center)
The Maunder Minimum (1645-1715) was a period when there were few or no sunspots for 70 years, and it coincided with the coldest period of the Little Ice Age. Other periods with low sunspot activity, such as the Spörer Minimum (1420-1530) and the Dalton Minimum (1795-1825), also coincided with cooler phases.
Fig. 9.2 (globalwarmingart.com)
There is some debate about the accuracy of historical sunspot observations (e.g. by Svalgaard). In addition to sunspots, proxies for solar activity, especially for more remote periods, include cosmogenic isotopes (or nuclides) such as carbon-14 (14C) and beryllium-10 (10Be), found in tree rings and in ice cores/ocean sediments respectively. These isotopes track solar activity because they are formed in the atmosphere by high-energy cosmic rays, whose influx is partly modulated by the solar wind and solar magnetic activity: when the sun is more active, its magnetic field becomes stronger and partially shields the earth from cosmic rays. Like other proxies, they are not perfect.
Robinson et al. (2007) present a graph (see below) showing seven independent records: solar activity; northern hemisphere, Arctic, global, and US annual surface air temperatures; sea level; and glacier length (sea level and glacier length are corrected for their 20-year lag of atmospheric temperature). They argue that these records all qualitatively confirm each other by exhibiting three successive trends since about 1910 – warmer, cooler, and warmer. While solar irradiance correlates with temperature, hydrocarbon use clearly does not.
Fig. 9.3 (Robinson et al., 2007)
In the above graph Robinson et al. use the Hoyt & Schatten reconstruction of total solar irradiance (TSI). It should be noted that more recent TSI reconstructions show far less variability.
Fig. 9.4 Reconstructions of TSI by Hoyt & Schatten, Lean et al., Wang et al., Svalgaard, Preminger, and Krivova. Modern satellite records (PMOD, ACRIM, TIM, DIARAD), covering the last 30 years, also disagree slightly due to calibration problems. (leif.org)
The oldest reconstruction in the above graph is that by Hoyt & Schatten (1993), which shows the greatest variability, and a clear upward trend since 1700. Leif Svalgaard, by contrast, believes there is a ‘floor’ in TSI, below which it has not fallen for several centuries, and therefore no overall upward trend in the ‘background’ level of solar irradiance on this time scale. Nevertheless, the amplitude of each 11-year cycle can vary significantly, so that there are longer-term trends in the solar energy being input into the earth’s climate system (fig. 9.5). This means that even with the Svalgaard TSI reconstruction, some of the warming of the late 20th century could still be the result of a longer period of persistence of higher-than-average TSI, leading to heat being stored in the oceans and gradually warming up the earth.
Fig. 9.5 The red and blue curves are different ways of ‘smoothing’ the 11-year solar cycles, revealing longer-term increases and decreases in the solar energy received by the earth. (climateaudit.org/?p=3218, #130)
Although the IPCC admits that the contribution of solar forcing to climate change is far from ‘settled’, that ‘uncertainties remain in the representation of solar radiation in climate models’, and that some aspects of solar behaviour are the subject of ‘ongoing debate’ among scientists, the Fourth Assessment Report (AR4) still revised downward by about a factor of three the low positive forcing effect assigned to solar activity in the Third Assessment Report. This is because the IPCC focuses exclusively on total solar irradiance and adopts the TSI reconstruction by Wang et al. (2005), which shows less long-term variability than previous reconstructions. It fails to mention peer-reviewed studies that attribute 50% or more of recent warming to changes in solar activity.
For instance, the IPCC completely ignores the work of Nicola Scafetta and Bruce West, who write (2008): ‘We maintain that the variations in Earth’s temperature are not noise, but contain substantial information about the source of variability, in particular the variations in TSI.’ They present the following figure, which shows excellent agreement between the 11-year solar cycles and the cycles observed in the smoothed average global temperature data. They say that the global cooling since 2002
seems to have been induced by decreased solar activity from the 2001 maximum to the 2007 minimum as depicted in two distinct TSI reconstructions. Thus the average global temperature record presents secular patterns of 22- and 11-year cycles and a short timescale fluctuation signature ..., both of which appear to be induced by solar dynamics. The same patterns are poorly reproduced by present-day GCMs and are dismissively interpreted as internal variability (noise) of climate. The nonequilibrium thermodynamic models we used suggest that the Sun is influencing climate significantly more than the IPCC report claims. If climate is as sensitive to solar changes as the above phenomenological findings suggest, the current anthropogenic contribution to global warming is significantly overestimated. We estimate that the Sun could account for as much as 69% of the increase in Earth’s average temperature, depending on the TSI reconstruction used. Furthermore, if the Sun does cool off, as some solar forecasts predict will happen over the next few decades, that cooling could stabilize Earth’s climate and avoid the catastrophic consequences predicted in the IPCC report. (p. 51)
Fig. 9.6 The green curve is the difference between the measured global surface temperature and the 1890-1910 average. The black curve is the smoothed temperature trend. The blue and red curves represent the solar signature, using two alternative TSI satellite composites for the post-1978 period. (Scafetta & West, 2008)
Many researchers have found correlations between solar activity and climate going back thousands of years.
Fig. 9.7 Usoskin et al. (2007a) contend that 12 out of 14 climate shifts since 5000 BC occurred during grand minima of solar activity, giving an 86% hit rate. Blue and red areas denote grand sunspot minima and maxima respectively.
Bond et al. (2001) demonstrated that dramatic swings in the climatological state of the North Atlantic over the past 12,000 years coincided with swings in solar activity; they identified nine 1500-year cycles during this period. Some researchers have proposed that the approx. 87- and 210-year solar cycles could combine to form a cycle of about 1470 years (7 × 210 = 1470, while 17 × 87 = 1479), thereby explaining the approx. 1500-year Bond and Dansgaard-Oeschger cycles that show up in ice cores, tree rings, and fossil pollen from around the world (Singer & Avery, 2007).
Of course correlation does not prove causation. Moreover, since solar irradiance has varied by only about 0.1% during recent sunspot cycles, it seems there would have to be mechanisms at work amplifying the small changes in the sun’s irradiance. Several mechanisms for indirect solar influences have been proposed, including the impact of increased solar ultraviolet (UV) energy on the stratosphere, and the effect of the solar wind and magnetic field on the influx of cosmic rays. The climatic significance of these mechanisms is still very uncertain, and the IPCC therefore disregards them, and does not even add a larger range of uncertainty to its estimate of solar forcing. Note that the assumed huge warming effect of doubling the CO2 concentration is based on unproven speculation about positive water-vapour feedbacks, yet the IPCC still maintains that its understanding of CO2 forcing is ‘high’ (see fig. 6.1 above).
Although UV makes up only a small fraction of the sun’s total energy spectrum, it varies much more than total irradiance, and can peak at up to 100 times its minimum level. Increases in the sun’s UV output tend to create more stratospheric ozone than normal, producing further warming. Some models indicate that this high-altitude effect might be felt near the surface, by influencing the way the underlying atmosphere distributes its own heat.
Ongoing research is being conducted into the influence on cloud cover (and therefore the earth’s albedo) of cosmic rays, which are modulated by the solar wind and, on longer time scales, by the geomagnetic field and the earth’s galactic environment. It is proposed that cosmic rays either influence the production of cloud condensation nuclei, or influence the global electrical circuit in the atmosphere and, in turn, ice nucleation and other cloud microphysical processes (Svensmark & Calder, 2007; Christiansen et al., 2007; Kirkby, 2007). While TSI varies by about 0.1% over a solar cycle, the figures are about 5% for UV and 3-20% for galactic cosmic rays.
Fig. 9.8 Henrik Svensmark’s proposed link between solar wind, cosmic rays, and cloud cover. When cosmic rays hit the atmosphere, they form ions, which give rise (through mechanisms not yet fully understood) to cloud condensation nuclei, and thereby spur the growth of clouds. An active sun partially shields the earth from the normal barrage of cosmic rays, leading to fewer low-altitude clouds. Changes in cosmic-ray influx do not affect high clouds, as there are always plenty of cosmic rays at high altitudes, but fewer cosmic rays penetrate to low altitudes, so increases or decreases due to changes in solar magnetism have more noticeable effects there.
Fig. 9.9 The correlation between cosmic ray flux (red) at low latitudes and low-altitude (<3 km) cloud cover (blue) using the International Satellite Cloud Climatology Project dataset. (Svensmark, 2007)
Low-level clouds cover over a quarter of the earth’s surface and exert a strong cooling effect on the surface. A 2% change in low clouds during a solar cycle will change the heat reaching the earth’s surface by 1.2 watts per square metre (W/m2). For comparison, the IPCC estimates the total radiative forcing during the 20th century at 1.4 W/m2. Since the Little Ice Age, there has been a substantial increase in solar magnetic activity and a corresponding 30% reduction in cosmic ray intensity, with about half of this decrease occurring in the last century.
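Inverting the two numbers in this paragraph shows the net low-cloud radiative effect they imply, and how the cloud swing compares with the IPCC’s 20th-century figure:

```python
# Simple arithmetic on the figures quoted above; no new data involved.
delta_cloud = 0.02      # the quoted 2% change in low-cloud amount
delta_flux = 1.2        # W/m^2, the quoted change in heat reaching the surface
ipcc_forcing = 1.4      # W/m^2, the IPCC 20th-century figure quoted above

print(f"Implied net low-cloud effect: {delta_flux / delta_cloud:.0f} W/m^2")
print(f"Cloud swing vs IPCC figure: {delta_flux / ipcc_forcing:.0%}")
```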
Fig. 9.10 Values of carbon-14 (a proxy for solar activity) correlate extremely well with oxygen-18 (a climate proxy). The data are from a stalagmite in Oman (Neff, 2001), covering a time interval of over 3000 years; the lower panel shows a particularly well-resolved time interval from 8350 to 7900 BP. This detailed correlation could be explained through the modulation of galactic cosmic rays by changes in the solar wind and solar magnetic activity. (Kirkby, 2007)
Lockwood & Fröhlich published an article in 2007 that received widespread publicity, as it claimed to prove that the historical link between the sun and climate ended about 20 years ago and that Svensmark’s cosmoclimatology theory was false. In their reply, which was largely ignored, Svensmark & Friis-Christensen (2007) showed that the sun’s influence remains obvious over the past 20 years in variations of tropospheric air temperature and ocean sub-surface water temperature. They say that if, as L&F had argued, the climate system’s response to the solar cycle is not so apparent in the global surface temperature, ‘one can only wonder about the quality of the surface temperature record’. In defence of their own theory, they included a figure (see below) showing a good (inverse) correlation between cosmic-ray flux and tropospheric air temperatures. They point out that L&F had erased the solar cycle from various datasets by using running means of 9 to 13 years, thereby creating the illusion that temperatures are still rising rapidly early in the 21st century.
Fig 9.11 Correlation between cosmic rays (red) and temperature (blue), after removing El Niño, the North Atlantic Oscillation, volcanic aerosols, and a linear trend of about 0.14°C/decade (implying that the cosmic-ray/cloud theory does not account for this trend). (Svensmark & Friis-Christensen, 2007)
Even critics of cosmoclimatology (e.g. Sloan & Wolfendale, 2008, Kristjánsson et al., 2008) have had to admit that there is some correlation between clouds and cosmic rays, but they complain that the correlation is not as strong as it ‘ought’ to be or that it is purely coincidental (for a response, see Shaviv, 2006, 2008; Moriarty, 2008). Cosmic rays are of course only one potential factor affecting cloud formation.
In an assessment of cosmoclimatology, Usoskin et al. (2007b) wrote: ‘A link between low clouds and CR [cosmic rays] appears statistically significant on the inter-annual time scale since 1984 in limited geographical regions, the largest being North Atlantic + Europe and South Atlantic. We note that many reconstructions of the past climate are based on European data, where the CR-cloud relation is the most pronounced. Extension of this relation to the global scale may be misleading.’ They noted that there is evidence that solar variability affects climate changes on centennial to millennial time scales, but ‘it is hard to distinguish the role of cosmic rays, and the exact mechanisms need to be resolved’. The SKY experiment at the Danish National Space Center in 2005/06 confirmed that enhanced ionization notably facilitates the production of small ion clusters, the building blocks of cloud condensation nuclei. Further experiments are planned at CERN as part of the CLOUD project, to see whether cosmic rays can lead to the production of cloud condensation nuclei themselves.
The solar wind is a highly conducting stream of plasma flowing out from the sun at speeds ranging from about 400 to 750 km/s. Brian Tinsley (2008) notes that as well as impeding the flow of high-energy cosmic-ray particles coming in from the galaxy (as emphasized by cosmoclimatology), it also energizes high-energy electrons in the earth’s radiation belts that precipitate into the atmosphere. He writes:
both of these effects change the column conductivity between the ionosphere and the earth’s surface. The solar wind also changes the potential difference between the ionosphere and the earth in the polar cap regions. All three effects alter the ionosphere-earth current density (Jz) that is part of the global atmospheric electric circuit, and which flows down from the ionosphere to the surface and through clouds. ...
About half of the global warming over the past century can be accounted for by changes in the sun and the solar wind, and there are well documented correlations of climate during past millennia with cosmic ray flux changes. These can be understood in terms of electrical interactions between cloud droplets and aerosol particles responding to solar wind-induced changes in atmospheric ionization and in the latitude distribution of Jz ... The implication is that global changes in Jz produce global changes in suitable types of clouds, and in some cases changes in precipitation. (utdallas.edu/physics/faculty/tinsley.html)
On the basis of carbon-14 and beryllium-10 data, Sami Solanki et al. (2004) contended that it was necessary to go back over 8000 years to find a time when the sun was, on average, as active as in the last 60 years. Using the same proxies, Muscheler et al. (2007) argued that recent solar activity is high but not exceptional with respect to the last 1000 years. In the past decade or so the sun has started to decline in activity. Such behaviour is usually followed by cooler temperatures on earth. However, Solanki, who believes that half of the warming since 1970 is due to solar activity, thinks that temperature may fall by only about 0.2°C (Stuart, 2006) – though even this is about ten times higher than the cooling that would come from implementing the Kyoto Protocol. Some scientists, especially solar physicists, think the sun’s reduced activity may cause a far greater global temperature drop of up to 1.5°C (see Mackey, 2007). Habibullo Abdussamatov believes cooling will begin slowly around 2012-15, with the coldest phase being reached around the middle of the century.
Clearly the science is unsettled, and what happens in the next few decades will greatly contribute to our understanding of the solar-climate link.
Alarming forecasts of a warmer or cooler climate are nothing new. In 1922 the US Weather Bureau reported that ‘The Arctic ocean is warming up, icebergs are growing scarcer and in some places the seals are finding the water too hot’. It spoke of ‘a radical change in climate conditions and hitherto unheard-of temperatures in the Arctic zone’. A series of scares followed, as the following list of headlines from the New York Times shows:
18 Sept. 1924: ‘MacMillan reports signs of new ice age’
27 March 1933: ‘America in longest warm spell since 1776’
21 May 1974: ‘Scientists ponder why world’s climate is changing: a major cooling widely considered to be inevitable’. (In 1975 Newsweek proposed ‘melting the arctic cap by covering it with black soot or diverting arctic rivers’!)
27 Dec. 2005: ‘Past hot times hold few reasons to relax about new warming’
The climate is of course always changing. And any major shifts in climate are likely to benefit some people (and other species) and adversely affect others. However, coverage of climate change issues by environmental and scientific journalists and commentators is overwhelmingly one-sided and sensationalist. For instance, an August 2008 headline in the UK Guardian read: ‘On a planet 4C hotter, all we can prepare for is extinction’. Researchers seem to be falling over themselves trying to discover some new negative consequence of global warming, which nowadays is automatically assumed to be chiefly man-made. John Brignell has compiled a list of 598 things that have been blamed on global warming. Recent additions include more kidney stones and the possibility of the earth exploding! Roy Spencer (2008, p. 12) says: ‘Apparently, global warming theory is so powerful and flexible that it can explain everything, from failed crops, to flooded homes, to shrinking polar bear populations, and, as recently reported, even shrinking polar bear testicles.’
In 1984 James Hansen said that we may have as little as 10 years to drastically cut carbon emissions and avoid a catastrophe. In 1997 he said that the ‘tipping point’ is drawing ever closer and we (still) have about 10 years to make a difference! Some scaremongers currently prefer to talk about the threat of ‘climate change’ rather than ‘global warming’ (since worldwide temperatures in recent years have not been very ‘cooperative’), and the latest catch-all term is ‘climate chaos’. Al Gore’s constantly repeated buzz word is ‘the climate crisis’, which, he says, ‘threatens the survival of our civilization and the habitability of the earth’. It is certainly true that the ‘climate crisis’ is man-made – with Gore and Hansen playing leading roles!
Alarmist journalists, researchers and environmentalists may not have much talent for sober scientific analysis, but at least they provide an endless source of entertainment. Roy Spencer writes:
Movie stars, politicians-turned-movie stars, and famous musicians don’t notice the hypocrisy of calling for humanity to use less toilet paper while they fly their private jets from city to city. And what could be more ironic than the early 2007 trek to the North Pole to raise awareness of global warming that had to be called off because of cold weather? (2008c, pp. viii-ix)
James Lovelock, who seems to have forgotten his earlier notion that Gaia (the earth) is a self-regulating ‘superorganism’, said in 2006: ‘Before this century is over, billions of us will die [sure bet!], and the few breeding pairs of people that survive will be in the Arctic where the climate remains tolerable.’
Sir David King, former UK Chief Scientist, has said that in order to cure global warming, women who find supercar drivers ‘sexy’ should divert their affections to men who live more environmentally-friendly lives. In a humorous comment, Steve McIntyre displayed a photo showing a former Miss Scandinavia with Finnish racing car driver Kimi Raikkonen, and added: ‘This would presumably represent the type of liaison that must be sacrificed if we are to cure global warming.’
An August 2008 headline on the BBC website read: ‘Eat kangaroo to “save the planet” ’. It reported that switching from beef to kangaroo burgers could significantly reduce greenhouse gas emissions. This is because sheep and cows emit a huge amount of methane (a potent greenhouse gas) through belching and flatulence, whereas kangaroos are virtually fart-free! In 2003 New Zealand’s government dropped a controversial and much-ridiculed plan to tax farmers for their livestock’s flatulence as part of its emission-reduction efforts.
The poles
Alarmists like to focus on the Arctic, which contains less than 3% of the world’s ice, rather than Antarctica, which contains 90%. The following graphs show why.
Fig. 10.1 Temperature anomalies in the Arctic. (co2science.org)
Fig. 10.2 Temperature anomalies in the Antarctic. (co2science.org)
Widespread publicity was given to the fact that in the summer of 2007 ice cover in the Arctic shrank from 13 million square kilometres to just 3 million – the lowest value since satellite imaging of the ice-cap began, which was just 30 years ago. But the media remained silent about the fact that in 2007, the Antarctic ice field reached its largest extent for the past 30 years (arctic.atmos.uiuc.edu/cryosphere).
Little publicity was given to NASA’s statement that the melt in the Arctic was largely due to an ongoing reversal in Arctic Ocean circulation triggered by atmospheric circulation changes that vary on decade-long time scales. Another factor was the strong winds that blew all summer up the Bering Strait, across the pole and out into the warm waters of the North Atlantic. By the winter, the record north polar ice melt had been replaced by record ice formation. In late October 2008 sea-ice extent was 30% greater than at the same time the previous year, and was virtually identical to the situation in 1979, when satellite records began (SPPI, 2008).
In early July 2008, Dutch TV News (NOS) dispatched a reporter to the north pole as experts had forecast that the north pole would be ice-free; some reports said this had not happened before in living memory, but the presence of a certain amount of open water at the north pole is actually not unusual. The Dutch reporter ended up getting no closer than 650 km to the pole because the ice was too thick. Fortunately, she was able to snap some photos of cuddly, man-eating polar bears instead. Some scientists are now predicting that the north pole will be ice-free by the summer of 2013.
James Hansen believes that an ice-free north pole will signal that we have reached a ‘tipping point’, ushering in terrible catastrophes. It’s worth bearing in mind that the earth appears to have had permanent polar ice caps for only about 5% of its history. There was no ice in the Arctic or in Greenland 850,000 years ago. And the Arctic seems to have been free of ice in each of the past four interglacial periods, when temperatures were up to 5°C warmer than today. It was probably also free of ice in the Medieval, Roman, and Bronze Age warm periods.
Alarmists would have us believe that Greenland is in rapid meltdown, and that this threatens to raise sea levels by several metres. However, many recent studies indicate that the Greenland ice sheet is thickening slightly inland and thinning quickly near the coasts, with a small overall mass gain. Temperatures in Greenland were higher in the 1930s and 40s than today.
Fig. 10.3 Greenland ice thickness changes determined by NASA satellites. (D’Aleo, 2008)
Fig. 10.4 Temperatures in the Arctic (62.5º to 90ºN). (Polyakov et al., 2003)
Antarctica as a whole has not warmed since 1975. It is mainly the Antarctic Peninsula – the tiny part of Antarctica pointing towards South America – that has undergone recent warming, and this just happens to be the part that receives most publicity. But the warming seems to be linked in part to volcanic activity. For instance, the Larsen Ice Shelves A and B sit astride a chain of volcanic vent islands known as the Seal Nunataks. Wingham et al. (2006) conclude from an analysis of satellite altimeter data covering 72% of the Antarctic ice sheet that the ice sheet has been growing in thickness by 5 mm/year (1992 to 2003). The IPCC’s Fourth Assessment Report (2007) stated that Antarctica is unlikely to lose any ice mass during the remainder of this century.
Fig. 10.5 NASA map of Antarctic temperature trends.
Sea level
Fig. 10.6 Sea level has risen by about 130 metres since the last glacial maximum. (globalwarmingart.com)
In the 1980s, the US Environmental Protection Agency predicted that oceans would rise by several metres by 2100. By the 1990s, the IPCC was expecting a 67 cm rise. In 2001, it expected sea levels to rise by 48.5 cm, and in its 2007 report it reduced this further to an estimated 43 cm. Al Gore, on the other hand, believes sea levels could rise by 20 ft (6 m) by the end of the century!
The IPCC says that global average sea level rose at an average rate of 1.8 mm per year from 1961 to 2003, and by 3.1 mm per year from 1993 to 2003, but is unsure whether this represents a long-term trend or is due to decadal variability. Using data from the 9 tide-gauge stations with the highest quality data, far away from regions with high rates of vertical land movement, Holgate (2007) calculated that the mean rate of global sea-level rise was ‘larger in the early part of the last century (2.03 ± 0.35 mm/year 1904-1953), in comparison with the latter part (1.45 ± 0.34 mm/year 1954-2003)’. Wöppelmann et al. (2007) used Global Positioning System (GPS) data to take account of the effect of vertical land motion on tide-gauge data, and found that the rate of world sea-level rise is 1.31 ± 0.30 mm/year.
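The trend figures quoted here are, in essence, least-squares slopes through station records. A minimal sketch of the calculation (with a synthetic series, since the real analyses also average multiple stations and, as in Wöppelmann et al., correct for vertical land motion):

```python
import numpy as np

# Synthetic annual tide-gauge means, for illustration only.
years = np.arange(1904, 1954)
rng = np.random.default_rng(1)
sea_level_mm = 2.0 * (years - years[0]) + rng.normal(scale=15.0, size=years.size)

# Ordinary least-squares line through the annual means.
slope_mm_per_year, intercept = np.polyfit(years, sea_level_mm, 1)
print(f"Fitted trend: {slope_mm_per_year:.2f} mm/year")
```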
Fig. 10.7 Decadal rates of sea-level change over the past century. The greatest global sea-level rise occurred around 1980, well before the greenhouse scare got off the ground. (Holgate, 2007)
Nils-Axel Mörner (2005, 2007) says that satellite altimetry showed annual variability but no overall trend in sea level from 1992 to 2002, but that in IPCC publications the straight line changed in 2003 into a rapid upward trend of 2.3 mm per year. This is because they entered a ‘correction factor’ to bring it into line with one of the six tide gauges in Hong Kong – the one, says Mörner, that should not be used, as it is located in a region that is known to be subsiding. He estimates that sea level might rise by 5 cm by 2100, with an uncertainty of ±10 cm. Zwally et al. (2005) found a combined Greenland/Antarctica ice-loss sea-level rise of 0.05 mm per year during 1992-2002. At that rate, it would take a full century to raise sea level by just 5 mm.
Polar bears and extinctions
Al Gore used a photo of a polar bear and her cub, supposedly stranded on melting ice off the coast of Alaska, as emotional propaganda to support his claim that man-made global warming is doing great harm. His commentary was: ‘Their habitat is melting ... beautiful animals, literally being forced off the planet. They’re in trouble, got nowhere else to go.’ The photo was taken in August 2004, a time of year when the fringe of the Arctic ice cap normally melts. The bears were not in fact stranded; they were not far from the coast, and can swim distances of over 100 miles.
Fig. 10.8 (prisonplanet.com)
Polar bears have endured warmer periods than are forecast by the IPCC. They survived the last interglacial (130,000-110,000 years ago), when there was virtually no ice at the north pole and average Arctic temperature was 3 to 5°C higher than today. They also survived the Holocene Climate Optimum and the Medieval Warm Period.
In 2006, biologist Mitchell Taylor dismissed ‘media-assisted hysteria’ about polar bears going extinct within 25 years. Eleven of the 13 polar bear populations in Canada, which is home to two-thirds of the world’s polar bears, are stable or increasing in numbers. In the Baffin Bay region between North America and Greenland, the polar bear population has declined – but so have temperatures. In the Beaufort Sea region the temperature has increased, yet so has the polar bear population. The main threat to polar bears is not ‘global warming’ but hunting. In 1940, there were just 5000 polar bears worldwide, but now that hunting is controlled, there are 25,000.
The IPCC claims that 20 to 30% of plant and animal species will be threatened with extinction in this century due to global warming. Yet during the past 2.5 million years almost none of the millions of species on earth went extinct, despite temperature changes as great and rapid as those projected by climate models today. The main exceptions were about 20 species of large mammals (megafauna such as sabre-toothed tigers and woolly mammoths), which went extinct at the end of the last ice age. Fossil evidence and recent ecological and genetic research suggest that current projections of extinction rates are overestimates, and that species are more adaptable than models presume (Botkin et al., 2007). Idso & Idso (2007) cite abundant research showing that ‘warming – especially when accompanied by an increase in the atmosphere’s CO2 concentration – typically results in an expansion of the ranges of terrestrial plants and animals, leading to increases in biodiversity almost everywhere on the planet’.
Extreme weather
Human-caused global warming is increasingly being blamed for every hurricane, tornado, snowstorm, heat wave, tsunami, earthquake, flood and drought that occurs. In April 2007, the UK Met Office forecast that Britain was about to experience its hottest, driest, most drought-prone summer on record because of ‘global warming’. Just six weeks later, reports started coming in of the coldest, wettest, most flood-prone summer since records began – and of course ‘global warming’ was given the blame!
After a record 2005 hurricane season, which included the flooding of New Orleans by hurricane Katrina, singer (and hurricane expert?) Barbra Streisand warned that we were in a global warming emergency and that storms would become more frequent and more intense. But to most people’s surprise, 2006 saw a below-average hurricane season, so the media quickly turned their attention to droughts in California and Australia instead. In February 2008 the National Oceanic and Atmospheric Administration (NOAA) issued a press release denying any link between hurricanes and global warming, saying: ‘There is nothing in the U.S. hurricane damage record that indicates global warming has caused a significant increase in destruction along our coasts.’ But this wasn’t considered newsworthy.
Research suggests that a warmer climate will lead to increased vertical wind shear, which will impede the development of tropical cyclones (hurricanes). It will also reduce temperature gradients between the equator and the poles, resulting in fewer or less intense storms in mid-latitudes too.
It is often claimed that global warming will lead to increased mortality due to heat stress. However, in temperate regions, human mortality tends to show clear maxima in the winter and secondary maxima in the summer. It is estimated that in Europe over 200,000 people die every year from excess heat, while 1.5 million people die from excess cold (Lomborg, 2007).
To put things in perspective, the following chart shows aggregate global mortality and mortality rates between 1900 and 2006 for the following weather-related extreme events: droughts, extreme temperatures (both extreme heat and extreme cold), floods, slides, waves and surges, wild fires and windstorms of different types (e.g. hurricanes, cyclones, tornados, typhoons). It indicates that both the number of deaths and the death rates have declined at least since the 1920s. From the 1920s to the 2000-06 period, the death rate per million dropped by 99%, from 241.8 to 3.5.
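The 99% figure follows directly from the two endpoint rates:

```python
# Checking the quoted decline from the two endpoint death rates.
rate_1920s = 241.8   # deaths per million per year, 1920s
rate_2000s = 3.5     # deaths per million per year, 2000-06
decline = (rate_1920s - rate_2000s) / rate_1920s
print(f"Decline in death rate: {decline:.1%}")   # ~98.6%, i.e. about 99%
```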
Fig. 10.9 Global death and death rates due to extreme events, 1900-2006. (csccc.info)
Heretics and the warmist faith
Those who question the dogma that human CO2 emissions are driving climate change are frequently referred to as ‘global warming deniers’ or ‘climate change deniers’ or even ‘climate deniers’. Such terms are designed to sow confusion. No one denies that much of the earth has generally been warming for the past few centuries, or that the climate changes, or that the climate does indeed exist. And no one denies that humans influence the climate, though there is plenty of room for debate about the extent of the human impact and the main mechanisms involved.
With so many reputations, careers and livelihoods riding on the continued reign of the man-made global warming doctrine and the associated scaremongering, an extremely intolerant attitude has developed toward those who challenge mainstream climate science. The official line is that the science is ‘settled’, and that dissenting scientists should be ignored, discredited or dismissed, and debating with them should be avoided. Name-calling is rampant, and ‘climate deniers’ have even been compared to holocaust deniers. Politicians are expected to jump on the AGW bandwagon – Canadian environmentalist David Suzuki recently called for politicians sceptical of a man-made ‘climate crisis’ to be thrown into jail for committing a ‘criminal act’!
The belief that humans are causing catastrophic global warming and that this presents a serious threat to the environment and our own survival has overtones of the biblical paradigm of sin, guilt, and the need for redemption. Human interference in the climate system is seen as evil, and even schoolchildren are now being taught to repent for their sins against nature. But we can supposedly redeem ourselves by buying carbon offsets or making donations to environmental organizations.
One tactic used by mainstream climatologists is to dismiss any article not published in ‘eminent’, ‘high-quality’ scientific journals such as Nature and Science. It is of course virtually impossible to publish non-conformist articles in these journals. Moreover, Nature demonstrated its ‘eminence’ and ‘quality’ by rubber-stamping and publishing Mann’s thoroughly flawed hockey-stick paper and by refusing to publish articles criticizing it. Nowadays, simple disagreement with the IPCC’s conclusions has become a common basis for rejecting papers for publication in professional journals.
Attempts are frequently made to depict climate realists as being in the pay of Big Oil or as servants of other nefarious interests. But if receiving a grant can corrupt scientists, what are we to make of the immense sums of government money being poured into research supporting the anthropogenic global warming doctrine? Senator James Inhofe (2007) writes:
The imbalance of money between the promoters of climate fears and skeptics is so large that one 2007 U.S. Department of Agriculture grant of $20 million to study how ‘farm odors’ contribute to global warming exceeded ALL of the money the groups skeptical of climate fears allegedly received from ExxonMobil over the past two decades. For years, well over $100 million a year has been flowing from the US federal government to environmental lobby groups.
The best approach is to focus on the scientific arguments and to assume that scientists are sincere in their views, regardless of where their funding comes from. But it is worth noting that big businesses, including leading oil companies and business organizations of which they are members, are overwhelmingly committed to the orthodox view of global warming and see donations to environmental organizations and causes as a way to generate good publicity. For instance, the HSBC bank recently allocated $100 million to ‘tackling climate change’ – the biggest charitable donation ever from a British business. As David Henderson (2007, p. 212) says, ‘any private sponsors of potentially non-conforming studies ... could expect to be the subject of hostile activist campaigns as well as official disapproval: the pressures to conform are strong and unrelenting’.
The division between climate alarmists and climate realists is far from being a straightforward division between the political ‘left’ and the political ‘right’. Most right-wing politicians have now jumped on the global warming bandwagon, while some left-wingers have started voicing dissent. For instance, French geophysicist Claude Allègre, a former Socialist Party leader, recently switched from being a believer in dangerous man-made warming to being a sceptic. He now says that the cause of climate change is unknown, ridicules the ‘prophets of doom’, and believes that global warming hysteria is motivated by money. A number of other scientists and academics who are also progressive environmentalists believe that the green movement has been ‘co-opted’ and ‘hijacked’ by those promoting climate alarmism (see Inhofe, 2007).
A report issued by the Nongovernmental International Panel on Climate Change argues that the IPCC’s claim that the warming in recent decades is ‘very likely’ caused by the human emissions of greenhouse gases is false, and that the policies adopted or proposed to ‘fight global warming’ are misguided and worthless. It concludes:
It is regrettable that the public debate over climate change, fueled by the errors and exaggerations contained in the reports of the IPCC, has strayed so far from scientific truth. It is an embarrassment to science that hype has replaced reason in the global debate over so important an issue. (Singer, 2008, p. 27)
Beck, Ernst-Georg (2007). 180 years of atmospheric CO2 gas analysis by chemical methods. Energy & Environment, v. 18, no. 2
Bhat, M.I. (2006). Bushy-Blairy about global warming. New Concepts in Global Tectonics Newsletter, no. 41, pp. 58-71, www.ncgt.org
Bond, G.C., et al. (2001). Persistent solar influence on North Atlantic climate during the Holocene. Science, v. 294, pp. 2130-36
Botkin, D.B., et al. (2007). Forecasting the effects of global warming on biodiversity. BioScience, v. 57, pp. 227-36
Briggs, William M. (2008). How to look at RSS satellite-derived temperature data. http://wmbriggs.com
Carter, Robert M. (2006). Human-caused global warming: McCarthyism, intimidation, press bias, censorship, policy-advice corruption and propaganda. www.lavoisier.com.au
Christiansen, F., J.D. Haigh, & H. Lundstedt (2007). Influence of solar activity cycles on earth’s climate: Final report. Danish National Space Center, www.spacecenter.dk
Clark, Stuart (2006). Global warming: will the sun come to our rescue? New Scientist, 18 Sept., http://environment.newscientist.com
Compo, G.P., & P.D. Sardeshmukh (2008). Oceanic influences on recent continental warming. Climate Dynamics, doi:10.1007/s00382-008-0448-9
D’Aleo, Joseph (2007). US temperatures and climate factors since 1895. http://icecap.us
D’Aleo, Joseph (2008). Multidecadal ocean cycles and Greenland and the Arctic. www.intellicast.com/Community/Content.aspx?ref=rss&a=128
De Laat, A.T.J., & A.N. Maurellis (2006). Evidence for influence of anthropogenic surface processes on lower tropospheric and surface temperature trends. International Journal of Climatology, v. 26, pp. 897-913
Douglass, D.H., J.R. Christy, B.D. Pearson, & S.F. Singer (2007). A comparison of tropical temperature trends with model predictions. International Journal of Climatology, doi:10.1002/joc.1651
Easterbrook, Don J. (2008). Geologic evidence of the causes of global warming and cooling – are we heading for global catastrophe? www.ac.wwu.edu/~dbunny/research/global/geoev.pdf
Endersbee, Lance (2007). Climate change is nothing new! New Concepts in Global Tectonics Newsletter, no. 42, pp. 3-17, www.ncgt.org
Engelbeen, Ferdinand (2008). CO2 measurements. www.ferdinand-engelbeen.be/klimaat/co2_measurements.html
Essex, C., R.R. McKitrick, & B. Andresen (2007). Does a global temperature exist? Journal of Non-Equilibrium Thermodynamics, v. 32, pp. 1-27
Everett, John T. (2008). Global climate change facts: The truth, the consensus, and the skeptics. www.climatechangefacts.info
Frank, Patrick (2007). A climate of belief. Skeptic, v. 14, www.skeptic.com
Glassman, Jeffrey A. (2006a). The acquittal of carbon dioxide. www.rocketscientistsjournal.com
Glassman, Jeffrey A. (2006b). Gavin Schmidt’s response to The Acquittal of CO2 should sound the death knell for AGW. www.rocketscientistsjournal.com
Glassman, Jeffrey A. (2006c). On why CO2 is known not to have accumulated in the atmosphere & what is happening with CO2 in the modern era. www.rocketscientistsjournal.com
Glassman, Jeffrey A. (2007). Solar wind has twice the global warming effect of El Niño. www.rocketscientistsjournal.com
Goddard, Steven (2008a). Is the earth getting warmer, or cooler? www.theregister.co.uk
Goddard, Steven (2008b). Painting by numbers: NASA’s peculiar thermometer. www.theregister.co.uk
Goddard, Steven (2008c). Are the ice caps melting? www.theregister.co.uk
Gray, Vincent (2002). The Greenhouse Delusion: A critique of ‘Climate Change 2001’. Brentwood, UK: Multi-Science Publishing Co.
Gray, Vincent (2008a). Unsound science by the IPCC. http://nzclimatescience.net
Gray, Vincent (2008b). The global warming scam. www.climatescience.org.nz
Green, K.C., & J.S. Armstrong (2007). Global warming: forecasts by scientists versus scientific forecasts. Energy & Environment, v. 18, pp. 997-1021
Green, K.P., J. Schwartz, & S.F. Hayward (2007). Politics posing as science: a preliminary assessment of the IPCC’s latest climate change report. American Enterprise Institute for Public Policy Research, www.aei.org
Gregory, Ken (2008). Climate change science. www.friendsofscience.org
Harris, Tom, & John McLean (2008). The UN climate change numbers hoax. www.onlineopinion.com.au
Henderson, David (2007). Governments and climate change issues: The case for rethinking. World Economics, v. 7, pp. 183-228
Holgate, S.J. (2007). On the decadal rates of sea level change during the twentieth century. Geophysical Research Letters, v. 34, L01602, doi:10.1029/2006GL028492
Holland, David (2007). Bias and concealment in the IPCC process: the ‘hockey-stick’ affair and its implications. Energy & Environment, v. 18, pp. 951-83
Idso, Sherwood B., & Craig D. Idso (2007). Carbon dioxide and global change: Separating scientific fact from personal opinion. Center for the Study of Carbon Dioxide and Global Change, www.co2science.org
Inhofe, James (2007). 2007: Global warming alarmism reaches a ‘tipping point’. http://epw.senate.gov
IPCC (2007). Fourth Assessment Report: The physical science basis. www.ipcc.ch/ipccreports/ar4-wg1.htm
Jaworowski, Zbigniew (2004). Climate change: incorrect information on pre-industrial CO2. www.warwickhughes.com/icecore
Jaworowski, Zbigniew (2007). CO2: The greatest scientific scandal of our time. 21st Century Science and Technology, Spring/Summer, pp. 14-28
Kauffman, Joel M. (2007). Climate change reëxamined. Journal of Scientific Exploration, v. 21, no. 4, pp. 723-49
Kiehl, J.T. (2007). Twentieth century climate model response and climate sensitivity. Geophysical Research Letters, v. 34, L22710, doi:10.1029/2007GL031383
Kirkby, J. (2007). Cosmic rays and climate. Surveys in Geophysics, v. 28, pp. 333-75, doi: 10.1007/s10712-008-9030-6
Koutsoyiannis, D., A. Efstratiadis, N. Mamassis, & A. Christofides (2008). On the credibility of climate predictions. Hydrological Sciences Journal, v. 53, doi: 10.1623/hysj.53.4.671
Landscheidt, Theodor (1999). Solar activity controls El Niño and La Niña. www.john-daly.com
Landscheidt, Theodor (2001). Trends in Pacific decadal oscillation subjected to solar forcing. www.john-daly.com
Landscheidt, Theodor (2003). Decadal-scale variations in El Niño intensity. www.john-daly.com
Lewis, Jr., Marlo (2007). Al Gore’s science fiction: A skeptic’s guide to An Inconvenient Truth. Washington, DC: Competitive Enterprise Institute, http://cei.org
Leybourne, B.A., & M.B. Adams (2001). El Niño tectonic modulation in the Pacific Basin. MTS Oceans 2001 conference, Honolulu, Hawaii, www.geostreamconsulting.com
Leybourne, B., B. Orr, A. Haas, G.P. Gregori, C. Smoot, & I. Bhat (2006). Tectonic forcing function of climate – revisited: four elements of coupled climate evidence of an electromagnetic driver for global warming. New Concepts in Global Tectonics Newsletter, no. 40, pp. 27-34, www.ncgt.org
Lindzen, Richard S. (2005). Is there a basis for global warming alarm? www.independent.org
Lindzen, Richard S. (2008a). The fluid envelope: A case against climate alarmism. www.ecoworld.com
Lindzen, Richard S. (2008b). Climate science: Is it currently designed to answer questions? http://arxiv.org/ftp/arxiv/papers/0809/0809.3762.pdf
Lindzen, R.S., M.-D. Chou, & A.Y. Hou (2001). Does the earth have an adaptive infrared iris? Bulletin of the American Meteorological Society, v. 82, pp. 417-32
Loehle, C. (2007). A 2000-year global temperature reconstruction based on non-treering proxies. Energy & Environment, v. 18, pp. 1049-58
Loehle, C. & J.H. McCulloch (2008). Correction to: A 2000-year global temperature reconstruction based on non-tree ring proxies. Energy & Environment, v. 19, pp. 93-100
Lomborg, Bjørn (2006). Stern review. Wall Street Journal, 2 Nov.
Lomborg, Bjørn (2007). Perspective on climate change. www.copenhagenconsensus.com
Mackay, A.W. (2007). The paleoclimatology of Lake Baikal: a diatom synthesis and prospectus. Earth-Science Reviews, v. 82, pp. 181-215
Mackey, R. (2007). Rhodes Fairbridge and the idea that the solar system regulates the earth’s climate. Journal of Coastal Research, SI 50, pp. 955-68
McIntyre, Steve (2007). The Wegman and North Reports for newbies. climateaudit.org
McIntyre, Steve (2008a). How do we ‘know’ that 1998 was the warmest year of the millennium? climateaudit.org
McIntyre, Steve (2008b). Auditing temperature reconstructions of the past 1000 years. climateaudit.org
McKitrick, R. (coordinator) (2007a). Independent Summary for Policymakers: IPCC Fourth Assessment Report. Vancouver, BC: The Fraser Institute, www.fraserinstitute.org
McKitrick, R. (2007b). Response to David Henderson’s ‘Governments and climate change issues: The flawed consensus’. http://ross.mckitrick.googlepages.com
McKitrick, R.R., & P.J. Michaels (2004). A test of corrections for extraneous signals in gridded surface temperature data. Climate Research, v. 26, pp. 159-73; Erratum, Climate Research, v. 27, pp. 265-68
McKitrick, R.R., & P.J. Michaels (2007a). Quantifying the influence of anthropogenic surface processes and inhomogeneities on gridded global climate data. Journal of Geophysical Research, v. 112, D24S09, doi:10.1029/2007JD008465
McKitrick, R.R., & P.J. Michaels (2007b). Background discussion on: Quantifying the influence of anthropogenic surface processes and inhomogeneities on gridded global climate data. www.uoguelph.ca/~rmckitri/research/jgr07/M&M.JGR07-background.pdf
McLean, John (2007). Fallacies about global warming. Science and Public Policy Institute, http://scienceandpublicpolicy.org
McLean, John (2008). Prejudiced authors, prejudiced findings: Did the UN bias its attribution of ‘global warming’ to humankind? Science and Public Policy Institute, http://scienceandpublicpolicy.org
Meyer, Warren (2007). A skeptical layman’s guide to anthropogenic global warming. www.lulu.com/items/volume_62/971000/971066/3/print/global_warming_paper_1.0c.pdf
Milloy, Steven J. (2007). This ‘global warming’ thing ... what Watt is what? http://junkscience.com
Monckton, Christopher (2007a). Greenhouse warming? What greenhouse warming? Science and Public Policy Institute, http://scienceandpublicpolicy.org
Monckton, Christopher (2007b). 35 inconvenient truths: The errors in Al Gore’s movie. Science and Public Policy Institute, http://scienceandpublicpolicy.org
Monckton, Christopher (2007c). Trenberth’s twenty-three scientific errors in one short article. Science and Public Policy Institute, http://scienceandpublicpolicy.org
Moriarty, Tom (2008). Applying Monte Carlo simulation to Sloan’s and Wolfendale’s use of Forbush decrease data. 5 September, http://climatesanity.wordpress.com
Mörner, Nils-Axel (2005). Facts and fiction about sea level change. www.publications.parliament.uk
Mörner, Nils-Axel (2007). Claim that sea level is rising is a total fraud (Interview). Executive Intelligence Review, v. 33
Muscheler, R., F. Joos, J. Beer, S.A. Müller, M. Vonmoos, & I. Snowball (2007). Solar activity during the last 1000 yr inferred from radionuclide records. Quaternary Science Reviews, v. 26, pp. 82-97, doi:10.1016/j.quascirev.2006.07.012
Neff, U., et al. (2001). Strong coherence between solar variability and the monsoon in Oman between 9 and 6 kyr ago. Nature, v. 411, pp. 290-3
Peden, James A. (2008). The great global warming hoax? www.middlebury.net/op-ed/global-warming-01.html
Petit, J.R., et al. (1999). Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica. Nature, v. 399, pp. 429-36
Pielke, Sr., R.A., et al. (2007). Unresolved issues with the assessment of multidecadal global land surface temperature trends. Journal of Geophysical Research, v. 112, D24S08, doi:10.1029/2006JD008229
Pielke, Jr., R. (2008). Verification of IPCC temperature forecasts 1990, 1994, 2001, and 2007. http://sciencepolicy.colorado.edu/prometheus
Polyakov, I.V., et al. (2003). Variability and trends of air temperature and pressure in the maritime Arctic, 1875-2000. Journal of Climate, v. 16, pp. 2067-77
Robinson, A.B., N.E. Robinson, & W. Soon (2007). Environmental effects of increased atmospheric carbon dioxide. Journal of American Physicians and Surgeons, v. 12, pp. 79-90
Scafetta, N., & B.J. West (2007). Phenomenological reconstructions of the solar signature in the northern hemisphere surface temperature records since 1600. Journal of Geophysical Research, v. 112, D24S03, doi:10.1029/2007JD008437
Scafetta, N., & B.J. West (2008). Is climate sensitive to solar variability? Physics Today, March
Schwartz, S.E. (2007). Heat capacity, time constant, and sensitivity of earth’s climate system. Journal of Geophysical Research, v. 112, D24S05, doi:10.1029/2007JD008746
Segalstad, Tom V. (1997). Carbon cycle modelling and the residence time of natural and anthropogenic atmospheric CO2: on the construction of the ‘Greenhouse Effect Global Warming’ dogma. www.co2web.info
Shaviv, Nir (2006). Cosmic rays and climate. www.sciencebits.com
Shaviv, Nir (2007). The fine art of fitting elephants. www.sciencebits.com
Shaviv, Nir (2008). Is the causal link between cosmic rays and cloud cover really dead? www.sciencebits.com
Sherwood, Keith, & Craig Idso (2008). Biofuels – summary. www.co2science.org
Singer, S. Fred (2008). Nature, not human activity, rules the climate: Summary for policymakers of the report of the Nongovernmental International Panel on Climate Change. Chicago, IL: The Heartland Institute, www.heartland.org
Singer, S. Fred, & Dennis T. Avery (2007). Unstoppable Global Warming: Every 1,500 years. Lanham, MD: Rowman & Littlefield
Solanki, S.K., I.G. Usoskin, B. Kromer, M. Schüssler, & J. Beer (2004). Unusual activity of the sun during recent decades compared to the previous 11,000 years. Nature, v. 431, pp. 1084-7
Spencer, R.W., W.D. Braswell, J.R. Christy, & J. Hnilo (2007). Cloud and radiation budget changes associated with tropical intraseasonal oscillations. Geophysical Research Letters, v. 34, L15707, doi:10.1029/2007GL029698
Spencer, Roy W. (2008a). Global warming and nature’s thermostat. www.weatherquestions.com
Spencer, Roy W. (2008b). Global warming: Has the climate sensitivity holy grail been found? www.weatherquestions.com
Spencer, Roy W. (2008c). Climate Confusion: How global warming hysteria leads to bad science, pandering politicians and misguided policies that hurt the poor. New York: Encounter Books
SPPI (2008). ScareWatch: ‘Arctic icecap is melting, even in winter’. Science and Public Policy Institute, http://scienceandpublicpolicy.org
Svensmark, H. (2007). Cosmoclimatology: a new theory emerges. Astronomy & Geophysics, v. 48, pp. 1.18-1.24
Svensmark, H., & E. Friis-Christensen (2007). Reply to Lockwood and Fröhlich – the persistent role of the sun in climate forcing. Danish National Space Center, Scientific report 3/2007, www.spacecenter.dk
Svensmark, Henrik, & Nigel Calder (2007). The Chilling Stars: A new theory of climate change. Cambridge: Icon Books
Tinsley, B.A. (2008). The global atmospheric electric circuit and its effects on cloud microphysics. Reports on Progress in Physics, v. 71, doi:10.1088/0034-4885/71/6/066801
Tsonis, A.A., K. Swanson, & S. Kravtsov (2007). A new dynamical mechanism for major climate shifts. Geophysical Research Letters, v. 34, L13705, doi:10.1029/2007GL030288
Usoskin, I.G., S. Solanki, & G.A. Kovaltsov (2007a). Grand minima and maxima of solar activity: new observational constraints. Astronomy and Astrophysics, v. 471, pp. 301-9
Usoskin, I.G., & G.A. Kovaltsov (2007b). Cosmic rays and climate of the earth: possible connection. Comptes Rendus Geoscience, doi:10.1016/j.crte.2007.11.001
Veizer, J. (2005). Celestial climate driver: a perspective from four billion years of the carbon cycle. Geoscience Canada, v. 32, pp. 13-28
Wingham, D.J., A. Shepherd, A. Muir, & G.J. Marshall (2006). Mass balance of the Antarctic ice sheet. Philosophical Transactions of the Royal Society A, v. 364, pp. 1627-35
Wöppelmann, G., B. Martin Miguez, M.-N. Bouin, & Z. Altamimi (2007). Geocentric sea-level trend estimates from GPS analyses at relevant tide gauges world-wide. Global and Planetary Change, v. 57, pp. 396-406
Zwally, H.J., et al. (2005). Mass changes of the Greenland and Antarctic ice sheets and shelves and contributions to sea-level rise: 1992-2002. Journal of Glaciology, v. 51, pp. 509-27
Websites
wattsupwiththat.com
judithcurry.com (Climate Etc.)
climate4you.com
clintel.org
notrickszone.com
drroyspencer.com
icecap.us
climateaudit.org
CO2science.org
friendsofscience.org
rogerpielkejr.com
tallbloke.wordpress.com
edberry.com
thegwpf.org
netzerowatch.com
realclimatescience.com
rossmckitrick.com
lavoisier.com.au
sepp.org
bobtisdale.blogspot.com
pielkeclimatesci.wordpress.com
appinsys.com/globalwarming
climategate.nl