Climate Modeling and What You Haven't Heard
Everyone knows that computer models "are not reality." Everyone knows it, yet people either forget it or act as if they do not understand what it means. A recent article on climate models says it so well: "By their very nature, models cannot capture all the factors involved in a natural system, and those that they do capture are often incompletely understood" (Nature June 14, 2012, p. 183). This makes climate models "impossible to truly verify or validate."
So where does this situation leave us when it comes to discussions of climate change?
A Short History of Climate Prediction
While computer models have been critically important to discussions of climate change, the first warning long predated them. In the late 1890s the Swedish chemist and physicist Svante A. Arrhenius calculated that if the carbon dioxide concentration in the atmosphere were to double, the global average temperature would increase dramatically. Since carbon dioxide levels fluctuate wildly with location, time of day, and time of year, nobody knew what the long-term trend for carbon dioxide was. Even then, however, there was concern that the burning of fossil fuels (coal, oil, and gas) was adding a lot of carbon dioxide to the atmosphere.
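In modern notation, the relationship Arrhenius was reaching for is usually written logarithmically. As a point of reference, using the commonly cited present-day coefficient (not Arrhenius's own figure, and offered here only to make the doubling claim concrete):

ΔF ≈ 5.35 × ln(C/C₀) W/m², and ΔT ≈ λ × ΔF

where C₀ is the reference carbon dioxide concentration, C the new one, ΔF the extra heat-trapping "forcing," and λ a climate sensitivity factor. A doubling (C/C₀ = 2) gives ΔF of roughly 3.7 W/m²; the contentious part, then as now, is the value of λ that converts this forcing into a temperature rise.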
Then, in conjunction with the International Geophysical Year (1957-1958), equipment for continuous monitoring of carbon dioxide was set up in Hawaii and Antarctica. The program ran for twenty years, and by the end it was apparent that carbon dioxide content in the atmosphere was increasing at an ever faster rate. Thus by the late 1970s scientists knew that the trend for carbon dioxide concentration in the air was upward.
But what would they do with this information? An article in Scientific American in August 1982 (p. 43) summed up the situation:
In sum, the carbon dioxide question is obscured by many unknowns and uncertainties. Indeed, about the only facts available are the actual measurements of atmospheric carbon dioxide, particularly the two decade series at Mauna Loa and the South Pole, and some fairly reliable data from the UN on annual consumption of fossil fuels in industrialized countries (emphasis added).
This single known fact, the upward trend in carbon dioxide concentration, was enough to prompt prominent climatologist Stephen Schneider to declare as early as 1973, even before the end of the atmospheric study, that the situation was worrying enough to justify limiting the growth of the human population. His reasoning was that more people would mean more fossil fuels burned for energy, which could raise the global temperature because carbon dioxide absorbs heat. His reasons for taking this position were entirely theoretical: "not because there was any clear sign in the climate that greenhouse warming was coming, but just from the physics of the problem" (World Book Science Year 1990, p. 142).
Computer Models Take Over
It was computer models, not any evidence from nature, that provided scientists with a way to interpret the carbon dioxide data. Climatologists and mathematicians constructed equations that they thought adequately represented how the processes in nature work together to determine climate. They then plugged these equations into the most powerful computers available, manipulating various inputs to the system, such as carbon dioxide levels, to see what the outcome would be under the conditions in question.
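To give a feel for what "plugging equations into a computer" means at its very simplest, here is a toy zero-dimensional energy-balance model in Python. It is a minimal sketch for illustration only: real climate models solve millions of coupled equations over a gridded planet, and the greenhouse term below is a made-up parameterization, not how any production model represents carbon dioxide.

```python
import math

# Toy zero-dimensional energy-balance model (illustration only).
# At equilibrium, absorbed sunlight equals outgoing infrared:
#   S0 * (1 - ALBEDO) / 4 = emissivity * SIGMA * T**4
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # fraction of sunlight reflected back to space

def equilibrium_temp(co2_ppm, co2_ref=280.0, emissivity_ref=0.612):
    """Surface temperature (K) at radiative balance.

    The emissivity adjustment is a made-up stand-in for the greenhouse
    effect: more CO2 means an atmosphere less transparent to infrared,
    hence a lower effective emissivity and a warmer surface. The 0.031
    coefficient is tuned so that doubling CO2 warms this toy model by
    roughly 2.5 K; it is illustrative, not measured.
    """
    emissivity = emissivity_ref - 0.031 * math.log(co2_ppm / co2_ref)
    absorbed = S0 * (1.0 - ALBEDO) / 4.0
    return (absorbed / (emissivity * SIGMA)) ** 0.25

for ppm in (280, 400, 560):
    print(f"CO2 = {ppm:4d} ppm -> T = {equilibrium_temp(ppm):.1f} K")
```

Manipulating the input (here, the carbon dioxide level) and reading off the outcome is, in caricature, exactly what the modellers do; the whole question is whether the equations and their tuned coefficients faithfully represent nature.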
One of the early individuals to work with such models was James E. Hansen of NASA, who continues to be prominent in the climate change field to this day. He and his colleagues compared the models with past climates. Could they make their model duplicate past events? The objective, of course, was to predict Earth's future climate.
Whatever the details of the models employed, all predicted major increases in temperature in future years if carbon dioxide levels continued to increase. These models provided all the validation the climatologists needed to advocate massive reductions in the use of fossil fuels. Most specialists considered the actual weather data more or less irrelevant; it was the models that mattered to them. Why? As we all know, the climate varies considerably from year to year and from decade to decade. This is the "noise" that makes trends very hard to detect: the real data are far more confusing, and far harder to decipher, than the results coming out of the computer models.
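The "noise" problem is easy to demonstrate. The sketch below uses entirely synthetic numbers (a small imposed warming trend plus random year-to-year variability, not real temperature data) to show how unreliable a trend fitted to a short, noisy record can be:

```python
import random

random.seed(42)

TRUE_TREND = 0.02  # imposed warming, deg C per year (synthetic)
NOISE_SD = 0.25    # year-to-year variability, deg C (synthetic)

def fitted_trend(n_years):
    """Generate a synthetic temperature series and fit a straight line.

    Returns the least-squares slope, which can differ wildly from the
    true trend when the record is short and the noise is large.
    """
    xs = list(range(n_years))
    ys = [TRUE_TREND * x + random.gauss(0.0, NOISE_SD) for x in xs]
    mean_x = sum(xs) / n_years
    mean_y = sum(ys) / n_years
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

for n in (10, 30, 100):
    slopes = [fitted_trend(n) for _ in range(1000)]
    print(f"{n:3d}-year record: fitted trend between {min(slopes):+.3f} "
          f"and {max(slopes):+.3f} deg C/yr (true: {TRUE_TREND:+.3f})")
```

With only ten years of data the fitted trend can even come out negative; a century of data pins it down far better. This is why short stretches of real weather are so hard to interpret.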
But no matter: the climatologists are certain that global warming (climate change) is coming. The fact that they consider the details of actual weather more or less irrelevant, however, makes it very hard for "climate skeptics" to argue against the mainstream conclusions. Maybe it is time to look at the computer models themselves, upon which the whole field of climate change theory is based.
More Sophistication Means More Assumptions, and Less Precision
A recent article in the journal Nature is of interest here: discussing the new, upgraded climate models, it declares that "climate scientists face a serious public-image problem" (Nature June 14, 2012, p. 183).
The fact is that the new climate models, which incorporate significant improvements in our understanding of what controls climate, are unfortunately producing answers that are less clear. The range of conditions under which a given outcome might occur is becoming wider, not more narrowly defined.
One of the problems with any computer model is the uncertainty that is necessarily built into its equations. Using more complex equations actually adds more uncertainty to the system: every equation involves assumptions about the conditions under which the process applies, so with fancier equations there are more unknowns and more uncertainties than ever. The result is a wider range of outputs, including a wider range of temperatures expected in the future. Despite this growing imprecision, the article in Nature declares, "None of this means that climate models are useless" (p. 184).
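The compounding of uncertainty is easy to see in a toy Monte Carlo exercise. In the hypothetical sketch below, a model output is the product of several factors, each known only to within ±10%; the numbers are invented purely to show how the spread grows as uncertain factors are added:

```python
import random

random.seed(1)

def output_spread(n_factors, uncertainty=0.10, trials=20000):
    """Monte Carlo spread of a toy model output.

    The 'model' is simply a product of n_factors terms, each drawn
    uniformly from 1 +/- uncertainty. The setup is invented to show
    how spread compounds; it models nothing in the real climate.
    """
    outputs = []
    for _ in range(trials):
        value = 1.0
        for _ in range(n_factors):
            value *= random.uniform(1.0 - uncertainty, 1.0 + uncertainty)
        outputs.append(value)
    return min(outputs), max(outputs)

# Each extra equation brings another uncertain factor along with it.
for n in (1, 3, 6, 10):
    lo, hi = output_spread(n)
    print(f"{n:2d} uncertain factors -> output between {lo:.2f} and {hi:.2f}")
```

One factor leaves the answer within ±10%; ten such factors can leave it anywhere between roughly half and double the nominal value. More sophistication, in this sense, really can mean less precision.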
Declaring that none of this means the models are useless is rather like protesting that one is not a crook! Well, let us look at some examples of the kinds of answers these models are providing. Consider the predictions of one such model for water levels in the Mekong River basin. The sophisticated model apparently predicts monthly water volumes ranging from 16% below normal to 55% above normal. On an annual basis, it predicts river discharge anywhere from 5% below normal to almost 5% above normal. The authors state plaintively that "Advising policy makers becomes extremely difficult when models cannot predict even whether a river catchment system will have more or less water" (Nature June 14, 2012, p. 183). Perhaps with such uncertainty, scientists should abstain from offering any advice at all!
Droughts are on the Rise ... or are they?
One of the key areas where the models have been found inadequate is the prediction of extreme events, an issue of great concern to most people. Apparently the current Intergovernmental Panel on Climate Change (IPCC) models have encountered some serious problems in this regard. According to the recent article in Nature:
When the current IPCC models were tested against four major past climate changes, two were unable to even get the basic climate before the shift correct and the other two had to be fed parameters up to ten times greater than would be realistic to produce the abrupt shift. (pp. 183-184)
Nowhere is the problem with current models more apparent than in IPCC predictions and conclusions concerning drought. The Fourth Assessment Report of the IPCC concluded that "more intense and longer droughts have been observed over wider areas since the 1970s" (Nature November 15, 2012, p. 435). In other words, the report concluded that droughts are an increasing problem, resulting from increased temperatures and decreased precipitation.
More recently, however, some scientists have turned a critical eye on the computer model used to predict drought. The popular equation draws its conclusions from monthly precipitation and temperature data alone. The critics point out that the loss of moisture to the air, through evaporation from surfaces and through water loss from green plants, depends on far more factors than just temperature: wind speed, humidity, and the intensity of sunlight (among others) all contribute. Scientists writing in the November 15, 2012, issue of Nature compared the old equation with a more elaborate one that takes several of these other factors into account. While the global loss of productive land to drought was estimated at about 0.6% per year by the old equation, it was only 0.08% per year by the new one, roughly one seventh of the former figure. In fact, the overall conclusion from the newer model is that some areas are drier than formerly and others wetter, so that there is little evidence over the past 60 years for an increase in the total area affected by drought.
This study implies that there is no necessary linkage (correlation) between temperature changes and long-term drought variation (p. 339). The authors muse that it seems strange, to say the least, that many climatologists continue to employ the old, inadequate model (p. 437). Thus a popular computer model can evidently yield results that are known overestimates, and expert declarations about climate change may rest on inadequate models that the public is ill-equipped to question.
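The Nature commentary does not reproduce the equations, but the contrast it describes matches a familiar one in hydrology: temperature-only estimates of evaporative demand (in the style of Thornthwaite's classic formula) versus fuller formulations (in the style of Penman-Monteith) that also require radiation, humidity, and wind. The Python sketch below is offered only to show why the extra variables matter; the Thornthwaite coefficients and the FAO-56 Penman-Monteith form are reproduced from memory, and whether these are precisely the formulations the drought studies used is an assumption on our part.

```python
import math

def pet_thornthwaite(temp_c, heat_index):
    """Monthly potential evapotranspiration (mm), temperature-only style.

    heat_index is the annual sum of (T_i / 5) ** 1.514 over twelve
    monthly mean temperatures. Standard published coefficients, from
    memory; day-length corrections omitted for brevity.
    """
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return 16.0 * (10.0 * max(temp_c, 0.0) / heat_index) ** a

def pet_penman_monteith(temp_c, rn, wind_2m, rel_humidity, g=0.0):
    """Daily reference evapotranspiration (mm), FAO-56 style, from memory.

    Needs net radiation rn (MJ m^-2 day^-1), 2-m wind speed (m/s) and
    relative humidity (%) on top of temperature -- exactly the inputs
    the temperature-only formula ignores. Gamma assumes sea level.
    """
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))  # kPa
    ea = es * rel_humidity / 100.0
    delta = 4098.0 * es / (temp_c + 237.3) ** 2  # kPa per deg C
    gamma = 0.067
    return ((0.408 * delta * (rn - g)
             + gamma * (900.0 / (temp_c + 273.0)) * wind_2m * (es - ea))
            / (delta + gamma * (1.0 + 0.34 * wind_2m)))

# Same 25 C temperature, very different weather. The temperature-only
# formula cannot tell a calm, humid day from a windy, dry one; the
# fuller formula roughly doubles the evaporative demand.
print(pet_thornthwaite(25.0, heat_index=60.0))  # one number for both days
print(pet_penman_monteith(25.0, rn=15.0, wind_2m=1.0, rel_humidity=80.0))
print(pet_penman_monteith(25.0, rn=15.0, wind_2m=5.0, rel_humidity=30.0))
```

If a drought index is driven by evaporative demand, feeding it a temperature-only estimate will exaggerate drying wherever temperature rises while wind, humidity, and sunshine do not cooperate, which is precisely the critics' complaint.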
Conclusion
So how do the scientists themselves respond to the uncertainty inherent in the answers from computer models? The authors of the June 14, 2012, article on uncertainty in the models suggest that the public and policy-makers should ignore the uncertainty and simply act on the scientists' recommendations (p. 183). The authors declare, "Despite the uncertainty, the weight of scientific evidence is enough to tell us what we need to know." Whatever the model used, they say, all predict a rise in global temperature if carbon dioxide levels double from pre-industrial levels. And they say that scientists must package their pronouncements in a more appealing fashion.
One approach to tackling the public perception problem is to subtly rephrase conclusions, placing the uncertainty on the date by which things will happen, rather than onto whether they will happen at all ... This "when" not "if" approach is powerful. (p. 184)
What is really important is for the public and politicians to realize that their opinions are being manipulated by clever individuals who have an agenda to change many aspects of modern life. So do not follow the clarion call to drastic action advocated by the experts on climate change. Cautious reflection is better than charging off in the wrong direction! The uncertainties in the computer models upon which climate change theory is based should not instil confidence in the public at large, or in the politicians who represent their interests.