In a comment on the "puzzle analogy" post, Bill Chameides points out an alternative analogy that climate skeptics like to push:
Science skeptics often try to undercut established science (be it global warming or something else) by portraying the knowledge base as a house of cards. They hope that by identifying one weak link, they can bring the whole house down (i.e., create the illusion of uncertainty in the entire subject).
Examples of this strategy are easy to find. After the NRC hockey stick report concluded that we really don't know what the temperature was 1000 years ago, many skeptics argued that this repudiated all of climate science. This is, of course, nonsense. Important conclusions in science are subject to multiple tests and verifications, and scientists do not accept a conclusion until it has been verified multiple times. As Bill concluded:
In fact, most science is like a jigsaw - lots of interlocking pieces based on multiple, independent lines of inquiry. Even if you take away one piece, the picture is still apparent.
6 comments:
I think what you say is true for the knowledge base...but false for modeling. Proving something to be flawed about a given piece of research or theory doesn't necessarily mean that everything else we know is wrong or flawed. However, pointing out a flaw in a key assumption underlying either an input into a model or a calculation within the model that is intended to simulate a given process DOES call into question how accurately the model can simulate future changes involving that input or process. Not all flaws in any given model are necessarily fatal flaws, but until they are thoroughly investigated and their impacts are known, they degrade confidence in the model output.
D.B.-
My sense of the present consensus view is that there was indeed an MWP. There is, however, no way (with today's data) to determine whether the MWP was warmer or colder than the last few decades.
Regards
d.b., MBH 98 (the "Mann Hockey Stick") showed a low-amplitude MWP. As Andrew notes, the debate has been over how warm the MWP was relative to the present, but the outcome of that debate was never terribly important. Regarding the role of CO2, the important thing to bear in mind is that we have converted CO2 from a feedback (somewhat similar to the way water vapor behaves) into a forcing, and have increased its levels at a rate far greater than can occur naturally. So even though we can be confident that there have been relatively recent times when temperatures have been approximately the same as at present (if not the MWP then the Holocene thermal maximum, and if not that then the thermal maximum of the prior interglacial), the present GHG-induced warming is an entirely different kettle of fish. Unfortunately, the best historical analogy for what's happening now may be the PETM (Paleocene-Eocene Thermal Maximum), when temps spiked something like 8C in a thousand years. Our descendants would not thank us for inflicting a similar experience on them.
Given the type of questions you have, you may find it useful to spend some time at the Discovery of Global Warming site.
bill f, I think you're making some poor assumptions about both the models and their relationship to climate science as a whole. *None* of the models are perfect, and even if they were (within reason), there would still be substantial uncertainty about the future course of the climate, since so much depends on future human actions.
Steve,
I am not making ANY assumptions about the models and their relationship to climate science. I am simply saying that much of what is currently being done by the IPCC and other groups is trying to take what we think we know about climate processes and put it into models that can then be used to project the effects of future changes such as increases or decreases in GHG emissions. We are in agreement that under any circumstances, the "predictive" value of the models will be questionable. However, each time the state of the science changes with regard to what we do or don't know about a key process, our models should be examined to see how they use or treat the area where the change in understanding occurred. If the change in our knowledge involves a key value or process in the model, then it has the potential to call the model's validity into question, especially if the model calibrated well against previous data sets with the flawed value or process included.
(Strawman warning!) This is a very simplified example, and I don't intend to imply that climate modeling is this simple or straightforward. Suppose I made a model that predicted the number of sheep that would graze in one of 4 available pastures based on a series of factors such as the season, air temperature, recent precipitation, growth habits of the plants growing in each pasture, etc. It would then be appropriate for me to validate the model by inputting data from previous years and comparing the model output to the actual sheep counts collected by the rancher.

If I calibrated my model so that it predicted the sheep counts from previous years with relative accuracy, and then suddenly found out that the rancher's counts from the 4 pastures in previous years did not add up to the total number of sheep at the ranch during that time frame, it would obviously call my model into question. If my model accurately predicted what turned out to be a flawed count, then the predictive value of my model is potentially flawed. The only way to determine the implication of the flaw is to evaluate the source of the counting error and determine whether, given more accurate data, the model will still accurately predict the corrected counts. In other words, I have to rerun the model with the new knowledge to see if it still properly calibrates against the known dataset.
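For concreteness, here is a minimal sketch of that calibrate-then-revalidate loop in Python. The linear model form, the data, and every name in it are hypothetical stand-ins for whatever the real sheep model would use:

```python
import numpy as np

# Hypothetical historical data: one row per pasture-season with
# (air temperature, recent precipitation, plant growth index).
# All numbers are invented for illustration.
features = np.array([
    [18.0, 42.0, 0.7],
    [25.0, 10.0, 0.3],
    [12.0, 55.0, 0.9],
    [21.0, 30.0, 0.5],
    [15.0, 48.0, 0.8],
    [28.0,  5.0, 0.2],
])
observed_counts = np.array([120.0, 45.0, 160.0, 90.0, 140.0, 30.0])

def calibrate(X, y):
    """Least-squares fit of a linear sheep-count model (a stand-in
    for whatever functional form the real model would use)."""
    X1 = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def predict(coef, X):
    X1 = np.column_stack([np.ones(len(X)), X])
    return X1 @ coef

def max_error(coef, X, y):
    """Worst-case absolute miss across all historical records."""
    return np.max(np.abs(predict(coef, X) - y))

# Step 1: calibrate against the counts we originally trusted.
coef = calibrate(features, observed_counts)
print("error vs. original counts: ", max_error(coef, features, observed_counts))

# Step 2: the rancher's counts turn out to be flawed. Rerun the *same*
# calibrated model against the corrected counts to see whether it still
# validates, or whether it only ever matched the bad data.
corrected_counts = observed_counts + np.array([10.0, -5.0, 15.0, 0.0, 8.0, -3.0])
print("error vs. corrected counts:", max_error(coef, features, corrected_counts))

# Step 3: recalibrate against the corrected counts to see whether the
# model structure itself can still fit the new knowledge, or whether it
# needs extensive rework.
coef_new = calibrate(features, corrected_counts)
print("error after recalibration: ", max_error(coef_new, features, corrected_counts))
```

The point of the sketch is only the workflow: a calibration that validated against the flawed counts has to be rechecked against the corrected counts before its output can be trusted again.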
In terms of climate modeling, when we learn new things about the strength of feedbacks (such as the recent identification of stronger-than-expected methane feedbacks in arctic melt lakes, or the recent finding of lower-than-previously-assumed CO2 uptake by tropical oceans), it is imperative to know how those processes are treated within a given model, to know whether the model is flawed because it accurately reproduces past data while including the flawed feedback estimates. It may be that most of the models are not very sensitive to changes in those kinds of variables, but I have seen nothing from the modelers trying to explain how changes such as the recently reported two-year drop in ocean heat content fit within the range of predicted outcomes for their models. I am not saying the models are wrong or that their creators aren't doing these things, but if they are, they should be reporting their findings, because it would add a lot of confidence to see that their models can incorporate the changes in recent knowledge and still function well without extensive recalibration.
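As a toy illustration of that kind of sensitivity check, here is a sketch that perturbs a single feedback-strength parameter in a deliberately simplified linear-feedback model and watches how much the output moves. The functional form, the lambda0 value, and the parameter range are illustrative assumptions, not values taken from any actual climate model:

```python
import numpy as np

# A deliberately simplified linear-feedback model: equilibrium warming
# dT = F / (lambda0 - f), where F is the forcing, lambda0 is the
# no-feedback response parameter, and f is the net feedback strength.
# All numbers here are illustrative assumptions.

LAMBDA0 = 3.2  # W/m^2 per K, roughly the no-feedback (Planck) response

def toy_warming(forcing, feedback):
    """Equilibrium warming (K) under the linear-feedback assumption."""
    return forcing / (LAMBDA0 - feedback)

forcing = 3.7  # W/m^2, the commonly cited forcing for doubled CO2

# Sensitivity check: sweep the feedback parameter from an old best
# estimate up through a revised, stronger estimate and see how much
# the projected warming moves.
for f in np.linspace(0.5, 2.0, 4):
    print(f"feedback = {f:.2f} W/m^2/K  ->  warming = {toy_warming(forcing, f):.2f} K")
```

If the spread across that parameter range stays within the model's stated uncertainty, the flaw is probably not fatal; if it doesn't, the output needs to be requalified, which is exactly the kind of reporting being asked for here.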
Glad to see you are giving a seminar at the Rice Earth Sciences Dept. Rest assured, I will be in the audience, ready for your pontifications.