With all the cold weather of late, I have been thinking of how we should update our beliefs about changes in climate parameters as extreme weather events occur. This analysis applies, of course, to the warm weather events we had in early winter, and the earlier cold snaps back in September around here. My son is also studying conditional probability in high school right now, so this is good for him as well.
There are at least two “takeaways” from this little excursion. The first is that, as we should have suspected, extreme weather events should indeed lead to some updating of our beliefs about climate change, but not a lot. More important, and my second takeaway, is that this excursion is a great way to see some of the basic ideas of Bayesian statistics.
So I am considering how my beliefs about the probability that climate change is taking place should change when I observe a (local) extreme cold weather event. Intuitively, a cold event should make us wonder whether climate change – which should result in warmer temperatures – is actually occurring. But also intuitively, we should be really surprised if we learn too much from a single weather event.
Our starting point is Bayes’ Rule:
Pr(climate change | event) =
{Pr(event | climate change) * Pr(climate change)} / Pr(event)
In words: The “posterior” probability of climate change conditional upon observing a weather event equals: The “prior” probability of climate change, Pr(climate change), times the probability of the event happening conditional upon climate change occurring, all divided by the unconditional probability of the weather event occurring.
If you multiply both sides by Pr(event), you see that both sides of the above equation give the probability of climate change and the weather event both occurring, so the equality must hold.
We normally use the term “posterior probability” for the left-hand side of Bayes’ Rule. The Rule tells us how our posterior probability differs from our “prior” probability after we observe an event.
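(For readers who like code, here is the Rule as a bare-bones Python function; the function and argument names are just my own labels.)

    # A minimal sketch of Bayes' Rule as code (names are mine):
    def bayes_posterior(prior, p_event_given_cc, p_event):
        # Pr(climate change | event) =
        #   Pr(event | climate change) * Pr(climate change) / Pr(event)
        return p_event_given_cc * prior / p_event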
It is neat to see how Bayes’ Rule plays out in our climate context.
Let’s suppose that we have “diffuse” prior beliefs on climate change, that is, that Pr(climate change) from the right-hand side of the equation equals ½. The question is how that prior belief changes after we observe an extreme weather event.
To do the analysis, we need to know Pr(event | climate change). Again in words, this is the probability of the weather event conditional upon climate change having happened. This is a key calculation. I take “climate change having happened” to mean that the distribution of daily temperatures in my locality has shifted to the right. Specifically, I assume that climate change has increased the average daily temperature by one degree. Let me also define my extreme weather event as a new record low temperature for the day April 16 in Hanover, NH. What is the probability of this event, conditional upon climate change having occurred? Well, it turns out that the long-term average daily temperature for April 16 in Hanover is 33 degrees. And the standard deviation, I will assume, is about 7.5 degrees. The record low for this date is 15 degrees, set in 1940.
If we assume that daily temperatures are normally distributed, then the probability of breaking that 15-degree record given the historical mean of 33 degrees is .008198. With climate change, the average shifts up to 34 degrees and the probability of breaking the record goes down: to .005649.
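If you want to check these numbers, here is a small Python sketch that reproduces them; it assumes scipy is available, along with the normality, 33-degree mean, 7.5-degree standard deviation, and one-degree shift assumptions stated above.

    # Tail probabilities under the normality assumption, via scipy:
    from scipy.stats import norm

    record_low, sd = 15.0, 7.5
    p_no_cc = norm.cdf(record_low, loc=33.0, scale=sd)  # historical mean of 33
    p_cc = norm.cdf(record_low, loc=34.0, scale=sd)     # mean shifted up one degree
    print(p_no_cc)  # ~0.008198
    print(p_cc)     # ~0.005649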
We now have everything that we need to complete Bayes’ Rule. The denominator of the equation is a little tricky; it is the unconditional probability of the new low temperature record. To find it, we use this equation:
Pr(event) = Pr(event | climate change) * Pr(climate change)
+ Pr(event | no climate change) * Pr(no climate change)
This is just adding up the different ways that we could observe the extreme weather event: there are two ways, either through climate change or not through climate change.
We have all of these probabilities mentioned in the above discussion. If we calculate it out, we get Pr(event) = (.005649*.5) + (.008198*.5) = .0069235.
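In code, the same arithmetic, just plugging in the numbers from above:

    # Law of total probability with the diffuse (50/50) prior:
    p_cc, p_no_cc, prior = 0.005649, 0.008198, 0.5
    p_event = p_cc * prior + p_no_cc * (1 - prior)
    print(p_event)  # 0.0069235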
Now take Pr(event | climate change) from the numerator of Bayes’ Rule and divide it by the denominator; call this the likelihood ratio. In our case, this ratio is .005649/.0069235 = .8159168.
Our posterior belief about climate change after observing the extreme cold event is about 82% of our prior belief before we observed the extreme event.
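The last step of the arithmetic, in the same style:

    # Posterior probability and the posterior-to-prior ratio:
    p_cc, p_event, prior = 0.005649, 0.0069235, 0.5
    ratio = p_cc / p_event      # ~0.8159
    posterior = prior * ratio   # ~0.4080
    print(ratio, posterior)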
This is more than I would have expected going into this exercise. What determines how much our prior beliefs are affected?
First, note that the stronger our prior beliefs, the less we would change our priors after observing the cold event: The denominator of Baye’s Rule is a weighted average of the probability of the cold event under two different scenarios, climate change or no climate change. The weight on climate change increases as our prior belief on climate change increases, and this makes the weighted average move closer to the probability of the extreme event conditional on climate change.
As the denominator moves closer to the probability of the extreme event conditional on climate change, the likelihood ratio moves closer to one, and our prior belief is affected very little by the extreme cold event. Intuition: With strong prior beliefs on climate change, a cold event does little to affect them. With weak prior beliefs on climate change, a cold event makes us even more skeptical. Wonder why people are affected differently by events? Bayes would not be surprised!
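A quick way to see this numerically is to recompute the ratio for a range of priors; the particular prior values below are just illustrative choices of mine.

    # The stronger the prior on climate change, the closer the
    # posterior-to-prior ratio gets to one (less updating):
    p_cc, p_no_cc = 0.005649, 0.008198
    for prior in (0.1, 0.3, 0.5, 0.7, 0.9):
        p_event = p_cc * prior + p_no_cc * (1 - prior)
        print(prior, round(p_cc / p_event, 4))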
The other thing that affects how much our prior beliefs change is Pr(event | climate change) relative to Pr(event | no climate change). This gets to the heart of how informative the cold event is. Think about it: If climate change really doesn’t do anything to the probability of cold events happening, then these two probabilities would be equal, and the likelihood ratio in Bayes’ Rule would equal one. (Why might climate change not affect the probability of extreme cold events? Well, if climate change did not affect the distribution of daily temperatures very much, that would work.) And in that case, our posterior probability equals our prior probability. Makes sense, no?
On the other hand, what would make the extreme cold event very informative? This would happen if Pr(event | climate change) were small relative to Pr(event | no climate change). This would be the case if climate change made extreme cold events very unlikely, so that the ratio of these two terms would be very small. Then the likelihood ratio would be small, and our posterior probability of climate change would be small relative to our prior probability.
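One way to see this is to vary the size of the assumed warming shift; only the one-degree shift is from my assumptions above, and the other shift values are illustrative.

    # Bigger assumed shifts shrink Pr(event | climate change) and make
    # the record low more informative:
    from scipy.stats import norm

    record_low, mean, sd = 15.0, 33.0, 7.5
    p_no_cc = norm.cdf(record_low, loc=mean, scale=sd)
    for shift in (0.0, 1.0, 2.0, 5.0):
        p_cc = norm.cdf(record_low, loc=mean + shift, scale=sd)
        print(shift, round(p_cc / p_no_cc, 3))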
Well, in conclusion, this got a little more complicated than I thought, but I learned a bit from it.
One lingering concern I have is over a “data mining” sort of issue. Let me put it this way: Seeing a record cold event in SOME locality in the country is much less informative than seeing a record cold event in one particular locality such as Hanover. Somewhere in the country is going to see a record cold event pretty often; one locality will see one only rarely. The above analysis does not apply to the event of just seeing a record cold event SOMEWHERE (well, it does apply, but the probabilities need serious adjusting -- especially the probabilities of observing extreme weather events).
All of this was stimulated because I wanted to spend the night at my camp. When I got there, this late Nor’Easter had blown out the electricity and the downdraft in my woodstove was so extreme I couldn’t get a fire started. Real pain. I was tempted to stay and cook my dinner on the Coleman stove, but came back home instead. Nasty weather.
1 comment:
1. Probability deals with equal likelihood of events; your example makes the raw assumption of randomness. This pseudo-scientific evaluation is, in reality, meta-physical, in that no causal events are known or considered. Two more places where this analysis does NOT work: What is the probability of life on another planet? And what is the probability that I will have a heart attack?
2. You can complain about the cost-value of attempting to mitigate the human causes of climate change, and therefore state that: There isn't enough PROOF to say that if we spent $X there would be a discernible ROI in Y years. Again, methodical, but incorrect. Assume the case of an outhouse built near an aquifer. Is there a question that at some point in time, continued dumping of waste will contaminate the water supply and cause problems? If you say no, then I abandon all hope for you. If the inevitable is obvious, then it should be equally obvious that pollution, in any form, is destructive, and whether the catastrophe occurs in 6 months, or 6 years, or 60 years should not be the question. As you should be aware, delayed response to problem resolution is more difficult & expensive than prompt response.