So I decided that today I’d write about something that’s been a little hard to wrap my head around: confidence interval interpretation.
Having a little bit of statistics background from high school and undergrad (taught better in high school than in undergrad believe it or not), I jumped right into the homework about confidence intervals – brazenly writing down what used to be rote in my mind: “This means there is a 95% chance that the true mean falls within the given interval.” It was only after going back over the notes for some other reason that I saw it phrased a little differently and decided to do some digging. Lo and behold, that exact sentence came up as a common misconception about confidence intervals.
This led to a bit of a spiral as I began to question my previous stats background, everything I thought I knew, even my very existence! (Ok, maybe just the stats background)
It turns out that a particular confidence interval can’t have a 95% probability of containing the true mean; the true mean is a fixed value, so a given range either does or doesn’t contain it, no maybe. The 95% actually describes the procedure: if you repeated the sampling over and over and built an interval each time, about 95% of those intervals would contain the true mean. Another way to think about it is if I flip a coin without showing you and ask you the probability of it being heads or tails. You might at first say it’s 50/50, but really it’s 100/0 because I already did the flipping.
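To convince myself, I threw together a quick simulation (my own sketch, not something from the class, and the specific numbers are just ones I picked): sample repeatedly from a normal distribution with a known true mean, build a 95% interval each time, and count how often the interval happens to catch that fixed mean.

```python
import math
import random

random.seed(42)  # for reproducibility

TRUE_MEAN = 10.0
TRUE_SD = 2.0
N = 30            # sample size per experiment
TRIALS = 10_000   # number of repeated experiments
Z = 1.96          # z-value for a 95% interval (assuming known sigma)

hits = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    sample_mean = sum(sample) / N
    margin = Z * TRUE_SD / math.sqrt(N)
    low, high = sample_mean - margin, sample_mean + margin
    # Each individual interval either contains TRUE_MEAN or it doesn't;
    # the "95%" describes how often the *procedure* succeeds.
    if low <= TRUE_MEAN <= high:
        hits += 1

coverage = hits / TRIALS
print(f"coverage: {coverage:.3f}")
```

Running this, the coverage lands near 0.95, but any single interval printed along the way either contains 10.0 or it doesn’t, which is exactly the coin-already-flipped situation.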
This seems like a pedantic point (and at the moment I’m still not convinced it isn’t), but it serves as a really good example of the difference between the statistical use of probability and the everyday sense of the word. Like a lot of science, perspective can make a huge difference, and the annoying technicality may end up being a really important marker that you need to think about things differently; once you do, a lot of other things may fall into place.