Statistics textbooks go out of their way to say that 95% Confidence Intervals (CIs) do not mean that you can be 95% sure that the population parameter of interest is somewhere between the high and low end of the interval. Rather, if your sample was drawn an infinite number of times, 95% of the intervals would contain the population parameter (while 5% would not).
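The repeated-sampling claim above is easy to check empirically. Below is a minimal sketch (not from the original post; the population parameters and sample size are made up for illustration): draw many samples from a known normal population, build a 95% CI from each, and count how often the interval covers the true mean.

```python
import random
import statistics

# Hypothetical population and settings (illustrative assumptions).
TRUE_MEAN, TRUE_SD = 50.0, 10.0
N, TRIALS = 30, 10_000
Z = 1.96  # normal critical value for 95%; a t critical value would be slightly wider

random.seed(1)
covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    xbar = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    lo, hi = xbar - Z * se, xbar + Z * se
    if lo <= TRUE_MEAN <= hi:
        covered += 1

print(f"Empirical coverage: {covered / TRIALS:.3f}")
```

Run this and the printed coverage comes out close to 0.95 (a touch below, since z rather than t is used with an estimated standard deviation) — which is exactly what the textbook definition promises about the procedure, whatever one concludes about any single interval.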
I fail to see the distinction. If I draw one of the infinite number of samples for which 95% CIs are calculated, aren’t I 95% certain that I’ve drawn one of the ones whose CI contains the population parameter? Thus, I’m 95% certain that my CI contains the population parameter.
If someone can explain why my thinking is incorrect, I’d really appreciate it. Thank you.
Just to cause more confusion, I went to my old Statistical Methods textbook by Snedecor and Cochran (8th edition), and found the following section on Confidence Intervals:
Notice that they provide a mathematical proof for the inequality relating a population parameter value to a sample confidence interval. In addition, in their example in the middle of page 56, they explicitly state that the population parameter lies within the given 95% confidence interval, apart from a 1-in-20 chance.
Snedecor and Cochran's book educated several generations of statisticians, at least here in the US, and the mathematical proof seems pretty convincing. So now what? Do we believe what the current textbooks are saying (which does not help us make a statement about the population parameter)? Or do we go with Snedecor and Cochran and state that we are 95% certain that the population parameter is within our 95% CI?
Anyone who wishes to comment, please do...I'm at a loss.