Need help interpreting this statement regarding probability
I'm not an expert on statistics, so please do correct me if I'm not saying this precisely, but here's what I think the issue is:
It's more philosophical than it is mathematical. So consider the (wrong) interpretation: "that there is a 95% chance that between 41% and 49% of Americans intend to vote Republican." Now ask yourself: is it really random chance? It's not as though there is a 5% chance that the citizens are not voting between 41% and 49% Republican – they have already made up their minds! Nor is it the case that if you repeated the election 20 times, the result would fall outside that range at least once – if you could repeat the election, you would get the same result every single time. The population proportion is fixed.
Rather, the 95% refers to whether or not our interval has truly "captured" the correct value, which is precisely what the second (correct) statement is saying.
So I like to think of it in terms of which quantity is fixed and which is probabilistic. The actual mean (the proportion voting Republican, in this example) is fixed; there's no probability to it. It either is between 41% and 49% or it isn't. What isn't fixed is our confidence interval. Depending on our sample of the population, our interval may or may not capture the true population mean – that is the probabilistic part.
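A quick simulation makes the point concrete. This is just a minimal sketch: the 45% true proportion, the poll size of 600, and the number of repetitions are all invented for illustration.

```python
import random

random.seed(1)

TRUE_P = 0.45       # the fixed (in practice unknown) population proportion
SAMPLE_SIZE = 600   # hypothetical poll size
REPS = 10_000
Z = 1.96            # ~97.5th percentile of the standard normal

captured = 0
for _ in range(REPS):
    # The randomness lives in the sampling, never in TRUE_P.
    votes = sum(random.random() < TRUE_P for _ in range(SAMPLE_SIZE))
    p_hat = votes / SAMPLE_SIZE
    half_width = Z * (p_hat * (1 - p_hat) / SAMPLE_SIZE) ** 0.5
    if p_hat - half_width <= TRUE_P <= p_hat + half_width:
        captured += 1

print(f"share of intervals capturing the fixed truth: {captured / REPS:.3f}")
# ~0.95: the interval endpoints jump around from sample to sample;
# TRUE_P never moves.
```

Notice that it is the endpoints of the interval that change from run to run; the 95% describes the procedure, not any single interval.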
Edit: I'd also suggest checking this out. Although not entirely related, it explains how what I presented above (and presumably your textbook) looks at this from a "frequentist" perspective rather than a "Bayesian" one.
The only genuinely philosophical aspect of this question is whether we are talking about parameters in the Bayesian sense or the frequentist sense. The quoted text implies that the discussion falls within the latter context.
So, in the frequentist view of statistical inference, one must be clear about what is random and what is fixed; what is observed or observable, and what is unknown or unknowable.
For instance, suppose we are interested in the mean age (measured as the number of whole years lived) of the human population of Earth as of January 1, 2018. This is represented by a single number whose value could theoretically be calculated, but doing so is so impractical as to be effectively impossible. Yet we know it is definite and fixed. Such a quantity is a parameter. It is not subject to randomness. It is a fixed but unknown quantity that is a property of the distribution of ages of all people on Earth at that given moment.
Now, if we take a simple random sample of people and calculate the mean age of the sample, we intuitively understand that this may give some idea of the value of this parameter, but each time we take such a random sample, the sample mean is merely an estimate whose value is not fixed, but may change from one sample to another. The process of selecting people for each sample is where the randomness comes from. The outcome of each sample is a realization of this underlying random process, and it is called a statistic.
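To see this concretely, here is a small sketch (the population of ages is fabricated purely for illustration; any population would do):

```python
import random

random.seed(0)

# A made-up "population" of ages, standing in for all people on Earth.
population = [random.randint(0, 100) for _ in range(100_000)]
true_mean = sum(population) / len(population)  # the fixed parameter

# Draw several random samples; the randomness enters through the sampling.
for i in range(5):
    sample = random.sample(population, 1_000)
    print(f"sample {i}: mean age = {sum(sample) / len(sample):.2f}")

print(f"fixed population mean = {true_mean:.2f}")
# Each sample mean comes out slightly different; the parameter never changes.
```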
So, to recap: the parameter is unknowable but fixed; the statistic (and the random variable(s) from which it is calculated) is observable but random, its value changing each time it is realized.
Instead of merely reporting the sample mean each time you take a sample and using it as your estimate of the true population mean, you can incorporate some measure of uncertainty about your estimate, which generally depends on the size of your sample. Intuition suggests that the larger your sample, the more information you have about the population, and the more precise the resulting estimate can be. A confidence interval captures this idea by giving an estimate that is an interval rather than a single number. But because a confidence interval is also an estimate derived from the sample, it too is random.
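For a population proportion, for instance, one standard construction (the normal-approximation, or Wald, interval; your textbook may use a different variant) is

$$\hat{p} \;\pm\; z_{1-\alpha/2}\,\sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}},$$

where $\hat{p}$ is the sample proportion, $n$ the sample size, and $z_{1-\alpha/2}$ a standard normal quantile. The $n$ in the denominator is exactly the intuition above: quadrupling the sample roughly halves the width.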
To emphasize: the parameter is fixed but unknown. Each time a sample is taken, the calculated confidence interval varies. By random chance, some confidence intervals will not contain the value of the parameter (and you don't and can't know when this happens for a given sample). However, by adjusting the width of the confidence interval, you can achieve a certain coverage probability: the probability that a random sample yields a confidence interval containing the value of the parameter. The larger you want this coverage probability, or confidence level, the wider the resulting interval.
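Concretely, for the normal-based interval above, this trade-off runs through the quantile $z_{1-\alpha/2}$: roughly $1.96$ for 95% confidence versus roughly $2.576$ for 99%, so insisting on 99% coverage makes the interval about 31% wider.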
Back to the original question: it makes no sense to state that there is a "95% chance that between 41% and 49% of Americans intend to vote Republican," because this amounts to saying that the parameter (the true proportion of eligible Americans intending to vote Republican) is random from sample to sample, and that the confidence limits of 41% and 49% are fixed. In fact it is the other way around: the 41% and 49% are random, having been calculated from the sample, and the parameter, reflecting the true state of reality, is fixed.
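(In the example, the reported interval is just the point estimate plus or minus its margin of error, $0.45 \pm 0.04 = (0.41,\ 0.49)$; both the $0.45$ and the $0.04$ were computed from the sample, and both would have come out differently under a different random sample.)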
This is a very confusing concept, because once a statistic is realized, it is easy to forget that it was just one possible outcome. We are tempted to think, "well, we know what we saw, so it is fixed." But no! The observation is what it is, of course, but it is just one possible outcome of many. We would not find it reasonable to suppose that a fair six-sided die, rolled once, will continue to give the same outcome henceforth.
To be absolutely clear about your example, the incorrect interpretation is to say that there is some "chance" that the parameter is going to fall between two fixed values. The correct interpretation is to say that there is some chance that a random sample will result in an interval estimate that contains the true value of the parameter.
A Bayesian thinks all of the above is misguided, and instead regards the data as fixed, and the parameter as random. And this leads to some very interesting ways of conducting statistical inference, but this discussion is not within the scope of your question.
Consider that you go to every single American. Let's call the population size $N$. From all $N$ people you would get the exact proportion who vote Republican, say $p$. That is, $p$ is the true value. But as has been pointed out, collecting everyone's opinion is infeasible, if not outright impossible. Statistics gives us a way of estimating $p$.
Let's instead choose $n$ people to sample, out of the $N$. The selection of these $n$ people should be random. It is then assumed that our sample of $n$ people represents the population of $N$, from a statistical standpoint.
This is where I feel the interpretation can be confusing. One may say we are 95% confident that our sample is within 4 points of the actual $p$. That does not tell us $p$ will be within a certain range 95% of the time, but rather how well we expect our sample to have done, 95% of the time.
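As a rough sanity check on where such a "4 points at 95%" claim could come from (assuming the usual normal-based interval at its widest point, $\hat{p} = 0.5$; the actual poll size is not given):

$$0.04 \;\approx\; 1.96\sqrt{\frac{0.5 \times 0.5}{n}} \quad\Longrightarrow\quad n \;\approx\; \left(\frac{1.96 \times 0.5}{0.04}\right)^2 \;\approx\; 600.$$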
It is an almost misleading statement, and very dangerous if interpreted incorrectly.