Comments on BC Iconoclast: Polling in Canada

Bernard (2010-04-16 06:21):

On the wrong population, it is more than that. They all report people as decided voters who have not voted in the past and will not vote in the next election. These people are unlikely to give consistent answers as a group, since they have no personal attachment to the voting process.

Including non-voters in a sample of people's voting intentions will automatically skew any result. It is like asking non-smokers their favourite cigarette brand.

Election Watcher (2010-04-15 22:04):

Just want to respond to a few points:
- "I w...Just want to respond to a few points:<br /><br />- "I was meaning to go to the 90% or 75% confidence level"<br /><br />Sure - just multiply all the margins I gave by 0.84 or 0.59 respectively.<br /><br />I still don't think that you'd get results that are much more erratic than statistical error suggests after correcting for fixed house effects, at least for the 3 major parties (Green and Other numbers do seem more erratic). But I guess we'll have to agree to disagree, since neither of us is inclined to crunch the numbers :)<br /><br />Also, your perception of polls being all over the place is essentially unrelated to the undervoting issue: all pollsters may be polling the wrong population (overall instead of voting), but they are polling the same wrong population, so they should get consistent results!<br /><br />- "The margin of error means for all of the parties, one party out means this poll is not within the one in 20."<br /><br />No. If you look at how the margin is calculated, it is just for one party.<br /><br />- "All we can assume is that there is a much larger margin of error in the results than just the statistical one."<br /><br />One really needs to distinguish between bias and precision. The loss in precision from 1/3 of people lying randomly about whether they'll vote is EXTREMELY small.<br /><br />The resulting bias can indeed be big. 
But "margin of error" is not a concept that's meant to account for bias, because you can't statistically tell how big various biases are.Election Watcherhttps://www.blogger.com/profile/10276655533153494264noreply@blogger.comtag:blogger.com,1999:blog-988599552908585128.post-8683063745193235772010-04-15T13:58:46.334-07:002010-04-15T13:58:46.334-07:00All we can assume is that there is a much larger m...All we can assume is that there is a much larger margin of error in the results than just the statistical one.<br /><br />We do know that the answer from the people indicating they are a decided voter and they will not be voting is false because they will not choosing the party they indicated at the polls.<br /><br />I think you are asking if the non voters answering are of the same distribution as the actual voters. We have no data to indicate that this would the case. In fact we have nothing to indicate anything about them other than they are lying when they say are decided voters.<br /><br />Once again, think about this, we are reasonably certain that 1/3 of the respondents to a political poll in Canada is lying. One in three lies, that has to be some sort of impact on the resultsBernardhttps://www.blogger.com/profile/15951619465188564252noreply@blogger.comtag:blogger.com,1999:blog-988599552908585128.post-72034954797260448322010-04-15T13:41:32.933-07:002010-04-15T13:41:32.933-07:00I think the 'likely voters' issue is the b...I think the 'likely voters' issue is the big one. If I recall correctly in the US, the results were often quoted as being 'amongst likely voters'. These were determined not by questions about previous voting, but by demographics - folks in age groups, social standing, income level and so on. Judgment calls about likely voters probably make up some of the spread in results, assuming random sampling is done correctly. 
The other source of error, I think, is that sampling 1000 voters (say), with very small numbers in some communities which have strong demographic biases to one party or another, may skew the results and not give a truly random sample.

I'd also argue that you can't really tell whether the fact that more people have voting intentions than will actually vote tells you anything about whether the rest of their answers are false. It only says that you can't tell - without additional information - which of the respondents will vote. I can't see how you can assume anything about the errors on this.

Bernard (2010-04-15 11:32):

I was meaning to go to the 90% or 75% confidence level to show that the companies should be closer together in their results with the more precise result of the lower confidence level.

At the 90% confidence level only one poll in 10 should be outside of the margin of statistical error, an error that is smaller at 90% than the normal 95% that is reported.

The margins between the companies are too large to be accounted for by statistical error alone. I have not done the math to be certain, and I am unlikely to, but just looking at the polling numbers, there are ongoing inconsistencies. The margin of error covers all of the parties, so one party outside it means the poll is not within the one in 20.

Without some sort of screen for people not voting and a way to have a moderate degree of confidence in the answers, there is a big problem in how the polls are run and the results that come out of them.

Even when the polls are 'accurate' with each other, we know that at least one in three people is not responding how they will actually act in an election.
If it was a small amount, no big deal, but this is a large number.

Election Watcher (2010-04-15 11:17):

True, the error changes according to where you are - but not proportionally to your vote share. (For example, at 30%, the margin for a 1,000-person poll is about 2.9%.) Just thought it was confusing (and a bit misleading) that you used percentages rather than percentage points, which people are most used to.

The problem with trying to account quantitatively for sampling bias is that, almost by definition, there's no good way of doing so without very strong assumptions. The problem with your methodology is that it doesn't have a quantitative interpretation: are you trying to get the 75% confidence interval or 99%? As a result, comparing your numbers to the margins given by polling firms is like comparing apples and meat pies.

I like your suggestion of new questions for applying a likely voter screen. Some US pollsters use it already. However, almost invariably, a greater proportion of people "recall" voting than actually voted...

Btw, two 1,000-person polls are within the margin of error if their numbers are within about 4% of each other (multiply the margin for a poll by sqrt(2)). Thus, a poll showing 34% for a party is not statistically significantly different from a 38% poll.

In terms of the margin between two parties, the relevant figure is about 6% (the margin of a single poll multiplied twice by sqrt(2) - this is actually an underestimate due to the negative correlation of errors in the figures for different parties).
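These figures all follow from the standard binomial margin-of-error formula. A minimal sketch, assuming the usual 95% normal approximation (the function name "moe" is mine, not anything the pollsters publish):

```python
import math

def moe(p, n, z=1.96):
    # Two-sided 95% margin of error for a share p estimated from n respondents
    return z * math.sqrt(p * (1 - p) / n)

single = moe(0.30, 1000)            # one poll at 30%: ~0.029
two_polls = math.sqrt(2) * single   # same party compared across two polls: ~0.040
party_gap = 2 * single              # lead of one party over another in one poll: ~0.057
```

So a 34% and a 38% reading from two 1,000-person polls sit right at the edge of the two-poll margin, matching the "about 4%" figure quoted here.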
So the difference between a 33-30 poll and a 36-27 poll is again not statistically significant.

I don't think we get results that are farther apart much more than once out of 20 (on polls that are in the field at the same time). A little more, maybe: after all, methodologies do vary a bit. But I disagree with your assertion that national poll results are "way out" (except for Greens and Others).

Bernard (2010-04-15 10:26):

Actually, your statistical error bars change based on the number you are at. The +/- number they state is for a result at the 50% level; if a party comes in at 30%, the error bars change.

My point is that the results do not reflect the reality of who is voting - they are not representative samples. Until pollsters fix this major error, one has to calculate how wrong they are. The methodology I have shown here is the best I can come up with to reflect the error of the pollsters.

I have yet to see any Canadian pollster indicate they are weighting likely voters in any way or form. Ultimately, asking that question is not a good one; asking when and how often people have voted is a much better one. No one asks that question.

Given that the results of the different polling firms are not as narrowly clustered as they should be based on the statistical measure, there is something wrong. You can calculate the 90% confidence level for each poll, or the 75% confidence level. The margin of statistical error becomes very small once one goes to the 90% level.

The polls nationally should be very, very close together. The reality is that they are not, and they are way out based on the pure statistical model.
There is other error going on, and that is the error of representativeness in the polls.

Where does the error come from? People lying to the pollsters.

Election Watcher (2010-04-15 10:11):

A few comments:
- When pollsters cite a range of +/- 3%, they mean 3 percentage points. That is, 30% means 27-33 (not 29.1-30.9).

- Polls are inherently a snapshot of the electorate. Thus, even if there were no undervoting (or sampling or dishonesty) issues, using polls to predict election results means that one should take a larger margin of error into account.

- Your methodology for calculating the error due to undervoting is unfortunately invalid. If actual voters are randomly drawn from the population that pollsters sample from, the answer should be much, much lower. If not, then it is more of an issue with the representativeness of the sample than pure statistical error due to undervoting (which is what the 3% is about).

- I wouldn't be surprised if some Canadian pollsters actually do weight people according to their voting likelihood. Pollsters in the US do so in "likely voter" polls, and some UK firms do it as well. We don't know if Canadian pollsters do, since they reveal less about their methodology.
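The 0.84 and 0.59 multipliers mentioned earlier in the thread are just ratios of two-sided normal critical values. A quick sketch using only the Python standard library (the helper name "z" is mine; statistics.NormalDist needs Python 3.8+):

```python
from statistics import NormalDist

def z(conf):
    # Two-sided critical value of the standard normal for confidence level conf
    return NormalDist().inv_cdf(0.5 + conf / 2)

shrink_90 = z(0.90) / z(0.95)  # ~0.84: converts a 95% margin to a 90% one
shrink_75 = z(0.75) / z(0.95)  # ~0.59: converts it to a 75% one
```

At 90% confidence, the commonly reported +/- 3.1 point margin for a 1,000-person poll shrinks to roughly +/- 2.6 points.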