Friday, June 17, 2011

Frank Graves of Ekos analyses what happened

Frank Graves of Ekos came close enough in the 2008 election not to have worried about the gap between the Conservatives' result and where he had them.   In this election that error, along with the errors for the other parties, was large enough that he has taken a look at what is going on.

I think all the pollsters should take a hard look at what is going on and try to understand why none of them came particularly close to the election result.

He has looked at what went wrong with his forecast of the election.   He highlights three possible causes of the error:

  • Sampling error on the part of Ekos
  • There was a lot of movement of voters on election day
  • The people that vote and the people that do not vote are not statistically the same.

The last one is the one I have wondered about for some time, though I was placated by pollsters saying that the people that vote and the people that do not vote are the same.   I have been convinced for some time that there is a serious systematic error going on.   I raised this several times with Frank Graves.

Frank goes through and shows that his sampling methodology is sound.
He also shows that there is no evidence in the data from before or after the election to indicate a large shift among voters on election day.

In the end what he finds is that the people that vote are not the same as the people that do not vote.   Support for the Conservatives is higher among voters than among non-voters.    He manages to figure this out using data that only Ekos was tracking - how likely people said they were to actually vote.   When he applies those numbers, his results come close to the national result.
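The adjustment Frank describes can be sketched in a few lines. This is not Ekos's actual method, just a minimal illustration with made-up respondents: each person reports a preferred party and a self-assessed likelihood of voting, and weighting responses by that likelihood shifts the estimate toward the parties whose supporters are more likely to turn out.

```python
# Hypothetical sketch of a likely-voter adjustment, with made-up numbers.
# Each respondent is (preferred party, self-assessed likelihood of voting).
from collections import defaultdict

respondents = [
    ("Conservative", 0.9),
    ("Conservative", 0.8),
    ("Liberal", 0.5),
    ("NDP", 0.7),
    ("Liberal", 0.4),
    ("NDP", 0.9),
]

def party_shares(data, weighted):
    """Return each party's share, optionally weighting by vote likelihood."""
    totals = defaultdict(float)
    for party, likelihood in data:
        totals[party] += likelihood if weighted else 1.0
    grand = sum(totals.values())
    return {party: total / grand for party, total in totals.items()}

raw = party_shares(respondents, weighted=False)       # all respondents
adjusted = party_shares(respondents, weighted=True)   # likely voters
print(raw)
print(adjusted)
```

With these invented numbers the Conservative share rises once the likelihood weights are applied, which is the direction of the effect Frank found: their supporters were more likely to actually show up.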

I used his numbers to make my final estimate of party support on May 1st, and I was closer than any pollster in the nation.   I still pulled back from going as high on the Conservatives as my data showed, or as low on the Liberals; it was simply too far off from what everyone else was saying.

In the last ten days of the election Frank and I were messaging back and forth on Twitter about polling methodology issues; I raised concerns about the small sample sizes in BC and about who would actually be voting versus the polling numbers.
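The concern about small BC subsamples comes down to margin of error. A quick sketch using the standard formula for a simple random sample (the subsample sizes here are illustrative, not Ekos's actual numbers):

```python
# Margin of error at 95% confidence for a simple random sample:
# 1.96 * sqrt(p * (1 - p) / n), worst case at p = 0.5.
# The sample sizes below are illustrative only.
import math

def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for label, n in [("national sample (n=2000)", 2000),
                 ("BC subsample (n=250)", 250)]:
    moe = margin_of_error(0.5, n)
    print(f"{label}: +/- {moe * 100:.1f} points")
```

A subsample an eighth the size of the national sample carries roughly 2.8 times the margin of error, which is why provincial breakouts can swing wildly from poll to poll even when the national numbers look stable.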

In closing, I highly recommend all of you polling number crunchers read the article.

1 comment:

motorcycleguy said...

I noticed your reference to poll size. Below is a copy of a comment I made a long time ago on an article by Mario Canseco in the Tyee relating to the NDP leadership campaign. The subject of that poll is irrelevant, but the issue of poll size as it relates to the issue being studied is relevant. So is the general perception by a "normal" TV "news" (I use that term loosely) audience that polling on any issue is done in a consistent manner to "industry standards". Though I understand it is not always the case, it would be a surprise to most viewers that those polled were on some sort of "panel"....and that the standards are not required to be followed.

I am curious....but I trust the author will be able to answer my query. ESOMAR/WAPOR (the International Code of Practice for the Publication of Public Opinion Poll Results) states that for a pre-election poll where the gap between the leading parties is expected to be small, larger sample sizes of 1500 to 2000 should be used. The sample size for this poll was 807. The sample consists of pre-registered Angus Reid Forum panelists, which is completely acceptable if the weighting variables used to correct any anomalies in the data are made public. Also mentioned is the importance of knowing who paid for the poll.

Please explain the significance of your well-publicized graph relating to who one would vote for in their constituency being considerably different from the table relating to who one would vote for in BC generally, i.e. with the former there are no undecided votes and with the latter there are 17% undecided. This seems contradictory. The percentage of undecided voters significantly affects the predicted results of a poll...if not, please advise.