Tuesday, October 19, 2010

Gordon Campbell at 9% approval rating

I thought I would weigh in on this because many people are making a big deal about this Angus Reid poll, and in my opinion the poll shows a lot more problems with the whole process of polling.

My first issue is that they give no indication of how many of the 804 respondents are decided voters.   The number should be about 410 to 445, and this dramatically impacts the statistical margin of error because the effective sample size is much smaller.   I have a sneaking suspicion that is not what they came up with, as so many of the polling companies are wildly off on voter turnout; in fact, they are so far off on turnout that the results are a statistical freak.   The one statistic that pollsters should hit with the most accuracy in every poll is the number of likely voters.

It is a simple question, "Will you vote in the next election?", and it should have a straightforward answer.   If pollsters cannot get this right, everything else is very, very suspect.   I have yet to hear anyone explain why, if this answer is not right, everything else should be accepted as accurate.   Every reason anyone has suggested only tells me that there are very serious flaws in the assumptions pollsters make.

Within the poll they ask people that voted NDP or Liberal how they would vote now, and they only included decided voters.   This number cannot be larger than about 380; if it is larger, there is a fundamental flaw in the methodology the pollster is using.   The pollster cannot have more than 45% of the respondents say they voted for either the NDP or the Liberals in 2009.  Then one has to take out all the people that do not know who they will vote for in the next election.  This leaves us with a question that should not have more than 300 respondents, which places these results in a margin of error of +/- 5.8 percentage points.   That makes much of the grid statistically irrelevant.
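The arithmetic above is the standard margin-of-error formula for a simple random sample at the 95% confidence level, using the worst-case proportion of 50%. A minimal sketch (the sample sizes are the ones discussed above; the formula gives roughly +/- 5.7 points at 300 respondents, essentially the figure cited):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a simple random sample of size n.

    z = 1.96 corresponds to the 95% confidence level, and
    p = 0.5 is the worst-case (largest) proportion.
    """
    return z * math.sqrt(p * (1 - p) / n)

# How the margin grows as the usable sample shrinks:
for n in (804, 420, 300):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f} points")
```

Note how the often-quoted margin for the full 804-person sample (about +/- 3.5 points) nearly doubles once only the decided voters answering a given question are counted.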

Next we come to the question of approval or disapproval and whether people's opinions of the leaders improved or worsened.   However you cut it, the 9% approval rating for Gordon Campbell is not good, but what does it mean?  I have always hated these sorts of leader approval questions because they do not measure what is meant by the answer; they do not measure public indifference or how closely people follow politics.    It is a very black and white question about something that is very complex.   It also does not reflect the views of people that are not going to tell a stranger they support an unpopular leader; people often give pollsters the answer they think is the 'popular' answer, especially when it comes to leaders or issues.

The question about opinion of the leaders worsening, staying the same or getting better is flawed because of the sort of answers it elicits.   People who despised Gordon Campbell a year ago are going to say their opinion worsened even if it has not changed, because answering 'no change' feels like being supportive of someone you dislike.

So why do people take polls so seriously if they are fundamentally flawed in their results?   Most people do not understand statistics or the provenance of data.   People are looking for a scorecard to see who is winning between elections, and polls provide this.   The results tend to be within the realm of the believable and are therefore accepted as accurate.

People tend to believe polls are accurate because there is no easy way to show they are inaccurate.   There is one time when we can see how accurate they are, and that is on election day.   In the 2009 election, three companies released polls in the last five days of the campaign, and not a single one came within the statistical margin of error at the 95% confidence level.   Some came close on individual parties, but not all of them, and none of them came anywhere close to predicting the number of people that would not be voting.

Pollsters should be very, very close to the results on election day.  A given company should be within its margin of error of the final result in 19 elections out of 20.  No one achieves this.   When is the last time you heard a polling company explain what they did wrong in their process that led to them not being close to the result on election day?   Pollsters need to be able to explain the sources of error in their polls and how much this adds to their margins of error.
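The "19 elections out of 20" test is easy to state precisely: an election-day result should fall inside the poll's stated margin of error about 95% of the time. A small sketch of that check, using made-up numbers for illustration (these are not the actual 2009 figures):

```python
import math

def within_moe(poll_pct: float, actual_pct: float, n: int, z: float = 1.96) -> bool:
    """Did the election-day result land inside the poll's 95% margin of error?"""
    p = poll_pct / 100.0
    moe = z * math.sqrt(p * (1 - p) / n) * 100  # in percentage points
    return abs(poll_pct - actual_pct) <= moe

# Hypothetical final-week poll of 800 respondents putting a party at 42%:
print(within_moe(poll_pct=42.0, actual_pct=44.0, n=800))  # inside the margin
print(within_moe(poll_pct=42.0, actual_pct=46.0, n=800))  # well outside it
```

Run this against every party's number, in every final-week poll, over many elections, and a pollster whose methods are sound should fail it only about one time in twenty.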

Polling as it is currently done is not an accurate measure of public opinion; it is at best a coarse approximation of it.   Political polls are a fun diversion, but the way they are currently conducted, they should not be headline news or considered an accurate way to predict an election.

An example of political junkies not always believing the polls happens with respect to the Greens.   Many people assume that the results the pollsters get for the Greens are wrong, but they never question what the pollster is doing wrong for this to occur.   If the Green numbers are wrong and pollsters cannot get that right, why should any of the rest be right?