and the survey says…

The pissing contest the PLP has gotten into with The Royal Gazette over poll results is rather amusing.  The PLP is hell-bent on proving that its poll results are the more accurate and ‘fair’ while condemning The Royal Gazette’s as ‘Voodoo Statistics’.  This while neither has moved to make its full results and methods transparent by having them published by the polling firms involved.  If the PLP wants to be taken seriously it should have Research 2000 openly publish the full poll results, not cherry-picked subsets.  If The Royal Gazette wants to be seen as unbiased it should subsequently request the same of Research.bm.  Allow the people to compare apples to apples, not what you wish to be seen.  Only then can we even get a hint of who’s truly biased.

To take a fair look at these polls, let’s turn to the National Council on Public Polls (NCPP) for some insight into what’s truly considered ‘fair’.  We’ll go through a few of their guidelines to see how things measure up.

Who paid for the poll and why was it done? 

“The important issue for you as a journalist is whether the motive for doing the poll creates such serious doubts about the validity of the results that the numbers should not be publicized.”

“For example, an environmental group trumpets a poll saying the American people support strong measures to protect the environment. That may be true, but the poll was conducted for a group with definite views. That may have swayed the question wording, the timing of the poll, the group interviewed and the order of the questions. You should carefully examine the poll to be certain that it accurately reflects public opinion and does not simply push a single viewpoint.”

The PLP is quick to claim the fairness of its questions, and yet one has to wonder what details are conveniently omitted.  For example, the party reports that for the following question 50% responded PLP, 46% UBP, 1% IND and 3% did not vote.

“Did you vote for the PLP candidate, the UBP candidate or an independent candidate in the December 2007 general election?”

Fair enough, but why is it that 4% of respondents refused to identify their race and 2% refused to identify their age?  It is interesting that the question of how you voted, easily as controversial as asking about racial identity or age, drew no refusals.  Or is it that such information was left out?  Convenient?  Perhaps.  Similarly, such details have not been fully vetted by The Royal Gazette.  If one wants a fair interpretation, both sets of results should be published openly by the polling firms.

How many people were interviewed for the survey?

While it is absolutely true that the more people interviewed in a scientific survey, the smaller the sampling error, other factors may be more important in judging the quality of a survey.

The PLP claims its sample of 600+ respondents is far superior to The Royal Gazette’s 400+.  Taken as a whole, with a typical 95% confidence level and measuring against the same pool of total registered voters, the PLP’s poll produces a reported margin of error of 4% vs. the RG’s 4.9%, as reported by each.  That is a difference of less than one percentage point in the overall +/- accuracy of the two polls, which does not make the PLP’s significantly more accurate than the RG’s when taking respondents as a whole.
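Both reported figures are consistent with the standard margin-of-error formula at a 95% confidence level.  Here is a minimal Python sketch, assuming the usual worst-case 50/50 split and ignoring any weighting or design effects the firms may have applied:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error at ~95% confidence (z = 1.96),
    using the worst-case proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# Reported sample sizes: roughly 600 (PLP / Research 2000) and 400 (RG / Research.bm)
for label, n in [("PLP, n = 600", 600), ("RG, n = 400", 400)]:
    print(f"{label}: +/- {margin_of_error(n):.1%}")
# Prints roughly +/- 4.0% and +/- 4.9%, matching the figures each poll reports.
```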

The PLP makes the case that its higher rate of sampling for 18-34 year olds makes its results far more accurate for this group.  Here the argument carries some weight: again assuming a 95% confidence level and a pool of, let’s guess, 10,000 registered 18-34 year old voters, the RG’s sample of only 31 such voters produces a margin of error of roughly 18%.  What the PLP does not mention, however, is that its own sample of 91 respondents in the 18-34 range still produces a margin of error of roughly 10%.  So while the PLP does have a case that its results for this range are more accurate, they too carry a rather large margin of error.
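Those subgroup figures check out as well.  Below is a sketch of the same calculation with an optional finite population correction for the guessed pool of 10,000 young voters (a guess made in this post, not an official count); at these sample sizes the correction barely moves the numbers:

```python
import math

def subgroup_moe(n, population=None, p=0.5, z=1.96):
    """Margin of error at ~95% confidence for a subsample of size n,
    optionally applying a finite population correction."""
    moe = z * math.sqrt(p * (1 - p) / n)
    if population:
        moe *= math.sqrt((population - n) / (population - 1))
    return moe

# 18-34 year old subsamples, against a guessed pool of 10,000 registered voters
for label, n in [("RG, n = 31", 31), ("PLP, n = 91", 91)]:
    print(f"{label}: +/- {subgroup_moe(n, population=10_000):.1%}")
# Prints roughly +/- 18% and +/- 10%.
```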

How were those people chosen?

The key reason that some polls reflect public opinion accurately and other polls are unscientific junk is how people were chosen to be interviewed. In scientific polls, the pollster uses a specific statistical method for picking respondents.

While the PLP is quick to claim accuracy for its poll on the grounds of matching demographics, what it fails to mention is how respondents were selected.  Similarly, the RG’s methodology is not known.  As has been hammered home over and over again on this blog, you must compare apples to apples for a fair result.  For example, were respondents interviewed via telephone?  Was it via an automated system?  Was the phone list compiled randomly, and were businesses excluded?  Were cell phone owners included, considering the large number of cell-phone-only households these days?  How were individuals who didn’t answer, or refused to answer, handled?
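For illustration, here is a minimal sketch of what probability sampling means in practice: every entry on the sampling frame has an equal, known chance of selection.  The phone list below is entirely hypothetical; how either firm actually built its frame has not been disclosed:

```python
import random

# Hypothetical residential phone list standing in for a sampling frame;
# neither firm has published how its frame was constructed.
phone_list = [f"441-555-{i:04d}" for i in range(10_000)]

random.seed(1)  # fixed seed so the sketch is reproducible
sample = random.sample(phone_list, 600)  # simple random sample of 600 numbers

print(len(sample), sample[:3])
```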

Who should have been interviewed and was not? Or do response rates matter?

In recent years, the percentage of people who respond to polls has diminished. There has been an increase in those who refuse to participate. Some of this is due to the increase in telemarketing and part is due to Caller ID and other technology that allows screening of incoming calls.

How likely were people to be interviewed?  The PLP used a foreign firm, the RG a local one; what was the response rate for each?  Is there a difference between the number of people the firms attempted to contact and the number who actually responded, depending on the source of the calls?

How were the interviews conducted?

There are four main possibilities: in person, by telephone, online or by mail. Most surveys are conducted by telephone, with the calls made by interviewers from a central location. However, some surveys are still conducted by sending interviewers into people’s homes to conduct the interviews.

This kind of information is crucial to comparing the two polls.  Both the PLP and the RG should be asking their respective firms to publish their results and methods in full.

What other kinds of factors can skew poll results?

Question phrasing and question order are also likely sources of flaws. Inadequate interviewer training and supervision, data processing errors and other operational problems can also introduce errors. Professional polling operations are less subject to these problems than volunteer-conducted polls, which are usually less trustworthy.   Be particularly careful of polls conducted by untrained and unsupervised college students.  There have been several cases where the results were at least in part reported by the students without conducting any survey at all.

What were the questions, the phrasings and the order in each survey?  Neither has been made public.  Furthermore, why is it that, when the margins of error of the overall samples suggest the overall results should be similar, they are not?  The PLP’s poll suggests much higher favourability for both Premier Brown and Opposition Leader Swan.  Why?  One or both of these polls are likely to be inaccurate, but without both being made public to ensure we are comparing apples to apples, it is impossible to say which.
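As a rough sanity check, one can ask whether the gap between two polls’ figures is larger than their combined sampling error can explain.  The favourability numbers below are purely illustrative, since neither poll’s full figures have been published:

```python
import math

def gap_exceeds_sampling_error(p1, n1, p2, n2, z=1.96):
    """Rough check: is the gap between two poll proportions larger than the
    combined sampling error at ~95% confidence?"""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p1 - p2) > z * se

# Illustrative numbers only -- the actual favourability figures from either
# poll have not been published in full.
print(gap_exceeds_sampling_error(0.60, 600, 0.48, 400))
# True: a 12-point gap is well beyond what sampling error alone would explain.
```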


4 thoughts on “and the survey says…”

  1. excellent article Dennis, today’s news that “Deodorant Dale Butler” is the most popular MP proves your thesis.
    Butler is one of the most money-grubbing, self-promoting clowns of all the MPs;
    if 65% of whites value him more than all UBP MPs, they are drinking the same kool-aid as EB’s fools in paradise

  2. If a political party wants to commission its own surveys and polls, that’s fine. But it’s bogus of them to claim that it’s fair and unbiased when they make certain attributes public, thereby fueling the propaganda machine, while not revealing the full untampered results.
    So, whatever. Let both parties brag and boast about how certain poll results show that they’re on the right track and the opponent is floundering.
