Five Things to Ponder When Looking at Polls


By Mario Canseco

1) “A snapshot”

A public opinion poll is not predictive of behaviour that will take place a week, a month or a year later. Pollsters track perceptions of residents on a wide range of issues every day, both for clients and the media. The adage of “a poll is a snapshot in time” is definitely true. Elections are increasingly being decided in the final stages of campaigns. Voters may change their minds after taking a survey and before casting their ballot, causing some forecasts to be off the mark.

At Insights West, we have issued 22 correct electoral forecasts on the final day of campaigning. This approach minimizes the possibility of voters changing their mind and allows us to work with the freshest data available.

If a party or candidate is ahead at a specific point in time, it does not mean that the election has been “called” already. Our experience in Alberta perfectly illustrates how quickly party fortunes can fluctuate. In April 2014, 50% of decided voters supported the Wildrose Party. By December 2014, 42% would have cast a ballot for the Progressive Conservatives.

When the electoral campaign was about to start, only 22% of Albertans were willing to vote for the New Democratic Party (NDP). The final poll of the campaign—concluded the day before the election took place—pointed to the New Democrats getting more than 40% of the vote. They did. Would it have been possible to know this a week, a month or a year in advance? Absolutely not.

2) How Data is Collected

The polling industry has gone through several transformations in the way data is collected. Knocking on people’s doors in the 1970s gave way to the telephone in the 1980s. In this century, more pollsters are relying on online panels to collect data, at a time when people in North America are abandoning their landlines and getting a person to discuss issues on the phone is becoming harder.

There has always been some confusion among the public and the media about the nature of online data collection. There is an enormous difference between a survey that any person can click on and participate in (let’s call them “insta-polls”) and a survey conducted on an online panel.

The “insta-polls”—like the ones that appear on websites for media outlets—allow any person to click on a question, without collecting any demographic data or applying weights to ensure the proper representation of groups. These started as fun exercises to gauge the point of view of web visitors, but have sometimes mistakenly been interpreted to be scientific assessments of public sentiment.

Online panels, like the one Insights West relies upon, are carefully recruited to be representative of all components of a specific population. The surveys are then conducted with a sample of online panel members.

In the same way that a pollster in the 1970s designated which doors to knock on (or a pollster in the 1980s chose which telephone numbers to dial), data collection in an online panel requires a selection of people based on census targets. Not every member of an online panel will participate in every survey.
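The quota logic described above can be sketched in a few lines: census shares for each group are converted into the number of interviews required from that group. The regional names and shares below are purely illustrative, not actual census figures.

```python
# Hypothetical sketch: converting census shares into interview quotas
# for an online-panel survey. Regions and shares are invented.

census_shares = {
    "Metro Vancouver": 0.52,
    "Vancouver Island": 0.17,
    "Rest of B.C.": 0.31,
}

sample_size = 800  # typical province-wide sample (see Section 3)

# Each group's quota is its census share of the total sample
quotas = {region: round(share * sample_size)
          for region, share in census_shares.items()}

print(quotas)
# {'Metro Vancouver': 416, 'Vancouver Island': 136, 'Rest of B.C.': 248}
```

In practice a panel firm would track many demographic cells at once (age by gender by region), but the principle is the same: invitations are issued until each cell’s quota is filled, which is why not every panel member participates in every survey.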

3) What is the Margin of Error?

A survey of 800 British Columbians, the industry standard for most province-wide polling, will yield a margin of error of +/- 3.5 percentage points, 19 times out of 20 (the 95 per cent confidence level). In a “horse race” poll, there are always undecided voters. A calculation of the level of support for a political party is based on “decided voters”—respondents to a poll who expressed a preference.

If, for the sake of argument, 300 respondents to a poll of 800 people are undecided, the “decided voter” analysis would be based on 500 interviews. This smaller sample yields a larger margin of error of +/- 4.4 per cent. This means that, in an election in which two contending parties each have the support of 35 per cent of decided voters, support for one of the parties could be as high as 39 per cent and support for the other could be as low as 31 per cent.
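These margins come from the standard worst-case formula for a simple random sample at the 95 per cent confidence level. A minimal sketch, assuming the textbook formula (published figures are often rounded):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error, in percentage points, for a simple
    random sample of size n at the 95% confidence level (z = 1.96).
    p = 0.5 maximizes p*(1-p), giving the most conservative margin."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(800), 1))  # 3.5 -> full sample of 800
print(round(margin_of_error(500), 1))  # 4.4 -> 500 decided voters
```

Note that the margin shrinks only with the square root of the sample size: quadrupling the interviews merely halves the margin, which is part of why very large samples (like the 70,000-person U.S. survey above) can still miss badly if the sample itself is unrepresentative.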

It must be stressed that a larger sample size does not necessarily guarantee precision. The Insights West poll of Americans was the most accurate online survey conducted in the 2016 U.S. Presidential Election (pegging support for the three main candidates within one percentage point of their final “popular vote” result). Meanwhile, a survey of more than 70,000 Americans (which, on paper, should yield a margin of error well under +/- 1.0 per cent) understated the level of support for the Republican Party’s nominee by four percentage points.

4) From General Population to Voters

The type of polling that looks at the sentiments of a city, province or country is based on census targets. It is simple to use this data to figure out how many adults of specific genders, ages and regions must be interviewed in order to achieve a statistically valid sample. Figuring out who will actually cast a ballot on election day adds a layer of complexity.

Respondents to a poll (regardless of methodology) may not be citizens of the country (and therefore, ineligible to vote). Also, while all of us are residents, with ideas and views on policies and government decisions, the turnout rates in provincial elections have never hit the 100% mark.

There are several factors that come into play when looking at a General Population sample and turning it into a sample of voters. One of them is motivation.

In the 2015 federal election, for instance, Canadians aged 18-to-34 voted at a higher rate than they had in previous elections. Insights West’s final poll of British Columbians showed that 35 per cent of decided voters in the province would cast a ballot for the federal Liberal Party. This included 44 per cent of decided voters aged 18-to-34. In the end, the Liberals indeed received 35 per cent of the vote in the province. If our survey had underweighted this group in the final calculation, the results would not have been satisfactory.
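The effect of weighting can be shown with a toy calculation: if a survey under-samples younger respondents, weighting each age group back to its census share changes the topline number. The 44 per cent support figure for the youngest group echoes the example above; every other number below is invented for illustration.

```python
# Illustrative sketch of demographic weighting. Only the 44% support
# figure for 18-34s comes from the article; all other numbers are made up.

# Assumed census share of each age group in the adult population
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Share of respondents actually obtained (younger people under-sampled)
sample_shares = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}

# Observed Liberal support within each group (hypothetical)
support = {"18-34": 0.44, "35-54": 0.33, "55+": 0.31}

# Each respondent's weight is population share / sample share
weights = {g: population_shares[g] / sample_shares[g] for g in support}

# Topline support, with and without weighting
unweighted = sum(sample_shares[g] * support[g] for g in support)
weighted = sum(population_shares[g] * support[g] for g in support)

print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
# unweighted: 34.3%, weighted: 35.6%
```

In this toy example, leaving the under-sampled 18-to-34 group at its raw share understates the party’s support by more than a point; the same logic, applied in reverse, is how a turnout model that overweights older voters can push an estimate off the mark.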

It is not always an issue of placing a higher emphasis on older voters to get a sense of what the true electorate will be in an election. In fact, those who followed this path in 2015—with turnout models that overweighted older voters—ended up having the worst electoral forecasts in the country.

5) Aggregators and Seat Counters

A “horse race” poll offers an opportunity to look at the percentage of voters who will choose a contending party or candidate. Being ahead, or having more support than other choices, does not mean that an election will be won (as exemplified in the 1979 Canadian federal election, the 1996 British Columbia provincial election and the 2000 and 2016 United States presidential elections).

In Canada, we elect individual members of legislative bodies in constituencies. We do not currently have a proportional representation system to ensure that the party with the most votes will get the most seats.

In recent times, aggregators and seat counters have become a fixture of polling analysis. The methodology used by these websites is ultimately based on the results of polls published by public opinion firms, and can be no better than those inputs.

There is no evidence to suggest that aggregating polling results will produce a superior assessment of public opinion, or a reliable indicator of which candidate will win each seat in an election. Aggregators are just as likely to amplify error as they are to reduce it. The best way to assess the accuracy of polling is to review the track record of individual firms, comparing the last survey of voters released before an election with the results of the actual democratic process.