Old-School Polling Methods Hid Trump’s Popularity With Voters
Traditional data sampling and analysis methods that worked well for decades suddenly failed during the 2016 presidential election
Phone polling annoys many of the people it reaches. That annoyance can mean that people who are called don’t answer, and that those who do answer may give responses intended to get the caller off the phone rather than truthful ones.
Meanwhile, the advent of caller ID means that people tend to screen their calls and not answer numbers they don’t recognize. On top of that, voters in the prized 18-to-35 age group are highly likely to use wireless phones rather than landlines, so phone polls frequently miss them.
Of course, people lie to pollsters. One pollster for CBS recently revealed that his company had run an experiment to see whether the same people gave different answers in polls conducted online versus on the phone.
Hidden response
They did. The findings revealed an apparent reluctance among college-educated respondents to admit, when speaking to a human, that they planned to vote for Trump, even though they would admit it when responding to a machine.
While it’s tempting to treat social media sentiment analysis as a substitute for polling, it’s no panacea either. The sample is automatically skewed to include only people who are on social media, so it’s not a random sample. And, of course, people lie on social media too, because they’re embarrassed to tell their friends whom they’re supporting.
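Survey researchers often try to correct a known skew like this with post-stratification weighting, reweighting respondents so that each demographic group counts in proportion to the population. Here is a minimal sketch in Python; the group names and shares are invented for illustration, not taken from any real poll:

```python
# Post-stratification sketch: reweight a skewed sample so that each
# demographic group counts in proportion to the population.
# All group shares below are invented for illustration.

population_share = {"18-35": 0.30, "36-55": 0.35, "56+": 0.35}
sample_share = {"18-35": 0.55, "36-55": 0.30, "56+": 0.15}  # social media skews young

# Weight = how much each group is under- or over-represented in the sample.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Each response is (group, support), where support is 1 if the
# respondent backs the candidate and 0 otherwise.
responses = [("18-35", 1), ("18-35", 0), ("36-55", 1), ("56+", 1)]

weighted_support = sum(weights[g] * s for g, s in responses) / sum(
    weights[g] for g, _ in responses
)
print(f"Raw support:      {sum(s for _, s in responses) / len(responses):.2f}")
print(f"Weighted support: {weighted_support:.2f}")
```

Weighting can repair a known, measurable skew, but it does nothing about the deeper problem here: respondents who simply aren’t truthful.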
And then there are the social media bots, which manufacture apparent enthusiasm that may not reflect the actual sentiment of real voters. While social media analysis companies are learning to filter out bot traffic, bot developers are working just as hard to make that filtering less effective.
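As a rough illustration of that cat-and-mouse game, a bot filter might start from simple behavioral heuristics. The fields and thresholds below are invented for this sketch; production systems rely on far richer signals, such as network structure, content similarity, and machine-learned classifiers:

```python
# A minimal heuristic bot filter of the kind the article alludes to.
# All thresholds are illustrative guesses, not values from any real system.

from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int

def looks_like_bot(a: Account) -> bool:
    if a.posts_per_day > 100:  # inhuman posting rate
        return True
    if a.account_age_days < 7 and a.posts_per_day > 20:  # brand new and hyperactive
        return True
    if a.following > 0 and a.followers / a.following < 0.01:  # follows thousands, followed by almost no one
        return True
    return False

accounts = [Account(150, 300, 10, 5000), Account(3, 900, 400, 350)]
print([looks_like_bot(a) for a in accounts])  # [True, False]
```

Each heuristic is easy for a bot author to evade once it becomes known, which is exactly the arms race described above.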
While presidential elections come around only every four years, plenty of other surveys are fielded for all sorts of purposes: product preference surveys and customer satisfaction surveys, for example. While these don’t suffer from exactly the same problems as political polls, they are subject to the same limitations and sources of bad data.
Data limitations
If you’re planning to field such a survey, make sure you know its limitations before you start, and find ways to interpret the results, which are always inaccurate in one way or another, in light of those limitations. The same is true of social media studies.
And if you’re presented with the results of a survey, you need to know its limitations. Was it a phone survey? How were the respondents selected? How confident can you be that the sample was representative? You also need to review the poll or survey questions themselves for wording that can induce bias. And it may be wise to use more than one survey method.
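On the question of sample size and confidence, the margin of error a pollster reports for a simple random sample follows directly from the sample size under the standard normal approximation. A quick Python sketch; the 95 percent confidence level (z = 1.96) and the sample sizes are illustrative:

```python
# Back-of-the-envelope margin of error for a simple random sample,
# using the normal approximation: z * sqrt(p * (1 - p) / n).

from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the confidence interval; p = 0.5 is the worst case."""
    return z * sqrt(p * (1 - p) / n)

for n in (400, 1000, 2000):
    print(f"n={n:5d}: +/- {margin_of_error(n) * 100:.1f} points")
# n=  400: +/- 4.9 points
# n= 1000: +/- 3.1 points
# n= 2000: +/- 2.2 points
```

Note that this figure captures only sampling error. It says nothing about the coverage and response biases described above, which is why a poll with a small reported margin of error can still be badly wrong.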
None of these steps will necessarily produce an accurate poll, but they will give you a better understanding of the results and of their limitations. They might also encourage you to pair a poll or survey with social media analysis to get a more complete picture.
Originally published on eWeek