By around 10 p.m. Eastern time on Nov. 8, it was clear that the presidential election results weren't playing out as expected. That first became apparent when states that most poll watchers expected to be easy wins for Democrat Hillary Clinton suddenly became squeakers.
Margins everywhere were narrowing, and soon states regarded as solid for Clinton were being called for Republican Donald J. Trump.
Even though a few western states had yet to close their polls, the chances of a Clinton victory were steadily fading. By the time I gave up and went to bed about 1 a.m. on Nov. 9, it was only a matter of time before Trump was declared the winner in enough states to reach the 270 electoral votes required to become the next U.S. president.
How could the polls have been so wrong? The answer is that the opinion polling community was still following the data-gathering and analysis practices it had used for decades, while the world had changed to the point that those practices were no longer entirely relevant. When I wrote my Oct. 31 column about how social media analysis was showing a much closer race than the most recent polls predicted, it was already apparent that traditional polling was not correctly tracking public opinion.
But until Election Day, it wasn't clear just how far off the polling was. Now that it is, it's important to recognize the pitfalls of traditional polling and to find ways to correct how we measure public sentiment. Social media analysis, however, is not a magic bullet either; it has its own set of flaws.
In election polling there has always been a challenge in finding the right sample and making sure that sample is the right size. A sample that isn't chosen properly will give you incorrect results. A sample that's too small yields a margin of error too large to be useful. A sample that's too large takes too long to process, is too expensive to field, or both.
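To put rough numbers on that sample-size trade-off, here is a minimal sketch of the standard margin-of-error formula for a simple random sample; the function name and the sample sizes are illustrative, but the arithmetic is where the familiar "plus or minus 3 points" figure for a 1,000-person poll comes from:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of 1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))  # → 3.1 (percentage points)

# A smaller poll of 200 respondents is far less precise:
print(round(margin_of_error(200) * 100, 1))   # → 6.9
```

Note that precision improves only with the square root of the sample size, which is why quadrupling a poll's cost merely halves its margin of error.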
Other factors affect polling results after the raw responses have been gathered. Political pollsters, for example, weight results to reflect how likely respondents are to actually vote. One assumption frequently used in such polls is that voters in rural areas are less likely to go to the polls, so their responses receive less weight.
Likewise, people in certain demographic categories are assumed to be more or less likely to vote, depending on the specific demographic. Finally, respondents are assumed to be telling the truth about personal decisions such as voting intent.
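The turnout-weighting step described above can be sketched in a few lines. The choices, the weight values, and the `weighted_poll_result` helper are hypothetical illustrations, not any pollster's actual model:

```python
def weighted_poll_result(responses):
    """responses: list of (choice, turnout_weight) pairs.
    Returns each choice's share of the weighted (projected) vote."""
    totals = {}
    total_weight = 0.0
    for choice, weight in responses:
        totals[choice] = totals.get(choice, 0.0) + weight
        total_weight += weight
    return {choice: w / total_weight for choice, w in totals.items()}

# Hypothetical example: rural respondents down-weighted to 0.6,
# urban respondents weighted at 1.0.
sample = [("A", 1.0), ("A", 0.6), ("B", 1.0), ("B", 0.6), ("B", 0.6)]
print(weighted_poll_result(sample))
```

The danger is visible in the weights themselves: if the turnout assumptions baked into them are wrong, as they apparently were for rural voters in 2016, the "corrected" result can be further from reality than the raw responses.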
Working against the once-reliable responsiveness of polled voters is the dramatic increase in unwanted marketing calls, fraudulent scam calls, donation solicitations and political calls. People have grown wary of answering the phone and have repeatedly asked the Federal Communications Commission and the Federal Trade Commission for relief.
Originally published on eWeek