Lots of discussion of polls and polling. Lots of good questions, but also lots of misconceptions. E.g., it’s been a very long time (over a decade) since telephone polls excluded cellphones.
But there are more polls being done in different ways than at any time in history, and it’s legitimately difficult to be an informed consumer of polling data these days.
I am a pollster and researcher focused on accuracy and bias in polls, and part of my job is to help people understand and make sense of polling data. So if you have questions about polls or polling, feel free to tag me and I will try to answer them.
I know my answer to this, but curious about yours: when evaluating an election poll (and let's assume you are blind to the organization sponsoring and/or executing it), what in the methods statement carries the most (positive or negative) weight in your evaluation of the poll?
Positive weight: probably details on weighting (e.g. what variables they weighted on and especially the source of weighting benchmarks). Negative weight: referring to a sample as "nationally representative" in the absence of the aforementioned weighting details.
Personally, I look for sampling from, or matching back to, a voter file. This is specifically for election surveys; I realize that general population surveys are more desirable for other kinds of work. In the absence of a voter file I’m looking for a good frame that can be used for weighting.
What I think our two approaches have in common is a greater emphasis on weighting methods than on mode when evaluating survey quality.
Yeah, more generally I'd say that details on the sample source (e.g. voter file, RDD, etc.) are another big positive weight. It's hard to know if any given poll is going to be accurate, but I have more faith in polls that detail **why** we should think the sample is representative.
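To make the weighting details discussed above concrete, here is a minimal sketch of raking (iterative proportional fitting), one common way to weight a sample toward population benchmarks. Everything here is hypothetical: the variables, the benchmark shares, and the toy sample are invented for illustration, not taken from any real poll.

```python
import pandas as pd

# Toy sample: each row is a respondent (hypothetical data).
sample = pd.DataFrame({
    "age":  ["18-44", "45+", "45+", "18-44", "45+", "45+"],
    "educ": ["college", "college", "no_college", "no_college", "college", "no_college"],
})

# Assumed population benchmarks (in practice, e.g. Census or voter-file figures).
targets = {
    "age":  {"18-44": 0.45, "45+": 0.55},
    "educ": {"college": 0.35, "no_college": 0.65},
}

# Raking: repeatedly rescale each group's weights until the weighted sample
# margins match the benchmark shares on every weighting variable.
weights = pd.Series(1.0, index=sample.index)
for _ in range(50):
    for var, shares in targets.items():
        for category, share in shares.items():
            mask = sample[var] == category
            current = weights[mask].sum() / weights.sum()
            weights[mask] *= share / current

print((weights / weights.mean()).round(2))  # relative weight per respondent
```

This is also why the source of the benchmarks matters so much: the weights are only as good as the target shares they are raked toward.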
This is not super-convincing. People will register to vote (and conversely voter files will be purged closer to the election, just in case), and the current copies of the voter files available from vendors are at least a few months old.
So 90%, 94%, who knows: accurate, but neither necessary nor sufficient. Of course, there is also a great deal of marketing data that you often get with the voter records and can use to weight the data.
It's certainly true that with same-day voter registration, the voter file does not necessarily cover all of the potential electorate. But the value of vote history and primary vote history as weighting variables, in my mind, offsets those small coverage errors.
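As a rough illustration of what "matching back to a voter file" can look like, the sketch below joins respondents to voter-file records and pulls vote history for use as a weighting variable. The join keys, columns, and rows are all made up; real-world matching is probabilistic and far more involved.

```python
import pandas as pd

# Hypothetical poll respondents.
respondents = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Lee"],
    "zip":  ["12345", "67890", "24680"],
    "candidate_pref": ["X", "Y", "X"],
})

# Hypothetical voter-file records with vote-history fields.
voter_file = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "zip":  ["12345", "67890"],
    "voted_2020": [True, False],
    "voted_primary_2022": [True, False],
})

# Exact match on name + zip, for illustration only.
matched = respondents.merge(voter_file, on=["name", "zip"], how="left")
print(matched)  # C. Lee has no match, e.g. a same-day registrant

# The vote-history columns can now serve as weighting variables, e.g. raking
# the sample so its share of 2020 voters matches the known electorate share.
```

Note the unmatched respondent: that is the coverage gap discussed above, which vote-history weighting is argued to offset.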
Some sampling/recruitment statements make me cry, “why are you doing this to me?” Like, “we combined landline IVR with an opt-in online panel.”
How should we best respond to people who say "ignore the polls" when they see results they don't like?
I'm not sure you have to? It's hard to argue with motivated reasoning. But otherwise, it seems like it would really depend on the particulars of the situation.
What is the historical accuracy of polls correctly predicting past presidential election outcomes 6, 5, or 4 months in advance of that election?
Not very accurate. The best source on this is Jennings and Wlezien (2018), who looked at polls from 351 elections in 45 countries between 1942 and 2017. This is the key graph showing that it's only in the final month or so that polls start to become predictive of the outcome.
[Graph from Jennings and Wlezien (2018): polls become predictive of the election outcome only in the final month or so before the election.]
I should add that this definitely does NOT mean that polls don't accurately reflect the state of affairs six months out. Just that things change over time, and what people say they plan to do in six months is often different from what they ultimately end up doing.
When we see poll numbers shift, how do we know it’s not just different groups of people being willing to respond to the polls? When news for one party is better, could it be that they’re just more enthusiastic to answer polls? Do we know polls are measuring changes in public sentiment?
I think in the immediate aftermath of a news event, this is definitely a risk. People's attitudes also take some time to stabilize after these kinds of events. Generally, I wouldn't read much into short-term changes; I'd focus on longer-term trends over weeks/months.
Thank you for taking the time to answer my question!
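A tiny illustration of the advice above about favoring longer-term trends: smooth poll toplines with a rolling average so that no single poll, or short-term swing in who responds, dominates the picture. The dates and numbers below are made up.

```python
import pandas as pd

# Hypothetical poll toplines for one candidate over six weeks.
polls = pd.DataFrame({
    "date": pd.to_datetime(["2024-05-01", "2024-05-08", "2024-05-12",
                            "2024-05-20", "2024-06-02", "2024-06-15"]),
    "candidate_a": [48, 51, 47, 49, 50, 48],
}).set_index("date")

# A 30-day rolling mean emphasizes the longer-term trend over any one poll.
trend = polls["candidate_a"].rolling("30D").mean()
print(trend)
```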
So a lot of us have projected our revulsion toward suspicious spam calls and the like onto actual poll respondents, deeming them irreversibly weird. How much of a problem is it really?
As in assuming that people who answer polls must be weird because they’re the type of people who answer spam calls?
Most polls are actually online now, so that specifically is not really an issue for them. The larger issue is that people willing to take polls (of any kind) tend to be more civically engaged: more likely to vote, volunteer, etc.
The extent to which this is a problem depends on whether what you're studying is related to civic engagement. E.g. if you were doing a survey about airport noise in various areas (a real survey!), the civically engaged folks probably have similar opinions to everyone else.
Or if you only want to study voters (as in many election polls), the fact that nonvoters aren't in your polls may not be such a problem. For most of what I work on, it's a big issue, though. We try to correct it by weighting down registered voters and volunteers, although that doesn't fix everything.
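As a minimal sketch of that correction, under made-up shares: groups overrepresented among respondents (like registered voters) receive weights below 1, and underrepresented groups receive weights above 1. The 70/30 and 85/15 splits below are placeholders, not real benchmarks.

```python
# Assumed benchmark shares (in practice, e.g. from a voter file or the Census).
population_share = {"registered": 0.70, "not_registered": 0.30}
# Observed shares among poll respondents (hypothetical).
sample_share = {"registered": 0.85, "not_registered": 0.15}

# Weight = population share / sample share for each group.
weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)  # registered ~0.82 (weighted down), not_registered 2.0 (weighted up)
```

As noted above, this rebalances known dimensions like registration but doesn't fix everything: respondents may still differ from nonrespondents in ways the weights don't capture.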