you can reweight this stuff if you assume that non-response is uncorrelated with how people would answer the poll. but if it's correlated, then you're measuring your instrument, not the thing you're trying to measure. i don't know how you fix that with phone polls.
I'm being 100% serious here. Scientists have to deal with systematics in their instrumentation all the time. This is what instrument calibration is. I have literally no idea how you can calibrate correlations in polling non-responses by conducting another poll.
Like, it wouldn't be that hard to figure out what kind of asymmetry in polling responses you would need to turn a dead-even race into, say, a five point margin. This would actually be useful for understanding the polling results.
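That back-of-the-envelope calculation is simple enough to write down. A minimal sketch (my own setup, not anyone's actual methodology): assume a truly 50/50 electorate where one side's supporters answer the phone at rate r_A and the other's at r_B, so the observed share for side A is r_A / (r_A + r_B).

```python
def response_ratio_for_margin(margin):
    """Relative response-rate asymmetry (r_A / r_B) needed to turn a
    truly 50/50 race into an observed margin of `margin` points.

    With a 50/50 electorate, observed A-share = r_A / (r_A + r_B),
    so a margin of m points means an observed share of (50 + m/2)%.
    """
    share_a = 0.5 + margin / 200  # 5-pt margin -> 52.5% observed A-share
    return share_a / (1 - share_a)

# a five point observed margin in a dead-even race needs only about a
# 10.5% higher response rate among one side's supporters:
print(response_ratio_for_margin(5))
```

The striking part is how small the asymmetry is: a ratio of roughly 1.105 is enough, which is why an unmeasurable correlation between preference and non-response is so dangerous.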
like, if you can measure that asymmetry somehow, you could reweight things and it would be a reasonable approach. but, unlike things like "how many african americans are there in the united states?" where we have census data and suchlike, i just don't know how you measure the asymmetry.
And polls partially indicated that asymmetry in 2022, which was not reflected in the actual 2022 voting.
But I think the traditional method is to calibrate to election results, and they probably are trying.
You don't. You're obliged to turn to a different metric, like election results, but that's not going to give you a reliable gauge given how few elections there are to calibrate against.
right! so maybe you could ask yourself "given normal historical variability between elections, how far removed are my crosstabs from that normal variability?" but that doesn't fix the underlying problem, it just tells you how weird your crosstabs are.
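that "how weird are my crosstabs" check is basically a z-score against historical swings. a rough sketch, with made-up illustrative numbers for the election-to-election swing in one demographic crosstab:

```python
import statistics

# hypothetical historical election-to-election swings (pct points) in one
# demographic crosstab; values are illustrative, not real data
historical_swings = [1.2, -0.8, 2.1, -1.5, 0.4, 1.9, -2.2]
current_swing = 6.0  # the swing the poll's crosstab implies this cycle

mu = statistics.mean(historical_swings)
sigma = statistics.stdev(historical_swings)
z = (current_swing - mu) / sigma
# a large |z| says the crosstab is historically weird, but it can't tell
# you whether the electorate actually changed or the poll is just biased
```

which is the limitation stated above: the z-score quantifies weirdness, it doesn't diagnose the cause.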
I’m gonna shout “I can weight my way out of anything!” from now on whenever my wife gives me that look she reserves for when I order a meal for myself that is meant to serve 6