Post

Avatar
Guys. I am not a quant person but even I know that pushing a sub sample of a larger phone poll from early in June with a 0.8% response rate, where no one who ignores unlabeled numbers on their cellphone responded, tells you nothing about the electorate. Stop letting bad polls drive your politics.
Avatar
If you're talking about the cognitive capacity poll, it says June 28-29, which is not "early in June".
Avatar
Should have said earlier in June. It was following the same population from a June 17 poll, which they helpfully note at the bottom of the CBS piece.
Avatar
I don't think that correction fixes all the errors in your description "a sub sample of a larger phone poll from early in June with a 0.8% response rate, where no one who ignores unlabeled numbers on their cellphone responded". Where do you see this is a phone poll with a 0.8% response rate?
Avatar
Because that’s the original poll they drew these respondents from. It was like 800,000 calls to get like 10,000 responses
Avatar
Having trouble finding that in the PDF detail for the June 17-21 poll...
Avatar
If you click through to the prior poll that formed the base for this sample group, there are lots and lots of questions being raised about its methods and its question wording. Sorry Jim, but I think your skepticism needs to be directed at the hysterics taking this poll as some sort of election harbinger.
America's new generation gap: Young voters say they'll inherit a more challenging world. But will they vote in it? (www.cbsnews.com): A look at young voters and the 2024 election.
Avatar
I just know that I'm an independent in a swing state (AZ) where other independents younger than I am are, if anything, more concerned than I am about Biden's cognitive capacity, and have been since before the debate. Can you supply a pointer to the methodological criticisms you mention?
Avatar
Two things: 1) vibes and personal observations are anecdotes, not data 2) scroll my feed if you want to know my views, I won't do your homework for you
Avatar
The “weighted to account for response rates” line is hiding a lot of bullshit here, because if this is the poll I’m thinking of, like 80% of the respondents were over 65, so they’re inflating the like 5% of people under 30 who answered into 35% of the poll response
Avatar
Whenever I have seen this language in academic survey work, it means they have an assumption of what the ideal response rate is per demographic, and then they arbitrarily apply multiples to the actual responses to match that assumption. It is a red flag that the survey is vibes, not science.
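The reweighting mechanism described in this reply can be sketched numerically. All counts and target shares below are invented for illustration (they are not from the CBS poll); the effective-sample-size figure uses the standard Kish approximation:

```python
# Toy post-stratification weighting example (all numbers invented):
# a small under-30 group is weighted up to match an assumed population
# share, which inflates its influence and shrinks the effective sample
# size (Kish approximation: n_eff = (sum w)^2 / sum w^2).

respondents = {"under_30": 50, "30_to_64": 150, "65_plus": 800}   # raw counts
targets     = {"under_30": 0.35, "30_to_64": 0.40, "65_plus": 0.25}  # assumed shares

n = sum(respondents.values())  # nominal sample size: 1000

# Weight applied to each respondent in a group: target share / observed share
weights = {g: targets[g] / (respondents[g] / n) for g in respondents}

# Expand to one weight per respondent
per_person = [weights[g] for g, cnt in respondents.items() for _ in range(cnt)]

# Kish effective sample size
n_eff = sum(per_person) ** 2 / sum(w * w for w in per_person)

print(round(weights["under_30"], 2))  # 7.0 -- each under-30 answer counts 7x
print(round(n_eff))                   # 278 -- far below the nominal n = 1000
```

The point of the sketch: once a 5%-of-respondents group is weighted up to a 35% share, each of those respondents counts seven times over, and any noise in that small group is amplified accordingly, which is why the effective sample size falls well below the headline n.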
Avatar
Yep, and now you’re just amplifying statistical noise