🚨 New in PNAS Nexus 🚨 We connect research on misinformation & harmful language:
- More harmful language in tweets with low-quality news links (β = 0.1) and in false headlines (β = 0.19)
- Users who share more misinfo use more harmful language in non-news tweets (β = 0.13)
academic.oup.com/pnasnexus/ar... w/ @mmosleh.bsky.social
Misinfo & harmful language are both problematic, but have largely been studied independently. They could be connected:
- hateful/toxic posts may use inaccurate or misleading claims about their targets to insult or belittle them
- posts that seek to mislead may use harmful language as a persuasion tool
But they might also be independent:
- false claims need not involve harmful language
- one can insult and denigrate targets without making inaccurate claims
So we wanted to empirically investigate whether they are actually related.
We study 8.6 million posts from 6,832 Twitter users:
- classifiers identify harmful language
- news-domain quality scores for shared URLs measure information quality
We also analyze 14k true and false headlines (as evaluated by professional fact-checkers).
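A minimal sketch of this kind of analysis, with invented data: link each tweet's shared news domain to a quality rating, flag harmful language with a classifier, then compare per-user averages. All domain names, quality scores, and flags below are hypothetical illustrations, not the paper's actual data or pipeline.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical domain-quality ratings (0 = lowest, 1 = highest); invented values.
domain_quality = {"reliablenews.example": 0.9, "junknews.example": 0.2}

# Toy tweets: (user, linked news domain, harmful-language flag from a classifier).
tweets = [
    ("u1", "reliablenews.example", 0),
    ("u1", "reliablenews.example", 1),
    ("u1", "junknews.example", 0),
    ("u2", "junknews.example", 1),
    ("u2", "junknews.example", 1),
    ("u2", "reliablenews.example", 0),
]

# Aggregate per user: average quality of shared domains vs. harmful-language rate.
quality_by_user = defaultdict(list)
harm_by_user = defaultdict(list)
for user, domain, harmful in tweets:
    quality_by_user[user].append(domain_quality[domain])
    harm_by_user[user].append(harmful)

for user in sorted(quality_by_user):
    q = mean(quality_by_user[user])
    h = mean(harm_by_user[user])
    print(f"{user}: mean domain quality={q:.2f}, harmful-language rate={h:.2f}")
```

In this toy data the user who shares lower-quality domains also posts harmful language more often, i.e. the user-level association the thread reports.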
RESULTS
- Tweets with links to lower-quality news domains are more likely to contain harmful language
- False headlines are more likely to contain harmful language than true headlines
- Users who share more low-quality links use more harmful language, even in non-news posts
CONCLUSION
- Misinformation & harmful language *are* related in important ways, but not so strongly that harmful language is a useful diagnostic of information quality
- This shows opportunities to integrate largely disconnected strands of research and to understand the psychological connections between them
Our dataset of 14k headlines is available online: osf.io/q5h49/