Samantha Lai


@samlai.bsky.social

Senior research analyst at the Carnegie Endowment for International Peace.
Reposted by Samantha Lai
This story is horrifying on multiple levels. (1) It very likely cost lives in the Philippines and around the world, including here in the US due to diminished trust in vaccines. (2) When democratic governments employ this kind of info operation, they undermine the values and trust that sustain democracies.
Reposted by Samantha Lai
Pleased to join my SAIS colleagues Thomas Rid, Olga Belogolova, and Lee Foster in imploring 🇺🇲 politicians, journalists, researchers, and tech companies to tone down their propaganda hype 👇
Don’t Hype the Disinformation Threat (www.foreignaffairs.com): Underplaying the risk helps foreign propagandists—but so does exaggerating it.
Reposted by Samantha Lai
This is the beginning of something I've been working on for a while - if you read it, answer the question at the end. It's like a choose-your-own-adventure, except in this instance, I'll be deciding what to do next: lageneralista.com/the-informat...
The Information Animal (lageneralista.com)
Reposted by Samantha Lai
ICYMI: As the Senate considers a TikTok ban and major newspapers recycle hype about 🇨🇳/🇷🇺 influence ops, I argue that online manipulation is an overestimated threat. Treating "data" as the key to human suasion, and the public as an empty vessel, are likely far graver threats to democracy 👇
From Panic to Policy: The Limits of Foreign Propaganda and the Foundations of an Effective Response - Texas National Security Review (tnsr.org): American leaders and scholars have long feared the prospect that hostile foreign powers could subvert democracy by spreading false, misleading, and inflammatory information by using various media. Dra...
Reposted by Samantha Lai
In which I argue that fears of foreign subversion online are likely overblown--and that the damage to democracy is more likely to stem from the assumption that online manipulation is more prevalent, our neighbors more gullible, and human behavior more dependent on media than is likely the case. A🧵:
From Panic to Policy: The Limits of Foreign Propaganda and the Foundations of an Effective Response - Texas National Security Review (tnsr.org): American leaders and scholars have long feared the prospect that hostile foreign powers could subvert democracy by spreading false, misleading, and inflammatory information by using various media. Dra...
Policymakers from the White House to the United Nations have turned to “information integrity” as the approach for improving their information ecosystems. But how can it be applied in the context of democracies? Read more here from @kamyayadav.bsky.social and me:
What Does Information Integrity Mean for Democracies? (www.lawfaremedia.org): Disinformation is only a symptom of a much larger problem.
Reposted by Samantha Lai
As democracies grapple with how to combat disinformation, Kamya Yadav and @samlai.bsky.social ask how policymakers can build secure and resilient information ecosystems:
What Does Information Integrity Mean for Democracies? (www.lawfaremedia.org): Disinformation is only a symptom of a much larger problem.
Decentralized social media platforms face significant challenges to robust and scalable governance. @yoyoel.com and I explore what it will take to change that: Out now in the Journal of Online Trust & Safety: tsjournal.org/index.php/jo...
Securing Federated Platforms | Journal of Online Trust and Safety (tsjournal.org)
Reposted by Samantha Lai
New paper from me and @samlai.bsky.social out today in the Journal of Online Trust and Safety, looking at the moderation capabilities of federated and decentralized social media, and what needs to happen to make these platforms resilient to online harms. doi.org/10.54501/jot...
Reposted by Samantha Lai
How can democracies fight disinformation? A new report by Jon Bateman and Dean Jackson assesses the effectiveness of 10 proposals, including supporting journalism, fact-checking, and media literacy education. Read more here: carnegieendowment.org/2024/01/31/c...
Countering Disinformation Effectively: An Evidence-Based Policy Guide (carnegieendowment.org): A high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation.
Reposted by Samantha Lai
Delighted to see this paper published at HKS Misinformation Review with Gillian Murphy (not yet on Bluesky!) and a fantastic group of postdocs and RAs. This was a huge review examining all misinfo studies published since 2016: misinforeview.hks.harvard.edu/article/what...
Reposted by Samantha Lai
“Calls grow louder to regulate artificial intelligence, counter disinformation, and social media. But how can democracies govern the information environment if they don’t know how it affects people’s thinking and behavior?” More from @lageneralista.bsky.social: www.oecd-forum.org/amp/posts/se...
The OECD Forum Network (www.oecd-forum.org)
Reposted by Samantha Lai
GW IDDP launched a new project tracking platform transparency globally and attempting to measure the Brussels effect of new EU transparency policies. We hope this will be a tool for advocates and researchers. (Thanks, techpolicypress and @justinhendrix.bsky.social)
Platform Researcher Access Tools & The Brussels Effect (techpolicy.press): Anna Lenhart on the launch of a new Platform Researcher Access Tools & The Brussels Effect Tracker
Reposted by Samantha Lai
Do we pay too much attention to disinformation? On the latest Tech Policy Podcast, we're joined by @lageneralista.bsky.social of Carnegie Endowment to discuss how to study and better understand the overall information environment. Full episode below: podcast.techfreedom.org/episodes/358...
Reposted by Samantha Lai
It's impossible to condense over 3 decades of research and writing about trust and safety and internet governance into a 14-week class... but here's a first draft of the syllabus and reading list for my course next semester: docs.google.com/document/d/1... I'd love your feedback and suggestions.
As summer winds down, here is a recap of what the Partnership for Countering Influence Operations has been up to in 2023:
Reposted by Samantha Lai
Key findings:
- Skewed evidence base: >80% of studies from the Global North; 36% test debunking
- Inoculation & debunking reduce misinfo belief; other evidence more limited/mixed
- Interventions that experts rate highest — media literacy, journalist training, and platform alterations — among least studied
Our new literature review essay commissioned by USAID: Interventions to Counter Misinformation: Lessons From the Global North and Applications to the Global South https://pdf.usaid.gov/pdf_docs/PA0215JW.pdf (joint with Rob Blair, Jessica Gottlieb, Laura Paler, Pablo Argote and Charlene Stainfield)
Reposted by Samantha Lai
This is a fascinating paper about a ChatGPT-based botnet for so many reasons, my favorite of which is: Fancy LLM/bot detectors all struggled to catch this, whereas human review and good old-fashioned social graph analysis revealed it quickly.
You have seen how powerful AI models like ChatGPT are. But what if they are used to supercharge malicious social bots? New paper: Anatomy of an AI-powered malicious social botnet Preprint: https://doi.org/10.48550/arXiv.2307.16336 Thread: https://twitter.com/yang3kc/status/1686751689078976512
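(For readers unfamiliar with the term, here is a minimal, hypothetical sketch of the kind of "social graph analysis" the post above refers to: flagging account clusters whose mutual-follow density is far higher than organic communities. This is not code from the paper; the networkx library, the toy edge list, and the 0.8 threshold are illustrative assumptions.)

```python
# Hypothetical sketch: surface coordinated botnets by looking for
# implausibly dense mutual-follow clusters in the follow graph.
import networkx as nx

# Toy follow graph: an edge (a, b) means "a follows b".
# Real data would come from an API crawl, not a hard-coded list.
follows = [
    ("bot_1", "bot_2"), ("bot_2", "bot_3"), ("bot_3", "bot_1"),
    ("bot_1", "bot_3"), ("bot_2", "bot_1"), ("bot_3", "bot_2"),
    ("alice", "bob"), ("bob", "carol"),
]
G = nx.DiGraph(follows)

# Check each connected cluster: what fraction of possible follow edges exist?
for community in nx.weakly_connected_components(G):
    sub = G.subgraph(community)
    n = sub.number_of_nodes()
    if n < 3:
        continue
    density = sub.number_of_edges() / (n * (n - 1))
    if density > 0.8:  # arbitrary threshold, for illustration only
        print("suspiciously dense cluster:", sorted(community), round(density, 2))
```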
Reposted by Samantha Lai
if you're interested in the legal background of what state medical boards can and can't do, I dug into it here:
This is a pretty devastating report by the Washington Post about the failure/inability of state medical boards to discipline doctors spreading dangerous falsehoods around the coronavirus https://www.washingtonpost.com/health/2023/07/26/covid-misinformation-doctor-discipline/
The Professional Price of Falsehoods (knightcolumbia.org)
Reposted by Samantha Lai
New from me in @lawfare.bsky.social: What we can learn from Twitter's struggles with Chrissy Teigen calling Trump a "pussy ass bitch," and why rigid legalism has led content moderation to a harmful dead end. https://www.lawfaremedia.org/article/content-moderation-s-legalism-problem
Content Moderation’s Legalism Problem (www.lawfaremedia.org): Could a public editor solve social media’s crisis of trust?
Reposted by Samantha Lai
this is an amazing paragraph