Conference CFP: Rethinking the Inevitability of AI: The Environmental and Social Impacts of Computing in Historical Context
April 22, 2024: 300-500 word paper proposals due
July 18, 2024: Date of online conference
Submit your fabulous work!
uva.theopenscholar.com/rethinking-t...
“Academics, public policy workers, journalists, and community activists.” Hmm… not practitioners? I continue to read academics and activists who think they are ahead of practitioners on social impact topics. In the area of racial justice, I think not.
Could be wrong, but there is significant dissatisfaction with academic work, public policy, and journalism among practitioners who see the history of anti-Black racism as a guide to understanding what’s happening now.
I mean people working in the tech industry, specifically those who work in AI but who don’t represent official company lines and whose points of view differ from those of their companies. I want to tell a story about anti-Black and anti-race-mixing racism and trace through lines to where we are in AI Ethics.
My interest in this also comes from professional experience, and I’d consider us eligible participants even if the categories don’t match exactly. Somewhere between the policy worker and the community activist.
I agree they’re ahead of companies, but ahead of practitioners, some of them Black or Black-adjacent, who see the issues and social impact? I think not. I hope you’d welcome a historical perspective on that, with through lines to why AI doesn’t have plausible answers on systemic racism and systemic sexism.
Completely agree, & definitely learn so much from them—especially whistleblowers like Anika Collier Navaroli, Timnit Gebru & many others. One issue is they’re often not at liberty to talk publicly until leaving the industry, but if any wanted to speak at this conference I would be really stoked.
When I say practitioners, I’m not necessarily talking only about researchers or lawyers. I’m talking about rank-and-file people who notice the relationship of AI to disparities and want ethical statements, work products, and outcomes to live up to aspirational corporate impact statements.
I don’t know exactly what to suggest to modify your call, but I would propose something that invites the kind of contribution we are discussing without people being afraid of what might be said, and without inviting a bunch of corporate fluff that guts your purpose. Thanks
Thanks so much! I will circulate this to a few people, and you will be receiving at least one proposal. Feedback welcome, as it will likely involve a combination of academic researchers and rank-and-file people who, in my opinion, are awake to actual impact and loyally critical of some of the answers.
Great to know. Margaret Mitchell would be another. You’ve identified a really important structural factor. That said, some companies have social media and non-retaliation policies.
I’m not saying you’re doing this, but I often read the conceit, among many academics, journalists, and activists, that they are ahead on some of the social impacts of AI. I work with a lot of incredibly well-studied Black practitioners, among others, and I both agree and disagree.