I'm not going to litigate the specifics of this situation, but there are some critical lessons here for people who are thinking of running a labeler (and to some extent they're the lessons of T&S in general, but they are even more important given the paradigm of composable moderation).
This thread covers the two fundamental things all labelers need to decide on up front and stick to: 1) Who is doing the moderation, what are their biases, and how are those biases mitigated? 2) Are you moderating/labeling objective actions/content, or subjective characteristics?
Each of these two points has a lot (and I mean A LOT) of nuance. (Like everything having to do with T&S!) Let's start with #1: bias mitigation. People who oppose community-driven moderation are now smugly parading around going "of course anyone who wants to be a mod is biased!"
This is the wrong way to look at it. It's not an inherent problem with community moderation: it's an inherent problem with people. Everyone is biased, in a million different ways. We all have our viewpoints of what we think is good vs bad.
Elon Musk thinks the word "cis" is a slur and should be moderated: that's a bias. I think people who create accounts only to advertise things are spammers and should be moderated: bias. You may think associating a wallet name with an account name is doxing and should be moderated: bias. Etc.
T&S, inherently, is a biased process: it involves someone's definitions of what should and shouldn't be actioned. There is no such thing as neutral, unbiased moderation. Anyone who says otherwise is simply asserting societal prejudices that are declared "objective" because of who holds them.
And, crucially, people don't want moderation to be "unbiased", or to fall back solely on externalities such as "is this content legal". Don't believe me? Look at the months-long Discourse on child safety: most of the content many people very loudly want removed is legal under US law.
What people are calling "bias" here, me included (because it's shorter), is actually better termed "viewpoint". Moderation is a function of viewpoint. You choose a viewpoint lens through which to moderate and apply it to your policies and actions.
The neat thing about Bluesky's experiment in composable moderation (which, as everyone who's been following me for ages knows, I'm still dubious will succeed long-term, but this is *not* the reason why) is that you can pick which viewpoint you want to view the site through.
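To make that "pick your viewpoint" idea concrete, here's a minimal, hypothetical sketch of the composition step; the names and types below are illustrative, not the actual AT Protocol / Bluesky API. Each labeler emits labels, the viewer chooses per-labeler, per-label actions, and the most restrictive action wins:

```typescript
// Hypothetical sketch of composable moderation; none of these names are
// the real AT Protocol / Bluesky API.

type Action = "show" | "warn" | "hide";

// A label some labeling service has applied to a post.
interface Label {
  labeler: string; // identifier of the labeler that emitted the label
  value: string;   // e.g. "spam", "zoophilia"
}

// The viewer's choices: for each labeler they subscribe to, what to do
// with each label value. Anything unconfigured defaults to "show".
type Prefs = Map<string, Map<string, Action>>;

const severity: Record<Action, number> = { show: 0, warn: 1, hide: 2 };

// Look up every label on the post against the viewer's settings for the
// labeler that emitted it; the most restrictive action wins.
function resolveAction(labels: Label[], prefs: Prefs): Action {
  let result: Action = "show";
  for (const label of labels) {
    const action = prefs.get(label.labeler)?.get(label.value) ?? "show";
    if (severity[action] > severity[result]) result = action;
  }
  return result;
}

// Example: setting a hypothetical labeler's "zoophilia" label to Warn,
// as in the Taurus example later in this thread.
const prefs: Prefs = new Map([
  ["did:example:taurus", new Map<string, Action>([["zoophilia", "warn"]])],
]);

resolveAction(
  [{ labeler: "did:example:taurus", value: "zoophilia" }],
  prefs,
); // => "warn"
```

Note that only this composition step is mechanical: everything feeding into it (which labels exist, what they mean, when agents apply them) is the labeler's viewpoint, which is what the rest of the thread is about.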
What people starting up labelers are going to have to do, though, is work out how to ensure the agents doing the work to action reports are going to apply *the labeling service's* viewpoint and not their own. This is an incredibly, incredibly difficult problem.
The fundamental tension here: a labeler with a strong viewpoint built from the (actual or perceived) consensus of a specific group as to what should be moderated will naturally want to draw its agents from members of that group, who have a familiarity with the group's social norms and practices.
This allows contextual interpretation of reported content. Failures of cultural competency result in problems where the members of the group can easily understand why a post should be moderated, but an outsider has no idea and thinks the post is innocuous. This happens *all the time*.
However, members of the group, who will have social connections within the group and have already formed opinions and reads on people in the group, will, always, need to compensate for the human tendency to read charitably when you agree with/like the speaker and uncharitably when you don't.
Let me be very clear here: this is not an individual failing of any specific person. It's fundamental human nature. You can compensate for it when you know the tendency exists, but you can never eliminate it. I do it. You do it. Every moderator ever has done it.
Real example: the Taurus labeler is getting notice with Aegis gone. I knew about it. But that "zoophilia" label had me wondering: is this run by someone who thinks feral art (SFW or not) is zoophilia? 1/2
It's A Discourse someone outside furry wouldn't be privy to; seeing that label, they wouldn't know enabling it might block (for example) Lion King or Balto fanart. I set it to Warn, and so far so good. But it's a viewpoint that could find its way in with the addition of a mod, for example. 2/2
I read this as “compostable moderation” 😅 I wonder how exactly one would compost their moderation. Why are you dubious of the long-term likelihood?
That is a whole separate 150-post thread, heh
Not rahaeli (obv.), but I did moderation on Reddit, whose moderation model parallels the one here, and there are significant flaws with Volunteer Third Party Moderation there, too. My own biggest issue is «What happens when “moderators” are actually chaos agents?», from which arises «Who watches the watchers?»
Imo Reddit could use some composable moderation. A lot of subreddit rules should really be filters. Another thing (which is also true of other platforms) is that (most?) people aren't satisfied with not seeing objectionable content; they want it off the platform entirely.
And to make it harder: sometimes they're right and allowing that content is harmful to the platform as a whole, sometimes they're wrong and allowing that content is fine if you sandbox and label it, and nobody agrees on how to separate the two categories
I hope you’ll make that thread someday, rah
Me too (no pressure, just appreciation!)
that's how I keep the gardening feed in shape
I actually also thought it was compostable and figured that was just rah saying it was garbage, until I eventually read it correctly in a later post.
this is a much funnier reading than my own haha
Haha nice to know I wasn’t the only one, probably because posTing is what we do here
Oh, but that--seems obvious to me? The point of moderation is that you are shaping the community you want, right? It's like bonsai, in a way. ... did-- ... do people not think that's what the point is? o.o
Part of the issue is that a lot of humans from all sorts of viewpoints are here, and we don’t share the same assumptions about what’s appropriate behavior.
Yeah. There's a fundamental tension between "your application of the policy should be as concrete and objective as possible to ensure consistency in application across multiple agents" and "the formation of the policy is inherently subjective".
People say "this moderation is biased", but what they mean is that they perceive the moderation has failed at the former. (Whether or not that perception is accurate is another story.) I think it's very worth making the tension explicit.