In a reply to @kenwhite.bsky.social, @mosseri.bsky.social claimed this was a bug, or due to a safety measure for the domain applied by mistake.
Neither Mosseri nor Meta comms have followed up to explain what happened & why.
Zuckerberg & company know about the Streisand effect. Would be a daft play.
Meta blocked a newspaper’s critical report about it on Facebook and its other social sites for hours, sparking a backlash that intensified after the company appeared to subsequently block links to the website of an independent journalist who republished the report.
Looks like a few domains, including thehandbasket.co, were mistakenly classified as phishing sites, which has since been corrected, as Andy mentioned. Unfortunately, at our scale, we get false positives on safety measures all the time. Apologies for the trouble.
Hi Adam. No, it hasn’t been corrected. My site suddenly started being blocked across the Meta network at the same time as thehandbasket.co, with the same completely false warning about “malware,” after a link was posted to the same article at my site. It’s still blocked right now.
Check his handle.
I find the admission that your scale is the problem interesting, from a regulatory point of view. I mean the logical conclusion of company leadership saying they’re just too big to prevent themselves from constantly abusing people inadvertently is that it should be broken up, no?
Not really. Any major platform that has millions of things posted a day, or even thousands, is going to have to rely on classifiers to make decisions. No classifier is perfect. But if you held every platform to a standard of zero mistakes then there would be no platforms, including this one.
You do realize that this "explanation" is just an admission that your company should be broken up. If it is too big to properly moderate, which you are admitting here, then you are too big to be allowed to exist.
Then you could make the same argument for food companies (and others). No process is perfect and it’s a question of acceptable margin of error. www.cnn.com/2019/10/04/h...
You do realize that food companies are heavily regulated, right? And are supposed to be regularly inspected for compliance? And held to standards strict enough that massive food recalls are often done when things do happen?
Now tell me how much that applies to internet outlets.
There are six vertically integrated food companies and four consolidated meat packing and distribution companies.
Your argument isn't the winner you think it is.
Thank you for reflecting on scale.
What do you aim for, as an acceptable error rate in moderation or false positive rate in flagging spam or manipulation?
How do you balance using algorithmic moderation at scale against retaining consumer trust? Do you whitelist creators with higher engagement?
We aim to minimize prevalence of problems, ideally to a level too low to be able to measure, while minimizing false positives. We don’t exempt large accounts, but we do offer appeals when we take content and accounts down so we can correct mistakes.
Having gone through your appeals process, allow me to say that you are very, very bad at your stated aim, and your redress ability just outright sucks.
But I am not surprised that you all chose the "we're not evil, we're simply incompetent at our whole entire business" line of argument.
Is this why my Facebook feed is full of absolute garbage from people I don’t follow and didn’t ask for?
IDK why anyone cares what you can post to FB; the site is so wretchedly worthless that it makes no sense to ever look at it.
So the film Brazil. You want to make the film Brazil a reality. You don't mind mistakes that could lock innocent people out of their networks for criticizing Meta, as long as it's marginal enough that the media doesn't notice. Is that what you're telling me?
Appreciate you engaging here. There’s a missing piece, though: other sites that share content on your platform don’t have tools for managing trust. You could improve signal and reduce false positives. (We’ve had our platform blocked on IG & FB with no recourse but me reaching out.)
Amazing how every single independent "safety" process and decision here erred on the side of preventing access to information critical of your company. How do you explain this extraordinary coincidence?
What happens when someone is venting (on their own profile, to friends only) about something that happened that hurt them, and the automated system mistakes it for harmful content, then refuses any appeals as the post is permanently deleted without warning?
This has happened to me several times.
So surely at this stage you should be hand-removing the false positives?
You've got the people with the sites right here in the thread.
Surely you're not operating at a scale that you are telling us you're literally unable to manage?
MySpace couldn't seem to keep up with the spam. It seemed users preferred a curated real name only space at that time. Getting bought by an entertainment company probably didn't help.
Then, years later, Instagram seemed to grab a lot of the same demographics without having to force real names.
What should the procedure be to get something like this corrected other than publicly posting about it? On both my personal page and on brand pages I've managed, it's maddening trying to get issues like this fixed with no option of contacting the company directly.