These are automated detection models.
It would be easier to demand that the platform run those models on all content.
Of course, if the false positive rate is high, that would be a real problem too: since actual CSAM is an extremely small fraction of total content, even a modest false positive rate means the vast majority of flagged items would be innocent.
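To make that concrete, here's a minimal sketch of the base-rate effect. All numbers (prevalence, true/false positive rates) are purely hypothetical, just to show the shape of the problem:

```python
def flag_precision(prevalence: float, tpr: float, fpr: float) -> float:
    """Fraction of flagged items that are true positives (Bayes' rule)."""
    true_flags = prevalence * tpr          # actual hits caught
    false_flags = (1 - prevalence) * fpr   # clean content wrongly flagged
    return true_flags / (true_flags + false_flags)

# Hypothetical: 1 in 1,000,000 uploads is bad, the model catches 99% of it,
# and it wrongly flags 0.1% of clean content.
print(flag_precision(prevalence=1e-6, tpr=0.99, fpr=1e-3))
# ~0.001 -> roughly 99.9% of flags would be false positives.
```

So even a model that sounds accurate on paper can drown reviewers in false alarms when the target is this rare.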
They already have a lot of protections in place. No protection will be 100% effective; it's simply not possible to cover every edge case. Getting the most out of what they have is already significant and makes the platform less attractive to people who want to spread CSAM.