In my opinion, the biggest issue the author points out is that cached material is sometimes retained even after moderator action, which honestly sounds more like a straight-up bug than anything else. Though if I were running an instance, the feds showing up at my door with a warrant because I'd been accidentally distributing CSAM would be my nightmare scenario. And of course jurisdiction plays a part, too: an American user on a Canadian server might see drawn depictions of sexualized minors, think "weird but not illegal," and now the Canadian admin has content that's illegal in Canada cached on their server without knowing it.
IMO the best solution to this is something similar to what Renaud Chaput (Mastodon's resident infra boffin) described in his recent blog post: give admins a way to hand this off to pluggable third-party services. Admins who are worried about this sort of thing can then get some degree of safety via e.g. PhotoDNA, whereas others can take on additional risk and preserve additional privacy. Roughly along the lines of the sketch below.
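To be clear, I have no idea what the actual design looks like; this is just a rough sketch of what a pluggable scanner hook could be. The class names, endpoint, config, and response shape are all made up for illustration.

```python
# Hypothetical pluggable media-scanning hook. Nothing here is Mastodon's
# actual API; names and the response format are invented for this sketch.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ScanResult:
    flagged: bool
    reason: str = ""


class MediaScanner(Protocol):
    def scan(self, media_bytes: bytes) -> ScanResult: ...


class NoopScanner:
    """Default: admin opts out, nothing ever leaves the server."""
    def scan(self, media_bytes: bytes) -> ScanResult:
        return ScanResult(flagged=False)


class ThirdPartyHashScanner:
    """Admin opts in: hand the media off to an external matching service."""
    def __init__(self, endpoint: str, api_key: str):
        self.endpoint = endpoint
        self.api_key = api_key

    def scan(self, media_bytes: bytes) -> ScanResult:
        import requests  # assumed dependency
        resp = requests.post(
            self.endpoint,
            headers={"Authorization": f"Bearer {self.api_key}"},
            data=media_bytes,
            timeout=10,
        )
        resp.raise_for_status()
        # Assumed response shape: {"match": bool, "reason": str}
        body = resp.json()
        return ScanResult(flagged=body.get("match", False),
                          reason=body.get("reason", ""))


def on_media_cached(media_bytes: bytes, scanner: MediaScanner) -> None:
    # Called whenever the instance caches remote media (hypothetical hook).
    result = scanner.scan(media_bytes)
    if result.flagged:
        # Quarantine and notify moderators instead of serving the file.
        print("media quarantined:", result.reason)
```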
All that said: yeah the headline makes it sound like .social is some 8chan-esque hellhole, whereas in reality my feed is 99% German programmers sharing milquetoast political takes.
So the standard approach to this is so-called "perceptual hashing." Cryptographic hashes (SHA-256, etc.) don't really work well here: given a piece of illegal content, that content is likely to still be just as illegal with a single pixel changed – however, it'll have a completely different cryptographic hash. So instead you use a hash function that measures how "similar-looking" two images are, ignoring things like dimensions, color palette, JPEG compression artifacts, etc. This is obviously way fuzzier, and is prone to both false positives and false negatives.
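To make "similar-looking" concrete, here's a toy average-hash ("aHash") in Python using just Pillow. This is not what PhotoDNA or any production system actually does, and the file names are placeholders – it just shows why a one-pixel edit wrecks a SHA-256 digest but barely moves a perceptual hash.

```python
# Toy average-hash sketch. Real perceptual hashes (PhotoDNA, PDQ, pHash)
# are considerably more robust; this only illustrates the idea.
import hashlib
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    # Shrink to a tiny grayscale thumbnail so dimensions, compression
    # artifacts, and fine detail mostly stop mattering.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > avg else 0)
    return bits


def hamming(a: int, b: int) -> int:
    # Number of differing bits; a small distance means "similar-looking".
    return bin(a ^ b).count("1")


def sha256_file(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


# Hypothetical files: an image and a copy with a single pixel tweaked.
# The SHA-256 digests will be completely different, while the aHashes
# typically differ by only a bit or two (often zero).
print(sha256_file("cat.jpg"))
print(sha256_file("cat_one_pixel_changed.jpg"))
print(hamming(average_hash("cat.jpg"),
              average_hash("cat_one_pixel_changed.jpg")))
```

In practice a match is declared when the Hamming distance falls under some threshold, which is exactly where the false positives and false negatives come from.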
Because all this is inherently kinda fuzzy, the exact database of hashes is usually "secret sauce," if you will. If it were public, it would be super easy to circumvent: given an illegal image, you could just keep tweaking it (crop, re-encode, shift the colors) and re-checking it against the public database until it no longer matches.
As a result, even "public" databases are distributed under NDAs and the like. This obviously does not jibe well with an open-source, federated network like Mastodon, and I have my doubts as to how willing the relevant agencies would be to hand their databases to every rando with $5 to spin up a Pleroma instance on a VPS. A public DB might help in some cases, but unfortunately more illegal content is produced every day, so it would be extremely hard to keep up with the bad actors.