Effective CSAM filters are impossible because what CSAM is depends on context

2024-07-09 14:30:03

Automatically tagging or filtering child sexual exploitation materials (CSAM) cannot be both effective and privacy-preserving, regardless of what kind of tech one throws at it, because what is and what is not CSAM is highly dependent on context.

Literally the same photo, bit-by-bit identical, can be an innocent piece of family memorabilia when sent between family members, and a case of CSAM when shared in a child porn group.

The information necessary to tell whether or not a given file is CSAM is simply not available in the file being shared; no technical means can make that determination from the file alone. The current debate about filtering child sexual exploitation materials (CSAM) on end-to-end encrypted messaging services, like all the many previous such debates, mostly ignores this basic point.

Whenever CSAM and the filtering of it become a topic of public debate, a bunch of technical tools inevitably get peddled as a “solution”. These might involve hashing algorithms, machine learning systems (or, as the marketing department would call them, “AI”), or whatever else.
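The core limitation is easy to see in a toy sketch. A cryptographic hash (SHA-256 here, standing in for the content-based fingerprinting such systems rely on) is computed purely from the file's bytes, so a bit-identical photo yields an identical fingerprint in every context. The byte string below is a hypothetical placeholder for image data:

```python
import hashlib

# Placeholder bytes standing in for a real photo; what matters is only
# that both "contexts" below see the exact same byte sequence.
photo_bytes = b"\x89PNG...example image payload..."

# The same file shared in two very different contexts:
fingerprint_in_family_chat = hashlib.sha256(photo_bytes).hexdigest()
fingerprint_in_abuse_group = hashlib.sha256(photo_bytes).hexdigest()

# A hash-matching filter computes its fingerprint from the bytes alone;
# who sent the file, to whom, and why never enter the computation.
assert fingerprint_in_family_chat == fingerprint_in_abuse_group
```

Perceptual hashes and ML classifiers are fuzzier than SHA-256, but they share the same input: the file's content. The contextual information that actually distinguishes the two cases lives outside the file entirely.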
