New research from the University of Waterloo's Cybersecurity and Privacy Institute demonstrates that any artificial intelligence (AI) image watermark can be removed, without the attacker needing to know the design of the watermark, or even whether an image is watermarked to begin with.
There are also privacy issues with having an indelible marker recording the origin and chain of custody of every digital artifact, as well as other, non-privacy issues.
So the idea here is that my phone camera attaches a crypto token to the metadata of every photo it takes? (Or worse, embeds it into the image steganographically, like printer tracking dots.) Then if I send that photo to a friend in Signal, that app attaches a token indicating the transfer? And so on?
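A minimal sketch of what such a chain-of-custody token might look like. This is purely illustrative: the event names, keys, and token format are all made up, and an HMAC stands in for a real digital signature (a deployed scheme would use public-key signatures, e.g. Ed25519, so anyone could verify without holding the signing keys).

```python
import hashlib
import hmac
import json

def sign_event(prev_token: bytes, event: dict, key: bytes) -> bytes:
    """Chain a custody event to the previous token. HMAC-SHA256 is a
    stand-in here for a real digital signature."""
    payload = prev_token + json.dumps(event, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).digest()

# Hypothetical device and app keys, for illustration only.
camera_key = b"camera-device-key"
messenger_key = b"messaging-app-key"

image = b"...raw photo bytes..."
root = hashlib.sha256(image).digest()  # token bound to the image content

# Each handler appends a signed event, chaining off the previous token.
t1 = sign_event(root, {"event": "capture", "device": "phone-cam"}, camera_key)
t2 = sign_event(t1, {"event": "transfer", "app": "Signal"}, messenger_key)

# Recomputing the chain with the same keys and events yields the same
# final token, so any tampering with an earlier step is detectable.
assert t2 == sign_event(t1, {"event": "transfer", "app": "Signal"}, messenger_key)
```

Note that this is exactly the privacy problem: each token in the chain identifies the device or account that produced it, so verifying the video means exposing everyone who handled it.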
If that’s a video of, say, police murdering someone, maybe I don’t want a perfect trail pointing back to me just to prove I didn’t deepfake it. And if that’s where we are, then every video of power being abused is going to “be fake,” because no sane person would sacrifice their privacy, possibly their life, to “prove” a video isn’t AI-generated.
And those in power, say the mainstream media, aren’t going to demonstrate the cryptographic chain of custody for every video they show on the news. They’re going to show whatever they want, then say “it’s legit, trust us!” and most people will.
These are the fundamental issues with crypto that people don’t understand: too much of it is opt-in, it’s unclear to most people what’s actually being proved or protected, and it doesn’t address where trust, authority, and power actually come from.