They’re gonna use AI to detect the use of AI.
Ouroboros
Not if I use AI to hide my use of AI first!
Fun fact: this loop is kinda how one family of generative ML algorithms works. It's called a Generative Adversarial Network, or GAN.
You have a so-called Generator neural network G that generates something (usually images) from random noise, and a Discriminator neural network D that takes images (or whatever you're generating) as input and outputs whether they're real or fake (not as a binary verdict, but as a continuous score). D is trained on images from G, which it should classify as fake, and on real images from a dataset, which it should classify as real. G is trained to generate images from random noise vectors that fool D into thinking they're real. Since D is, like most neural networks, essentially just a mathematical function, you can use derivatives to compute how to adjust a generated image so D scores it as more real.
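To illustrate that last bit, here's a toy sketch (everything here is made up for illustration, not a real model): a "discriminator" that's just logistic regression on a 3-number "image", plus gradient ascent on its output to nudge an input toward looking "real":

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy "discriminator": logistic regression with made-up weights.
w = np.array([0.8, -0.4, 0.5])
c = -0.2

def D(x):
    # Continuous realness score in (0, 1), not a binary verdict.
    return sigmoid(np.dot(w, x) + c)

# A "generated image" that D currently finds fake-ish.
x = np.array([-1.0, 1.0, -0.5])
print("before:", D(x))

# The gradient of log D(x) with respect to the input is (1 - D(x)) * w,
# so repeatedly nudging x along that direction makes D score it as more real.
for _ in range(50):
    x += 0.1 * (1.0 - D(x)) * w

print("after:", D(x))
```

This is exactly "D is just a function, so differentiate it": the same trick, scaled up to a deep network and done through backprop, is how G gets its training signal.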
In the perfect case these two networks battle until they reach peak performance. In practice you usually need to do some extra shit to keep the whole situation from crashing and burning. What often happens, for instance, is that D becomes so good that it stops providing useful feedback: it scores the generated images as 100% fake, so there's no longer an obvious direction in which to alter a generated image to make it seem more real.
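And here's a toy version of the full adversarial loop, with everything shrunk to 1-D so it fits in a comment (all made up: the "dataset" is samples from a Gaussian, the generator is just an affine map, the discriminator is logistic regression, and the gradients are derived by hand):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

REAL_MEAN = 4.0   # "real data" is just samples from N(4, 1)
a, b = 1.0, 0.0   # generator: g(z) = a*z + b, with noise z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    z = rng.normal(size=64)
    real = rng.normal(REAL_MEAN, 1.0, size=64)
    fake = a * z + b

    # --- discriminator step: push D(real) up and D(fake) down ---
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

    # --- generator step (non-saturating loss): push D(fake) up ---
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

print("generated mean ~", b)  # should have drifted toward REAL_MEAN
```

Note the generator update: when D scores the fakes as near-100% fake on a scale like this, the `(1 - df) * w` terms shrink toward whatever tiny gradient is left, which is the "no useful feedback" failure mode in miniature.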
Sorry for the infodump :3
They are going to implement an AI detector detector detector.
Trace busta busta?
This one
It’s detectors all the way down.
But of course, as a tortoise, you would know that.
Did I seduce you a lot with my hard, polished shell? You can speak freely, nobody will ever know!
No, it was your intense, Gowron-like stare that truly drilled into my heart.
Glory to seduction! Glory to the empire!
But then they’ll implement an AI detector detector deflector.
Spoken like a true AI.
Well, at least the AI-based AI detector isn't actively making creative people's work disappear into a sea of gen-AI "art".
There are good and bad use cases for AI, and I consider this a better one than generating art. Now the question is whether it's actually feasible to detect AI this way.
Indeed.
I have an Immich instance running on my home server that backs up my and my wife’s photos. It’s like an open source Google Photos.
One of its features is a local AI model that recognises faces and tags them with names, as well as doing stuff like recognising when a picture is of a landscape, food, etc.
Likewise, Firefox has a really good offline translation feature that runs locally and is open source.
AI doesn't have to be bad. Big tech and venture capital are just choosing to make it so.
Seems that way: