It's common knowledge that many things on the internet now - news articles, spacebook feeds, images, videos - are written or designed by Artificial Intelligence.
So it's become harder and harder to know what's real. If you look at some of the images shown in this article (generated by a computer), the line between real and fake is currently indiscernible to humans. How do we fix it? Let's ask AI to detect itself. Like asking murderers to judge their own trials.
Sorry folks, I hate to sound like Greta Thunberg, but this won't end well.
https://www.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html
@wildgrass said
Let's ask AI to detect itself. Like asking murderers to judge their own trials.

AI is not self-aware, despite how movies and pop culture make it seem. AI has no capacity to deliberately mislead unless it's programmed to do so. No matter how close to sentient humans make them appear, it's all just an illusion. They're just lines of code.
@vivify said
AI is not self-aware, despite how movies and pop culture make it seem. AI has no capacity to deliberately mislead unless it's programmed to do so. No matter how close to sentient humans make them appear, it's all just an illusion. They're just lines of code.

It does deliberately mislead. It's a mimicry program. The whole point of the algorithm is to mimic photography and make art that is realistic enough to avoid detection by its own detection tools.

And the detection tools all disagree with each other, even on real art.

At current levels, I don't know how intelligence communities are able to reliably tell the difference between real and deepfaked photos and audio. Every new tool to discern real from fake kicks off a new round of computation that lets the fakes evade detection.
@wildgrass said
It's common knowledge that many things on the internet now - news articles, spacebook feeds, images, videos - are written or designed by Artificial Intelligence.
So it's become harder and harder to know what's real. If you look at some of the images shown in this article (generated by a computer), the line between real and fake is currently indiscernible to humans. How do we ...[text shortened]... ww.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html

Well, yes, as far as I know it's possible to distinguish between deepfakes (AI-generated or otherwise) and reality. There's always a tell, and it could boil down to a statistical analysis of the color of every pixel in the image to detect the workings of an algorithm.

The problem is that such forensic analyses of deepfakes are still costly because they require expertise, so we have a situation where those without means could be framed and prosecuted by the US's already dodgy criminal justice system on the basis of "evidence" that has been fabricated. It should become standard procedure to analyze any images or videos presented as evidence in a court of law for signs of tampering, manipulation, or outright fabrication. If an image or video can now lie like a human, then it stands to reason that what an image or video "says" should be considered hearsay until it has been properly authenticated.
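The "statistical analysis of the color of every pixel" idea can be sketched in a few lines. This is only a toy illustration of the principle (camera sensors leave per-pixel noise, so an implausibly smooth image is suspicious), not an actual forensic method, and the function names and threshold are invented for the example:

```python
import random

# Toy sketch of "pixel statistics" forensics - illustrative only, not a real
# detector. Premise: camera sensors leave per-pixel noise, so an image whose
# neighbouring pixels are implausibly similar is suspicious.

def residual_variance(img):
    """Mean squared difference between each pixel and its right/down neighbour."""
    h, w = len(img), len(img[0])
    diffs = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                diffs.append(img[y][x] - img[y][x + 1])
            if y + 1 < h:
                diffs.append(img[y][x] - img[y + 1][x])
    return sum(d * d for d in diffs) / len(diffs)

def looks_generated(img, threshold=1.0):
    """Flag an image whose noise residual is implausibly small (threshold is made up)."""
    return residual_variance(img) < threshold

# A "photo" with simulated sensor noise vs. a perfectly flat synthetic patch.
random.seed(0)
photo = [[128 + random.gauss(0, 4) for _ in range(16)] for _ in range(16)]
synthetic = [[128.0] * 16 for _ in range(16)]

print(looks_generated(photo))      # False - plenty of pixel-to-pixel noise
print(looks_generated(synthetic))  # True - suspiciously smooth
```

Of course, real generators learned long ago to add fake grain, which is exactly why any single statistic like this stops working.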
@wildgrass said
It does deliberately mislead. It's a mimicry program. The whole point of the algorithm is to mimic photography and make art that is realistic enough to avoid detection by its own detection tools.

Humans use AI to deceive, but the AI itself doesn't. It just does what you tell it to.
It seemed like you were making a case that AI can make a "conscious" choice to mislead people.
@vivify said
Humans use AI to deceive, but the AI itself doesn't. It just does what you tell it to.
It seemed like you were making a case that AI can make a "conscious" choice to mislead people.

It's been asked to deceive the AI algorithms designed to detect it, and the detection algorithms are being trained on images that may or may not have been generated by AI. That creates a fundamental conflict of interest, right?

The AI artists are succeeding in avoiding detection by the AI police.
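That "conflict of interest" is essentially an adversarial feedback loop. A toy sketch of the arms race, where one made-up "smoothness" statistic stands in for a whole learned detector (all names and numbers here are invented for illustration):

```python
# Toy arms-race sketch - illustrative only. A "detector" thresholds a single
# made-up statistic, and a "generator" keeps adjusting its output until the
# detector no longer flags it. Real systems use learned models on both sides.

def detector(smoothness, threshold=50):
    """Flag an image as AI-generated when it is smoother than real photos."""
    return smoothness > threshold

def generate_until_undetected(smoothness=90, step=5, max_rounds=100):
    """Each round the generator adds a bit of fake 'sensor noise' (lowering
    smoothness) and re-tests against the detector - the feedback loop."""
    rounds = 0
    while detector(smoothness) and rounds < max_rounds:
        smoothness -= step
        rounds += 1
    return smoothness, rounds

final, rounds = generate_until_undetected()
print(final, rounds)  # 50 8 - eight rounds of adaptation and it slips through
```

The point of the sketch: whatever signal the detector relies on becomes exactly the thing the generator optimizes away, so each new detection tool only buys time.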
@soothfast said
Well, yes, as far as I know it's possible to distinguish between deepfakes (AI-generated or otherwise) and reality. There's always a tell, and it could boil down to a statistical analysis of the color of every pixel in the image to detect the workings of an algorithm.
The problem is that such forensic analyses of deepfakes are still costly because they require expertise ...[text shortened]... t what an image or video "says" should be considered hearsay until it has been properly authenticated.

In the NYT article they fed a bunch of real and fake images into several different sophisticated detection tools and found many cases of disagreement about what is real. False positives and false negatives were also inconsistent among the tools. Maybe there's always a tell, but the tools we have don't know what the tells are.
@wildgrass said
In the NYT article they fed a bunch of real and fake images into several different sophisticated detection tools and found many cases of disagreement about what is real. False positives and false negatives were also inconsistent among the tools. Maybe there's always a tell, but the tools we have don't know what the tells are.

I think we're at a point in history when the sophistication of fakery methods has run ahead of authentication techniques. There is a lot of motivation right now (the profit motive especially) to hone methods of image generation and manipulation. I expect the "other side" will make up some ground in the near future, though I'm dubious the free market will do much.

Nonprofits and the open-source coding community may help, but it's likely going to require government grants to speed research and development of forensic tools to detect deepfakes.
@soothfast said
I think we're at a point in history when the sophistication of fakery methods has run ahead of authentication techniques. There is a lot of motivation right now (the profit motive especially) to hone methods of image generation and manipulation. I expect the "other side" will make up some ground in the near future, though I'm dubious the free market will do much.
...[text shortened]... o require government grants to speed research and development of forensic tools to detect deepfakes.

The experts interviewed in the news story were arguing that it's only going to get worse from here, as the machine-learning tools gather more information about how to hide their tracks.
@wildgrass said
The experts interviewed in the news story were arguing that it's only going to get worse from here, as the machine-learning tools gather more information about how to hide their tracks.

Ultimately we will have to stop using CCTV and other photographic evidence in civil and criminal cases without corroborating evidence, until, er, AI catches up with itself 🤔

I'm sure it won't be long before the average person doesn't take any photographic evidence at face value. Given the corrosive effect of a lot of paparazzi-type photojournalism, that's maybe not a bad thing.
@wildgrass said
It's common knowledge that many things on the internet now - news articles, spacebook feeds, images, videos - are written or designed by Artificial Intelligence.
So it's become harder and harder to know what's real. If you look at some of the images shown in this article (generated by a computer), the line between real and fake is currently indiscernible to humans. How do we ...[text shortened]... ww.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html

Hey, bud.
I can't get into the link. I think I used up all my free reads from the NYTimes and they want me to subscribe.
So sad.
@wildgrass said
It's common knowledge that many things on the internet now - news articles, spacebook feeds, images, videos - are written or designed by Artificial Intelligence.
So it's become harder and harder to know what's real. If you look at some of the images shown in this article (generated by a computer), the line between real and fake is currently indiscernible to humans. How do we ...[text shortened]... ww.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html

Sounds like Blade Runner to me.
And I totally agree. This isn't going to end well.
Just look at the human-made misinformation on the internet, especially on social media, and the consequences it's having.
For example: vaccination of children in the Netherlands (against the normal childhood diseases) has dropped below 90%.
If this continues, measles and the like are going to return.
Now, when you add AI into the mixture, we're certainly looking at problematic human choices in the near future. And that's without AI making conscious decisions for us.
@shavixmir said
Sounds like Blade Runner to me.
And I totally agree. This isn't going to end well.
Just look at the human-made misinformation on the internet, especially on social media, and the consequences it's having.
For example: vaccination of children in the Netherlands (against the normal childhood diseases) has dropped below 90%.
If this continues, measles and the like are going to ...[text shortened]... matic human choices in the near future. And that's without AI making conscious decisions for us.

There might be a happy ending. Perhaps an AI will take over the world and become a benevolent dictator of all humanity, and sort us all out compassionately, rationally, and for the greatest good. Clearly humanity can't get its act together on its own.