Asking AI to detect itself..

wildgrass

Joined 20 Oct 06 · Moves 9627 · Posted 28 Jun 23 · 3 edits

It's common knowledge that many things on the internet now - news articles, spacebook feeds, images, videos - are written or designed by Artificial Intelligence.

So it's become harder and harder to know what's real. If you look at some of the images shown in this article (generated by a computer), the line between real and fake is currently unknowable to humans. How to fix it? Let's ask AI to detect itself. Like asking murderers to judge their own trials.

Sorry folks, I hate to sound like Greta Thunberg, but this won't end well.

https://www.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html

vivify
rain

Joined 08 Mar 11 · Moves 12456 · Posted 28 Jun 23

@wildgrass said
Let's ask AI to detect itself. Like asking murderers to judge their own trials.
AI is not self-aware, despite how movies and pop culture make it seem. AI has no capacity to deliberately mislead unless programmed to do so. No matter how close to sentient humans make them appear, it's all just an illusion. They're just lines of code.

wildgrass

Joined 20 Oct 06 · Moves 9627 · Posted 28 Jun 23 · 1 edit

@vivify said
AI is not self-aware, despite how movies and pop-culture make it seem. AI has no capacity to deliberately mislead unless programmed to do so. No matter how close to sentient humans make them appear it's all just an illusion. They're just lines of code.
It does deliberately mislead. It's a mimicry program. The whole point of the algorithm is to mimic photography and make art realistic enough to evade its own detection tools.

And the detection tools all disagree with each other, even on real art.

At current levels, I don't know how intelligence communities can reliably tell the difference between real and deepfake photos and audio. Every new tool for discerning real from fake just kicks off another round of training that lets the generators slip past detection again.
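
That cat-and-mouse loop can be sketched in toy form. To be clear, nothing below comes from the article or from any real detector: the "detector" here is just a threshold on a single invented image statistic, and the "generator" nudges its fakes toward whatever the last detector learned.

```python
# Toy adversarial loop (illustration only, not any real tool): a detector
# is refit each round, and the generator then closes the gap it exposed.
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN, REAL_STD = 1.0, 0.1   # invented noise statistic of "real" photos
fake_mean, fake_std = 0.3, 0.1   # fakes start out easy to spot

def fit_detector(real, fake):
    """Refit the detector: a threshold halfway between the two class means."""
    return (real.mean() + fake.mean()) / 2.0

def accuracy(threshold, real, fake):
    """Fraction of samples the threshold classifies correctly."""
    correct = (real > threshold).sum() + (fake <= threshold).sum()
    return correct / (len(real) + len(fake))

for round_no in range(1, 6):
    real = rng.normal(REAL_MEAN, REAL_STD, 1000)
    fake = rng.normal(fake_mean, fake_std, 1000)

    threshold = fit_detector(real, fake)   # detection side updates
    print(f"round {round_no}: accuracy {accuracy(threshold, real, fake):.1%}")

    # generation side responds: close 70% of the remaining gap to "real"
    fake_mean += 0.7 * (REAL_MEAN - fake_mean)
```

Accuracy starts near 100% and decays toward a coin flip as the fake statistic converges on the real one; each new detector round just tells the generator what to imitate next.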

Soothfast
0,1,1,2,3,5,8,13,21,

☯️

Joined 04 Mar 04 · Moves 2709 · Posted 28 Jun 23 · 1 edit

@wildgrass said
It's common knowledge that many things on the internet now - news articles, spacebook feeds, images, videos - are written or designed by Artificial Intelligence.

So it's become harder and harder to know what's real. If you look at some of the images shown in this article (generated by a computer), the line between real and fake is currently unknowable to humans. How to ...[text shortened]... ww.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html
Well, yes, as far as I know it's possible to distinguish between deepfakes (AI-generated or otherwise) and reality. There's always a tell, and it could boil down to a statistical analysis of the color of every pixel in the image to detect the workings of an algorithm.

The problem is that such forensic analyses of deepfakes are still costly because they require expertise, so we have a situation where those without means could be framed and prosecuted by the US's already dodgy criminal justice system on the basis of "evidence" that has been fabricated. It should become standard procedure to examine any images or videos presented as evidence in a court of law for signs of tampering, manipulation, or outright fabrication. If an image or video can now lie like a human, then it stands to reason that what an image or video "says" should be considered hearsay until it's been properly authenticated.
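
For what it's worth, here is a minimal sketch of the kind of per-pixel statistic described above. It's an assumption-laden toy, not a real forensic method: the two "images" are fabricated arrays, and the "tell" is simply that a simulated camera photo carries more high-frequency sensor noise than an unnaturally clean synthetic one.

```python
# Toy pixel-statistics "tell" (illustration only): compare high-frequency
# noise residuals of a simulated camera photo and a simulated clean fake.
import numpy as np

rng = np.random.default_rng(1)

def residual_energy(img):
    """Mean squared difference between each pixel and its 4-neighbour average
    (a crude high-pass filter); the border is skipped to avoid wrap-around."""
    neighbours = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                  np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return float(np.mean((img - neighbours)[1:-1, 1:-1] ** 2))

# Stand-in "camera photo": a smooth scene plus per-pixel sensor noise.
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
scene = 0.5 + 0.3 * np.sin(6 * x) * np.cos(4 * y)
camera_photo = scene + rng.normal(0, 0.02, scene.shape)

# Stand-in "generated image": the same scene, but unnaturally clean.
generated = scene + rng.normal(0, 0.002, scene.shape)

print("camera residual:   ", residual_energy(camera_photo))
print("generated residual:", residual_energy(generated))
```

The simulated camera image shows roughly a hundred times the residual energy of the clean one, which is exactly the sort of statistical gap a generator can learn to close once it knows someone is measuring it.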

vivify
rain

Joined 08 Mar 11 · Moves 12456 · Posted 28 Jun 23

@wildgrass said
It does deliberately mislead. It's a mimicry program. The whole point of the algorithm is to mimic photography and make art that is realistic enough to avoid detection from its own detection tools.
Humans use AI to deceive, but the AI itself doesn't. It just does what you tell it to.

It seemed like you were making a case that AI can make a "conscious" choice to mislead people.

wildgrass

Joined 20 Oct 06 · Moves 9627 · Posted 28 Jun 23 · 1 edit

@vivify said
Humans use AI to deceive but the AI itself doesn't. It just does what you tell it to.

It seemed you making a case that AI can make a "conscious" choice to mislead people.
It's been asked to deceive AI algorithms designed to detect it, and the detection algorithms are being trained using images that may or may not have been generated by AI. That creates a fundamental conflict of interest. Right?

The AI artists are succeeding in avoiding detection by the AI police.

wildgrass

Joined 20 Oct 06 · Moves 9627 · Posted 28 Jun 23

@soothfast said
Well, yes, as far as I know it's possible to distinguish between deepfakes (AI generated or otherwise) and reality. There's always a tell, and it could boil down to a statistical analysis of the color of every pixel in the image to detect the workings of an algorithm.

The problem is that such forensic analyses of deepfakes are still costly because it requires expertis ...[text shortened]... t what an image or video "says" should be considered hearsay until it's been properly authenticated.
In the NYT article they fed a bunch of real and fake images into several different sophisticated detection tools and found many cases of disagreement about what is real. False positives and false negatives were also inconsistent among the tools. Maybe there's always a tell, but the tools we have don't know what the tells are.
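
As a footnote, the kind of disagreement the article describes is easy to tally once you have the verdicts. The numbers below are invented purely for illustration; they are not the article's data and the tools are not real ones.

```python
# Toy tally of false positives, false negatives, and pairwise disagreement
# for three hypothetical detection tools. All verdicts are made up.
from itertools import combinations

truth = {                      # ground truth: True = AI-generated
    "img1": True,  "img2": True,  "img3": False,
    "img4": False, "img5": True,  "img6": False,
}
verdicts = {                   # each tool's call: True = "flagged as AI"
    "tool_a": {"img1": True,  "img2": False, "img3": False,
               "img4": True,  "img5": True,  "img6": False},
    "tool_b": {"img1": True,  "img2": True,  "img3": True,
               "img4": False, "img5": False, "img6": False},
    "tool_c": {"img1": False, "img2": True,  "img3": False,
               "img4": False, "img5": True,  "img6": True},
}

for name, calls in verdicts.items():
    fp = sum(calls[i] and not is_ai for i, is_ai in truth.items())
    fn = sum(not calls[i] and is_ai for i, is_ai in truth.items())
    print(f"{name}: {fp} false positives, {fn} false negatives")

for a, b in combinations(verdicts, 2):
    n = sum(verdicts[a][i] != verdicts[b][i] for i in truth)
    print(f"{a} vs {b}: disagree on {n} of {len(truth)} images")
```

Even in this tiny made-up set, no two tools make the same mistakes, which is the pattern the article reports at a much larger scale.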

Soothfast
0,1,1,2,3,5,8,13,21,

☯️

Joined 04 Mar 04 · Moves 2709 · Posted 28 Jun 23

@wildgrass said
In the NYT article they fed a bunch of real and fake images into several different sophisticated detection tools and found many cases of disagreement about what is real. False positives and false negatives were also inconsistent among the tools. Maybe there's always a tell, but the tools we have don't know what the tells are.
I think we're in a time in history when the sophistication of fakery methods has run ahead of authentication techniques. There is a lot of motivation right now (the profit motive especially) to hone methods of image generation and/or manipulation. I expect the "other side" will make up some ground in the near future, though I am dubious the free market will do much.

Nonprofits and the open source coding community may help, but it's likely going to require government grants to speed research and development of forensic tools to detect deepfakes.

wildgrass

Joined 20 Oct 06 · Moves 9627 · Posted 28 Jun 23

@soothfast said
I think we're in a time in history when the sophistication of fakery methods has run ahead of authentication techniques. There is a lot of motivation right now (the profit motive especially) to hone methods of image generation and/or manipulation. I expect the "other side" will make up some ground in the near future, though I am dubious the free market will do much.

...[text shortened]... o require government grants to speed research and development of forensic tools to detect deepfakes.
The experts interviewed in the news story were arguing that it's only going to get worse from here as the machine learning tools obtain more information about how to hide their tracks.

k
Flexible

The wrong side of 60

Joined 22 Dec 11 · Moves 37304 · Posted 29 Jun 23

@wildgrass said
The experts interviewed in the news story were arguing that it's only going to get worse from here as the machine learning tools obtain more information about how to hide their tracks.
Ultimately we will have to stop using CCTV and other photographic evidence in civil and criminal cases without corroborating evidence until, er, AI catches up with itself 🤔
I'm sure it won't be long before the average person does not take any photographic evidence at face value. Given the corrosive effect of a lot of paparazzi-type photojournalism, it's maybe not a bad thing.

Earl of Trumps
Pawn Whisperer

My Kingdom for a Pawn

Joined 09 Jan 19 · Moves 20424 · Posted 30 Jun 23

@wildgrass said
It's common knowledge that many things on the internet now - news articles, spacebook feeds, images, videos - are written or designed by Artificial Intelligence.

So it's become harder and harder to know what's real. If you look at some of the images shown in this article (generated by a computer), the line between real and fake is currently unknowable to humans. How to ...[text shortened]... ww.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html
Hey, bud.

I can't get into the link. I think I've used up all my free reads from the NYTimes and they want me to subscribe.

So sad.

shavixmir
Lord

Sewers of Holland

Joined 31 Jan 04 · Moves 89770 · Posted 30 Jun 23 · 1 edit

@wildgrass said
It's common knowledge that many things on the internet now - news articles, spacebook feeds, images, videos - are written or designed by Artificial Intelligence.

So it's become harder and harder to know what's real. If you look at some of the images shown in this article (generated by a computer), the line between real and fake is currently unknowable to humans. How to ...[text shortened]... ww.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html
Sounds like Blade Runner to me.

And I totally agree. This isn’t going to end well.
Just look at the human-made misinformation on the internet, and especially social media, and the consequences that's having.

For example: Vaccination of children in the Netherlands (for the normal childhood diseases) has dropped below 90%.
If this continues, measles and the like are going to return.

Now, when you add AI into the mixture, we're certainly looking at problematic human choices in the near future. And that's without AI making conscious decisions for us.

Soothfast
0,1,1,2,3,5,8,13,21,

☯️

Joined 04 Mar 04 · Moves 2709 · Posted 30 Jun 23

@shavixmir said
Sounds like Bladerunner to me.

And I totally agree. This isn’t going to end well.
Just look at the human made misinformation on the internet and especially social media, and the consequences that’s having.

For example: Vaccination of children in the Netherlands (for the normal child diseases) has dropped below 90%.
If this continues measles and the like are going to ...[text shortened]... matic human choices in the near future. And that’s without AI making conscientious decisions for us.
There might be a happy ending. Perhaps an AI will take over the world and become a benevolent dictator of all humanity, and sort us all out compassionately, rationally, and for the greatest good. Clearly humanity can't get its act together on its own.
