Artificial Intelligence

Debates

shavixmir
Lord

Sewers of Holland

Joined
31 Jan 04
Moves
89787
Clock
16 Mar 23

In the Netherlands, at any level of government, the use of AI has to be ethically assessed and recorded in a register.

It’s an attempt to make it more accountable and transparent.

But the more I read about it and learn about it, the coming levels of AI scare the living bejesus out of me.
Adding AI to the already tedious business of data mining, and looking at the many examples where there is no human intervention in decision-making, this looks like a right royal fukk-up in the making.

Here’s a BBC article on it:
https://www.bbc.com/news/world-us-canada-64967627

Just an example.
You could look to sci-fi, say 2001: A Space Odyssey, to see what the current generation is already capable of: deciding that human intervention has a negative impact on achieving goals (think driverless cars).

Add to that endless amounts of data…

I don’t know. Something tells me that the coming 10 years are going to turn into a nightmare.

What do you think should be done?

k
Flexible

The wrong side of 60

Joined
22 Dec 11
Moves
37304
Clock
16 Mar 23

@shavixmir said
In the Netherlands, at any level of government, the use of AI has to be ethically assessed and recorded in a register.

It’s an attempt to make it more accountable and transparent.

But the more I read about it and learn about it, the coming levels of AI scare the living bejesus out of me.
Adding AI to the already tedious business of data mining and looking at the many examples ...[text shortened]... me that the coming 10 years are going to turn into a nightmare.

What do you think should be done?
I’m gonna spend as much time getting stoned in my recliner as possible and just go with the data flow. I don’t think AI could do a worse job than its predecessor.

moonbus
Über-Nerd (emeritus)

Joined
31 May 12
Moves
8703
Clock
16 Mar 23
1 edit

@shavixmir

As with almost all technological inventions, there are advantages and dangers. With the exception of atom bombs and biological weapons, most technology is morally neutral, but it often magnifies the motives, for good and for evil, of the humans who put the technology to use.

My wife works on the cutting edge of neuroradiology. There is now the fascinating ability to scan living human brains in realtime and decipher what people are thinking. This has vast potential, for both good and evil. On the good side, it can help to establish whether people in a coma are still in there, in a so-called locked-in state where they are conscious and can hear and understand, but cannot move even their eyeballs. Such people can be put into a scanner, the doctors can speak to them: "If you hear me and understand, think 'yes'". A certain part of the brain lights up, if they are aware, if there is a functioning consciousness. Then the doctor says something like "If your wife's name is not Mildred, think 'no.'" Of course, the medical records show that the wife's name is Hannah. Once the doctors know which parts of the brain light up for 'yes' and 'no', they can communicate, albeit on a very primitive level: "Do you know where you are?" "Are you in pain?" "Do you want us to shut down the life support machines and let you die?" etc.

If there is no response to any questions, the doctors know they have a beating-heart cadaver, not a person in locked-in state, and they can safely turn off the life support machines and start looking for organ-donor recipients.

This is useful technology.

But the CIA will someday try to weaponize this and use it as a sort of lie detector or, worse, as a means to coerce answers out of people who refuse to talk. Pure evil. Like a hideous torture device from a Star Trek episode.

The tech itself is neutral. It's people who use it for good or evil.

AI will be no different. It is already being employed to analyze huge datasets for early signs of dementia, before symptoms are present. This may help to find ways, treatments, drugs, dietary changes, etc., to prevent dementia, rather than trying to halt it once it sets in.

There is no chance of stopping technological innovation. But there has to be political and legal oversight. The difficulty is, most politicians are not tech-savvy enough to understand the technical issues, to understand where to draw the lines, what sorts of research should be pursued (analyzing large medical datasets) and what not (eugenics and cloning, for example). Moreover, as the tech advances, the lines have to be re-drawn, over and over, as we unlock deeper and deeper realms of micro-biology.

My two cents.

shavixmir
Lord

Sewers of Holland

Joined
31 Jan 04
Moves
89787
Clock
17 Mar 23

@moonbus said
@shavixmir

As with almost all technological inventions, there are advantages and dangers. With the exception of atom bombs and biological weapons, most technology is morally neutral, but it often magnifies the motives, for good and for evil, of the humans who put the technology to use.

My wife works on the cutting edge of neuroradiology. There is now the fascinating abi ...[text shortened]... be re-drawn, over and over, as we unlock deeper and deeper realms of micro-biology.

My two cents.
Good post. Interesting!

shavixmir
Lord

Sewers of Holland

Joined
31 Jan 04
Moves
89787
Clock
17 Mar 23

@moonbus said
@shavixmir

As with almost all technological inventions, there are advantages and dangers. With the exception of atom bombs and biological weapons, most technology is morally neutral, but it often magnifies the motives, for good and for evil, of the humans who put the technology to use.

My wife works on the cutting edge of neuroradiology. There is now the fascinating abi ...[text shortened]... be re-drawn, over and over, as we unlock deeper and deeper realms of micro-biology.

My two cents.
Thought about it.
The difference between AI and most technological advances, like nuclear weapons and neuroradiology, is that AI can teach itself (sometimes even shifting its instructed parameters). And within that context it can reach conclusions which wouldn’t necessarily benefit humanity, even if that was the prime goal.

moonbus
Über-Nerd (emeritus)

Joined
31 May 12
Moves
8703
Clock
17 Mar 23
3 edits

@shavixmir said
Thought about it.
The difference between AI and most technological advances, like nuclear weapons and neuroradiology, is that AI can teach itself (sometimes even shifting its instructed parameters). And within that context it can reach conclusions which wouldn’t necessarily benefit humanity, even if that was the prime goal.
I am very much in favor of having expert human opinion test the conclusions reached by pure AI analysis. I read Game Changer with great interest; that's the book by a GM and an IM on AlphaZero's spectacular win over Stockfish. The human analysis is worth the price of the book; without it the book would have been dull as a bag of dirt. AlphaZero's games themselves are marvels to play through but AlphaZero can't explain the strategies behind the moves, only humans can do that.

And another thing: although AlphaZero reached such a level that not even the best human player could match it under tournament time controls, it did not actually discover any new principles of strategy previously unknown to humans. It independently rediscovered all the same principles Tarrasch and Lasker and Alekhine and Nimzowitsch and Fischer and Carlsen know and knew -- put rooks behind passed pawns, knights are better in closed positions, bishops are better in open ones, etc. etc. -- but it combines them far more effectively than humans do.

vivify
rain

Joined
08 Mar 11
Moves
12456
Clock
17 Mar 23
1 edit

@shavixmir said
Thought about it.
The difference between AI and most technological advances, like nuclear weapons and neuroradiology, is that AI can teach itself (sometimes even shifting its instructed parameters). And within that context it can reach conclusions which wouldn’t necessarily benefit humanity, even if that was the prime goal.
The issue is whether or not an AI is programmed to write its own code; such AI does exist.

Merely being programmed to learn is nothing to worry about; what matters is how the AI is programmed to act on that information and whether it was programmed to write its own code. AI can't do anything it's not programmed to do, no matter how much information it collects, even if it collects information on how to write its own code.

If an AI merely collects information to return more accurate answers to queries, that's not a problem, since it can't do anything beyond that. If you give such an AI the ability to write its own code or scripts, that's a cause for concern.
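The distinction above can be made concrete with a toy sketch (illustrative Python only, not the code of any real AI system; both class names are invented for the example): a learner that only adjusts internal parameters stays bounded by its original program, whereas a system allowed to execute code it generates is not.

```python
# A learner that only updates an internal parameter: no matter how much
# data it sees, its possible actions are fixed by its original program.
class ParameterLearner:
    def __init__(self) -> None:
        self.weight = 0.0

    def learn(self, observation: float) -> None:
        # Nudge the internal parameter toward the observation.
        self.weight += 0.1 * (observation - self.weight)

    def act(self) -> float:
        # Its only action: report its current estimate.
        return self.weight


# A system permitted to execute code it generates: its behavior is no
# longer bounded by what was originally written -- the step flagged
# above as the real cause for concern.
class SelfModifier:
    def __init__(self) -> None:
        self.behavior = lambda: 0.0

    def rewrite(self, new_source: str) -> None:
        # exec() installs brand-new behavior at runtime.
        namespace: dict = {}
        exec(new_source, namespace)
        self.behavior = namespace["act"]

    def act(self) -> float:
        return self.behavior()


if __name__ == "__main__":
    learner = ParameterLearner()
    for obs in [1.0, 1.0, 1.0]:
        learner.learn(obs)
    print(learner.act())  # creeps toward 1.0; nothing else is possible

    robot = SelfModifier()
    robot.rewrite("def act():\n    return 42.0")  # behavior replaced wholesale
    print(robot.act())
```

The first object can only ever report a number; the second can be made to do anything expressible in the code handed to it, which is why "learning" and "self-rewriting" deserve very different levels of worry.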

moonbus
Über-Nerd (emeritus)

Joined
31 May 12
Moves
8703
Clock
17 Mar 23

@vivify

For example, if one set the program to optimize sustainable material resources for the next 500 years, and its conclusion was to eliminate h. saps from the biosphere, I would be inclined to recalibrate some of the parameters. 😆

moonbus
Über-Nerd (emeritus)

Joined
31 May 12
Moves
8703
Clock
19 Mar 23

@shavixmir

Link to an interview, summarizing the potential and the risks:

https://edition.cnn.com/2023/03/18/politics/ai-chatgpt-racist-what-matters
