In the Netherlands, at every level of government, the use of AI has to be ethically assessed and recorded in a register.
It's an attempt to make it more accountable and transparent.
But the more I read and learn about it, the more the coming levels of AI scare the living bejesus out of me.
Add AI to the already tedious business of data mining, look at the many examples where there is no human intervention in decision making, and this looks like a right royal fukk-up in the making.
Here's a BBC article on it:
https://www.bbc.com/news/world-us-canada-64967627
Just an example.
You could look to sci-fi, say 2001: A Space Odyssey, to see what the current generation is already capable of: deciding that human intervention has a negative impact on achieving goals (think driverless cars).
Add to that endless amounts of data…
I don’t know. Something tells me that the coming 10 years are going to turn into a nightmare.
What do you think should be done?
@shavixmir said:
In the Netherlands, at every level of government, the use of AI has to be ethically assessed and recorded in a register. ...[text shortened]... What do you think should be done?

I'm gonna spend as much time getting stoned in my recliner as possible and just go with the data flow. I don't think AI could do a worse job than its predecessor.
@shavixmir
As with almost all technological inventions, there are advantages and dangers. With the exception of atom bombs and biological weapons, most technology is morally neutral, but it often magnifies the motives, for good and for evil, of the humans who put the technology to use.
My wife works on the cutting edge of neuroradiology. There is now the fascinating ability to scan living human brains in real time and decipher what people are thinking. This has vast potential, for both good and evil. On the good side, it can help to establish whether people in a coma are still in there, in a so-called locked-in state, where they are conscious and can hear and understand but cannot move even their eyeballs. Such people can be put into a scanner and the doctors can speak to them: "If you hear me and understand, think 'yes'." A certain part of the brain lights up if they are aware, if there is a functioning consciousness. Then the doctor says something like "If your wife's name is not Mildred, think 'no'." Of course, the medical records show that the wife's name is Hannah. Once the doctors know which parts of the brain light up for 'yes' and 'no', they can communicate, albeit on a very primitive level: "Do you know where you are?" "Are you in pain?" "Do you want us to shut down the life support machines and let you die?" etc.
If there is no response to any questions, the doctors know they have a beating-heart cadaver, not a person in locked-in state, and they can safely turn off the life support machines and start looking for organ-donor recipients.
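For the technically inclined, a toy sketch of that calibrate-then-ask loop might look like the following. To be clear, the region names, the activation() stub, and the threshold are all invented for illustration; a real fMRI decoding pipeline is vastly more involved.

```python
# Toy sketch of the yes/no protocol described above. activation() is a
# stand-in for the scanner readout; in reality the signal comes from fMRI,
# not a random number generator, and the analysis is far more involved.
import random

def activation(region):
    # Simulated signal strength for one brain region (hypothetical).
    return random.random()

def calibrate():
    # Known-answer question: "If you hear me and understand, think 'yes'."
    # Whichever candidate region responds more strongly is taken as 'yes'.
    a, b = activation("region_A"), activation("region_B")
    if abs(a - b) < 0.1:
        return None  # no clear response: no evidence of consciousness
    if a > b:
        return {"yes": "region_A", "no": "region_B"}
    return {"yes": "region_B", "no": "region_A"}

def ask(question, mapping):
    # Pose a yes/no question and read off the stronger activation.
    print(question)
    return "yes" if activation(mapping["yes"]) > activation(mapping["no"]) else "no"

mapping = calibrate()
if mapping is None:
    print("No response: likely not a locked-in state.")
else:
    print(ask("Are you in pain?", mapping))
```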
This is useful technology.
But the CIA will someday try to weaponize this and use it as a sort of lie detector or, worse, as a means to coerce answers out of people who refuse to talk. Pure evil. Like a hideous torture device from a Star Trek episode.
The tech itself is neutral. It's people who use it for good or evil.
AI will be no different. It is already being employed to analyze huge datasets for early signs of dementia, before symptoms are present. This may help to find ways (treatments, drugs, dietary changes, etc.) to prevent dementia, rather than trying to halt it once it sets in.
There is no chance of stopping technological innovation. But there has to be political and legal oversight. The difficulty is that most politicians are not tech-savvy enough to understand the technical issues: where to draw the lines, what sorts of research should be pursued (analyzing large medical datasets, say) and what not (eugenics and cloning, for example). Moreover, as the tech advances, the lines have to be re-drawn, over and over, as we unlock deeper and deeper realms of microbiology.
My two cents.
@moonbus said:
As with almost all technological inventions, there are advantages and dangers. ...[text shortened]... My two cents.

Good post. Interesting!
@moonbus said:
As with almost all technological inventions, there are advantages and dangers. ...[text shortened]... My two cents.

Thought about it.
The difference between AI and most technological advances, like nuclear weapons and neuroradiology, is that AI can teach itself (sometimes even shift its instructed parameters). Within that context it can reach conclusions which wouldn't necessarily benefit humanity, even if that was the prime goal.
@shavixmir said:
Thought about it. The difference between AI and most technological advances, like nuclear weapons and neuroradiology, is that AI can teach itself (sometimes even shift its instructed parameters). Within that context it can reach conclusions which wouldn't necessarily benefit humanity, even if that was the prime goal.

I am very much in favor of having expert human opinion test the conclusions reached by pure AI analysis. I read Game Changer with great interest; that's the book by a GM and an IM on AlphaZero's spectacular win over Stockfish. The human analysis is worth the price of the book; without it the book would have been dull as a bag of dirt. AlphaZero's games themselves are marvels to play through, but AlphaZero can't explain the strategies behind the moves; only humans can do that.
And another thing: although AlphaZero reached such a level that not even the best human player could match it under tournament time controls, it did not actually discover any new principles of strategy previously unknown to humans. It independently rediscovered all the same principles Tarrasch and Lasker and Alekhine and Nimzowitsch and Fischer and Carlsen knew and know (put rooks behind passed pawns, knights are better in closed positions, bishops are better in open ones, etc.), but it combines them far more effectively than humans do.
@shavixmir said:
Thought about it. The difference between AI and most technological advances, like nuclear weapons and neuroradiology, is that AI can teach itself (sometimes even shift its instructed parameters). Within that context it can reach conclusions which wouldn't necessarily benefit humanity, even if that was the prime goal.

The issue is whether or not an AI is programmed to write its own code (and such AI does exist).
Merely being programmed to learn is nothing to worry about; what matters is how the AI is programmed to act on that information and whether it was programmed to write its own code. AI can't do anything it's not programmed to do, no matter how much information it collects, even if it collects information on how to write its own code.
If an AI merely collects information to return more accurate answers to queries, that's not a problem, since it can't do anything beyond that. If you give such an AI the ability to write its own code or scripts, that's a cause for concern.
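To make that distinction concrete, here is a minimal, purely hypothetical sketch in Python (query_model is an invented stand-in, not any real AI API): the very same model output is harmless while it is treated as data, and becomes a concern the moment it is handed to an interpreter.

```python
# Hypothetical illustration of the distinction above; query_model is an
# invented stand-in for an AI system, not a real API.

def query_model(prompt: str) -> str:
    # Pretend the model answers by producing a snippet of code.
    return "print('code written by the model')"

answer = query_model("How would you extend your own abilities?")

# Not a problem: the output is only data. The AI has collected information
# on how to write its own code, but that information cannot do anything.
print("Model said:", answer)

# Cause for concern: the output is executed, i.e. the system has been given
# the ability to run, and thereby effectively write, its own code.
exec(answer)  # this single line is the difference being discussed
```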
@shavixmir
Link to an interview summarizing the potential and the risks:
https://edition.cnn.com/2023/03/18/politics/ai-chatgpt-racist-what-matters