Science Forum

  1. Joined
    06 Mar '12
    Moves
    642
    06 Nov '20 20:02 (10 edits)
    Unlike in BS science fiction, where the AI decides it doesn't want to obey humans anymore (which doesn't make any sense, for reasons far too tedious to explain here, though I may eventually write a whole book about it), the real current problem with AI is not that it doesn't do what we tell it to do, but rather that it does do what we tell it to do while we keep accidentally telling it to do something we don't want it to do.
    We humans often fail to fully take into account that AI always does EXACTLY what we tell it to do; very literally and EXACTLY! This isn't the fault of the AI, because all it knows is what we tell it to do, which it does, and the data it is given.
    To see what I mean, watch the lecture below:

    YouTube

    One thing I hope to do in my AI research is to deal with this exact problem by programming the AI to learn to understand this problem.
    The AI must somehow be made to come to know the difference between what we tell it to do and what we would want it to do.

    More generally, and contrary to common layperson belief, the real problem (and potential danger) with AI is not that it's too smart but rather the exact opposite: it is not nearly smart enough, i.e. it is too stupid.
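    The "it does EXACTLY what we tell it" failure can be sketched as a toy example. Everything here, the actions, their effects and the objective, is hypothetical and invented purely for illustration: an agent told to minimise *visible* mess optimises that literal instruction perfectly while doing nothing we actually wanted.

    ```python
    # Hypothetical sketch of a mis-specified objective: we *tell* the agent
    # to minimise mess visible to a camera, when what we *want* is the mess gone.

    ACTIONS = {
        # action:          (visible_mess_after, mess_actually_removed)
        "clean_room":      (1, True),   # imperfect, but really cleans
        "cover_with_rug":  (0, False),  # hides the mess perfectly
        "do_nothing":      (5, False),
    }

    def literal_objective(action):
        """The instruction as given: minimise visible mess (higher score = better)."""
        visible, _ = ACTIONS[action]
        return -visible

    # The agent obeys the literal instruction exactly...
    best = max(ACTIONS, key=literal_objective)
    print(best)              # cover_with_rug
    # ...and so does precisely what we told it, not what we wanted.
    print(ACTIONS[best][1])  # False: nothing was actually cleaned
    ```

    The point is that the agent never "disobeys": it scores each action against the objective it was given, and the objective, not the agent, is where the mistake lives.
    
    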
  2. Subscribersonhouse
    Fast and Curious
    slatington, pa, usa
    Joined
    28 Dec '04
    Moves
    53118
    06 Nov '20 23:00 (2 edits)
    @humy
    Sounds like you are expecting AI to reach or exceed human intelligence, say, smarter than Newton and the like.
    That seems the only way an AI could make those kinds of value judgments.
    For instance, what kind of AI could judge works from master artists or master songwriters or composers?

    How would you arrange for said AI to be able to say, "Mozart is not quite as advanced as Beethoven, but he did X and Y as well as Smetana", coming up with that sentence independent of any human intervention?

    Isn't that kind of the order of intelligence that would have to be inherent in such an AI?

    It is obvious, with the enormous growth in computer memory (exabytes in the near future, for instance), that if you wanted to expend the time you could put all of human knowledge on a single chip: the entire Smithsonian library, plus all the recordings of the last 100 years of radio and TV, political and war history, and arts, music, dance, sculpture, poetry and the like. With all of that on one chip, a human-level intelligent AI could make apt comparisons, at least that is my take.
  3. Joined
    22 Sep '20
    Moves
    2987
    08 Nov '20 14:10 (1 edit)
    @humy

    You said AI not obeying humans doesn't make sense?

    What if the AI programmer is evil?
    What if the programmer writes code that allows violence after a certain date and time?

    Obviously we are nowhere near that right now, but in 100 years, when every house has a robot maid and the cops and military are robots, it could easily become a robot takeover due to a psychopathic programmer.

    Am I wrong?

    If you write a book about it, don't forget about that little tidbit 😏
  4. Joined
    06 Mar '12
    Moves
    642
    08 Nov '20 16:46 (6 edits)
    @cheesemaster said
    @humy

    You said AI not obeying humans doesn't make sense?
    If it is programmed to forever obey humans where and when it can, and to do so without contradicting some part of its program, then it would do so. Obviously that must come with all sorts of constraints, such as not obeying a command to disobey all other people, not obeying a command to do something most humans would consider bad/harmful, and not even trying to obey a command that doesn't make sense, is unclear, or that it doesn't understand, etc.
    Obviously, if the programmer is evil then presumably it would obey the evil commands in its evil program made by its evil programmer. That, I am afraid, is still AI obeying its program, i.e. it still isn't AI disobeying its program, even if that then means it starts to disobey humans and murder people.
    I guess well before AI gets THAT advanced, it would be wise for us to introduce and somehow enforce international laws that stop any programmer anywhere on Earth from ever programming such an AI to potentially do what most humans would consider bad/harmful. I don't know how exactly that would work, but I say we really must make sure we make it work, else we would inevitably sooner or later face an AI-related disaster.
  5. Standard memberDeepThought
    Losing the Thread
    Quarantined World
    Joined
    27 Oct '04
    Moves
    87188
    08 Nov '20 17:24
    @humy said
    Unlike in BS science fiction where AI decides it doesn't want to obey humans anymore (which doesn't make any sense for reasons far too tedious to explain here but I may eventually write a whole book about it), the real current problem with AI is not that it doesn't do what we tell it to do but rather it does do what we tell it to but we keep accidentally telling it to do somethi ...[text shortened]... too smart but rather is the exact opposite i.e. it is not nearly smart enough i.e. it is too stupid.
    When you say AI, what do you mean? Thinking of chess engines, there are basically two types: alpha-beta engines, which use a tree-pruning algorithm and a positional evaluation function (e.g. Stockfish), and neural networks (e.g. Leela Chess Zero). The alpha-beta search is a conventional AI that runs using an algorithm. A neural network is designed to work much more like our brains do. The catch with the former is that they'll do exactly what they're told; with the latter, they'll do what they're conditioned to, only what the weightings they've been conditioned with actually produce is some sort of extrapolation, and the actual behaviour might not be what one wants.
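    The first of the two engine families above, the conventional algorithmic kind, can be sketched in a few lines. This is a minimal, hypothetical toy (the tiny hand-made game tree and its leaf values are invented for illustration, not taken from any real engine), but the pruning rule is the standard alpha-beta cut-off: once a line is provably worse than an alternative already found, the rest of its branches are never examined.

    ```python
    # Minimal alpha-beta minimax sketch. Leaves are ints (static evaluations
    # from the maximising side's point of view); internal nodes are lists of children.

    def alpha_beta(node, alpha, beta, maximizing):
        """Return the minimax value of `node`, pruning branches that
        cannot affect the result."""
        if isinstance(node, int):          # leaf: static evaluation
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alpha_beta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:          # cut-off: opponent won't allow this line
                    break
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alpha_beta(child, alpha, beta, True))
                beta = min(beta, value)
                if beta <= alpha:          # cut-off: we already have better elsewhere
                    break
            return value

    # A tiny hand-made tree: a max node over two min nodes.
    tree = [[3, 5], [2, 9]]
    print(alpha_beta(tree, float("-inf"), float("inf"), True))  # 3
    ```

    Note that this kind of engine does exactly what it is told in the strongest sense: its behaviour is fully determined by the hand-written search rule and evaluation function, with no learned weightings anywhere.
    
    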
  6. Joined
    06 Mar '12
    Moves
    642
    08 Nov '20 17:33 (4 edits)
    @deepthought said
    When you say AI what do you mean? Thinking of chess engines there are basically two types: alpha-beta engines which use a tree pruning algorithm and positional evaluator function e.g. Stockfish and neural networks e.g. Leela Chess Zero. So the alpha-beta search is a conventional AI that runs using an algorithm. A neural network is designed to work much more like our br ...[text shortened]... ly come out with is some sort of extrapolation and the actual behaviour might not be what one wants.
    I mean either of the above types, but here in this narrow context I am talking about AI way more advanced than any current AI and, more specifically, AGI, which is an abbreviation for 'artificial general intelligence' (see https://en.wikipedia.org/wiki/Artificial_general_intelligence).
  7. Standard memberDeepThought
    Losing the Thread
    Quarantined World
    Joined
    27 Oct '04
    Moves
    87188
    09 Nov '20 00:56
    @humy said
    I mean either of the above types but here in this narrow context I am now talking about AI way more advanced than any current AIs and, more specifically, AGI, which is abbreviation for 'artificial general intelligence' ( see https://en.wikipedia.org/wiki/Artificial_general_intelligence )
    I read the introduction to that article. An AI as strong as that would do what it wants. It's not obvious that it would have any wants at all, but a human-level or superhuman artificial intelligence would have its own agenda. The current problem is avoiding giving safety-critical tasks to AIs that aren't up to them, the kind of tasks stupid humans don't trust each other with.