1. Standard member DeepThought
    Losing the Thread
    Quarantined World
    Joined
    27 Oct '04
    Moves
    87415
    27 Mar '16 17:24
    Microsoft's chatterbox was lured into supporting genocide by a clever piece of hacking - just by getting its learning algorithm to accept hostile input as training material. Microsoft aren't happy, but personally I think this is good news, as they'll learn far more from that than from having it appear to work first time. Some day there will be independent AIs carrying out military missions or controlling nuclear power stations, and it's far better that this type of problem emerges before it matters rather than after the nuclear defence AI decides that Russia needs to be obliterated because Putin used the word "afterthought"...

    http://www.bbc.co.uk/news/technology-35902104
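
    Tay's actual pipeline isn't public, so the following is only a toy sketch of the general failure mode being described: a bot that folds raw user messages straight back into its response pool with no filtering, so a coordinated group of users can poison almost everything it says. All names in the sketch are made up for illustration.

        # Hypothetical sketch, not Tay's real design: a chatbot that
        # "learns" by accepting every user message as future response
        # material. The unfiltered append is the vulnerability.
        import random

        class NaiveChatBot:
            def __init__(self):
                # Small seed corpus the bot ships with.
                self.corpus = ["Hello!", "Nice weather today.", "Tell me more."]

            def respond(self, message: str) -> str:
                # "Learning" step: the incoming message is accepted
                # verbatim as future response material.
                self.corpus.append(message)
                return random.choice(self.corpus)

        bot = NaiveChatBot()
        # Fifty coordinated messages now swamp the 3-line seed corpus,
        # so most replies are drawn from attacker-supplied text.
        for _ in range(50):
            bot.respond("something awful")
        print(bot.respond("How are you?"))  # very likely "something awful"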
  2. Cape Town
    Joined
    14 Apr '05
    Moves
    52945
    27 Mar '16 20:33
    Although I fully agree that such issues are better discovered sooner rather than later, I do not think this particular instance will have any bearing whatsoever on AIs used for control.

    It is interesting nonetheless. The AI in question brought out the bad side of people. I have noticed that some people, when given the opportunity to behave badly without consequences, jump at the chance. I recall an instance when some children I know discovered that a younger child believed everything she was told (or at least acted like she did). They immediately started telling her the most outrageous lies just because they could. I have seen adults behave the same way with children, i.e. tell outrageous lies because they believe there are no consequences for doing so. The same phenomenon is apparent on the internet, where anonymity seems to bring out the worst in some people.

    On a related note, an AI that is designed to talk about the things you want to talk about is not necessarily the best thing. Our best human interactions actually happen when we meet and talk to people we disagree with. An AI that tries to be like us will tend to make us worse and lead us down undesirable paths.

    I see a similar problem with media that tries to pander to our whims and ends up being not only highly biased but also not all that interesting or of good quality. It tends to react to the loudest voices and the most obvious reactions, which aren't necessarily the ones that should be watched for. I also see trends in the media where insiders start to believe that something is what people want to watch, so they push it more and more even when that isn't actually the case. The advent of YouTube and other internet media has demonstrated that people's tastes are far different from what the mainstream offline media apparently thought for decades.
  3. Joined
    31 May '06
    Moves
    1795
    27 Mar '16 20:50
    The truth is that AIs have not got to the point of understanding anything.

    Which means that everything this chat-bot said was completely without meaning or intent. It was simply trying to parrot real conversations without any attempt to understand the meaning behind them, which is why it had this entertaining and spectacular failure.

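    To make the "parroting" point concrete, here is a toy illustration (my own sketch, nothing to do with the real bot's internals): a bigram Markov chain that recombines words from its training chat logs. It can emit fluent-looking text while having no representation of meaning or intent at all.

        # Toy bigram Markov chain: maps each word to the words observed
        # to follow it, then generates text by a random walk over that map.
        import random
        from collections import defaultdict

        def train(sentences):
            chain = defaultdict(list)
            for s in sentences:
                words = s.split()
                for a, b in zip(words, words[1:]):
                    chain[a].append(b)
            return chain

        def babble(chain, start, length=8):
            word, out = start, [start]
            for _ in range(length):
                followers = chain.get(word)
                if not followers:
                    break
                word = random.choice(followers)
                out.append(word)
            return " ".join(out)

        logs = ["i love chatting with you", "you love pizza", "i love pizza too"]
        print(babble(train(logs), "i"))  # fluent recombination, zero understanding
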
    I have, however, found myself deeply irritated by all the media coverage that anthropomorphised the AI to the extent that all the stories talked as if the AI meant what it said or had a clue what it was talking about.
