Debates Forum

  1. 20 Apr '17 20:24 / 1 edit
    https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals

    "AI programs exhibit racial and gender biases, research reveals:
    Machine learning algorithms are picking up deeply ingrained race and gender
    prejudices concealed within the patterns of language use, scientists say."

    "An artificial intelligence tool that has revolutionised the ability of computers to interpret
    everyday language has been shown to exhibit striking gender and racial biases.
    The findings raise the spectre of existing social inequalities and prejudices
    being reinforced in new and unpredictable ways as an increasing number of
    decisions affecting our everyday lives are ceded to automatons."

    "However, as machines are getting closer to acquiring human-like language
    abilities, they are also absorbing the deeply ingrained biases concealed
    within the patterns of language use, the latest research reveals.

    Joanna Bryson, a computer scientist at the University of Bath and a co-author, said:
    “A lot of people are saying this is showing that AI is prejudiced. No.
    This is showing we’re prejudiced and that AI is learning it.”

    "The latest paper shows that some more troubling implicit biases seen in
    human psychology experiments are also readily acquired by algorithms.
    The words “female” and “woman” were more closely associated with
    arts and humanities occupations and with the home, while “male”
    and “man” were closer to maths and engineering professions.

    And the AI system was more likely to associate European American names
    with pleasant words such as “gift” or “happy”, while African American names
    were more commonly associated with unpleasant words.

    The findings suggest that algorithms have acquired the same biases
    that lead people (in the UK and US, at least) to match pleasant words
    and white faces in implicit association tests.

    These biases can have a profound impact on human behaviour. One previous
    study showed that an identical CV is 50% more likely to result in an interview
    invitation if the candidate’s name is European American than if it is African American.
    The latest results suggest that algorithms, unless explicitly programmed
    to address this, will be riddled with the same social prejudices.

    “If you didn’t believe that there was racism associated with people’s
    names, this shows it’s there,” said Bryson."

    "Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said:
    “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.”"
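
    The implicit-association finding quoted above comes down to comparing cosine similarities between word vectors. A minimal sketch of that measurement (the 3-d vectors and word choices below are made-up toy values purely for illustration; the actual study used pretrained embeddings learned from large web-text corpora):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up 3-d "embeddings" purely for illustration; real studies use
# word vectors trained on billions of words of web text.
vectors = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def association(word, attr_a, attr_b):
    # WEAT-style score: similarity to attribute A minus similarity to B.
    # Positive means the word sits closer to A in the embedding space.
    return (cosine(vectors[word], vectors[attr_a])
            - cosine(vectors[word], vectors[attr_b]))

print(association("flower", "pleasant", "unpleasant"))  # positive
print(association("insect", "pleasant", "unpleasant"))  # negative
```

    The point is that the bias is read directly out of distances in the learned vector space; nobody programmed it in explicitly.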
  2. 20 Apr '17 20:36 / 1 edit
    https://www.theguardian.com/commentisfree/2017/apr/20/robots-racist-sexist-people-machines-ai-language

    "Robots are racist and sexist. Just like the people who created them:
    Machines learn their prejudices in language. It’s not their fault, but we still need to fix the problem."
    --Laurie Penny

    “Machine learning” is a fancy way of saying “finding patterns in data”. Of course, as Lydia
    Nicholas, senior researcher at the innovation thinktank Nesta, explains, all this data “has
    to have been collected in the past, and since society changes, you can end up with
    patterns that reflect the past. If those patterns are used to make decisions that affect
    people’s lives you end up with unacceptable discrimination.”

    Robots have been racist and sexist for as long as the people who created them have been
    racist and sexist, because machines can work only from the information given to them,
    usually by the white, straight men who dominate the fields of technology and robotics."

    "Last year Microsoft created a chatbot, Tay, which could “learn” and develop as it engaged
    with users on social media. Within hours it had pledged allegiance to Hitler and started
    repeating “alt-right” slogans – which is what happens when you give Twitter a baby to raise."

    "There are other frightening futures, however, and one of them is the society where we
    allow the weary bigotries of the past to become written into the source code of the present.

    Machines learn language by gobbling up and digesting huge bodies of all the available writing
    that exists online. What this means is that the voices that dominated the world of literature
    and publishing for centuries – the voices of white, western men – are fossilised into the
    language patterns of the instruments influencing our world today, along with the assumptions
    those men had about people who were different from them. This doesn’t mean robots
    are racist: it means people are racist, and we’re raising robots to reflect our own prejudices.

    "Human beings, after all, learn our own prejudices in a very similar way. We grow up
    understanding the world through the language and stories of previous generations.
    We learn that “men” can mean “all human beings”, but “women” never does – and so we
    learn that to be female is to be other – to be a subclass of person, not the default.
    We learn that when our leaders and parents talk about how a person behaves to their “own people”,
    they sometimes mean “people of the same race” – and so we come to understand that
    people of a different skin tone to us are not part of that “we”. We are given one of two
    pronouns in English – he or she – and so we learn that gender is a person’s defining
    characteristic, and there are no more than two. This is why those of us who are concerned
    with fairness and social justice often work at the level of language – and why when
    people react to having their prejudices confronted, they often complain about “language
    policing”, as if the use of words could ever be separated from the worlds they create.

    Language itself is a pattern for predicting human experience. It does not just describe our
    world – it shapes it too. The encoded bigotries of machine learning systems give us an
    opportunity to see how this works in practice. But human beings, unlike machines, have
    moral faculties – we can rewrite our own patterns of prejudice and privilege, and we should.

    Sometimes we fail to be as fair and just as we would like to be – not because we set out
    to be bigots and bullies, but because we are working from assumptions we have internalised
    about race, gender and social difference. We learn patterns of behaviour based on bad,
    outdated information. That doesn’t make us bad people, but nor does it excuse us from
    responsibility for our behaviour. Algorithms are expected to update their responses
    based on new and better information, and the moral failing occurs when people refuse to
    do the same. If a robot can do it, so can we."

    What an eloquent article by Laurie Penny.

    I have observed that nearly all writers here seem blind--often willfully blind--to their own
    internalized assumptions or prejudices about gender, race, and other differences.
    I hope that artificial intelligence can become less unwilling to learn than these people are.
  3. Subscriber divegeester
    Nice suit...
    20 Apr '17 21:22
    Originally posted by Duchess64
    https://www.theguardian.com/commentisfree/2017/apr/20/robots-racist-sexist-people-machines-ai-language

    "Robots are racist and sexist. Just like the people who created them:
    Machines learn their prejudices in language. It’s not their fault, but we still need to fix the problem."
    --Laurie Penny

    “Machine learning” is a fancy way of saying “finding pa ...[text shortened]... .
    I hope that artificial intelligence can become less unwilling to learn than these people are.
    Read another newspaper.
    And think about something other than sexism or rape.
  4. Subscriber kmax87
    You've got Kevin
    20 Apr '17 21:43
    Originally posted by Duchess64
    https://www.theguardian.com/commentisfree/2017/apr/20/robots-racist-sexist-people-machines-ai-language

    "Robots are racist and sexist. Just like the people who created them:
    Machines learn their prejudices in language. It’s not their fault, but we still need to fix the problem."
    --Laurie Penny

    “Machine learning” is a fancy way of saying “finding pa ...[text shortened]... .
    I hope that artificial intelligence can become less unwilling to learn than these people are.
    Why not rename AI to what it is: MHI, Mimicked Human Intelligence.
    It may not solve the problem, but may go a long way to alert people to the fact of what they are dealing with.
  5. 20 Apr '17 21:58 / 4 edits
    Originally posted by kmax87
    Why not rename AI to what it is: MHI, Mimicked Human Intelligence.
    It may not solve the problem, but may go a long way to alert people to the fact of what they are dealing with.
    Technically speaking, the artificial intelligence in chess engines does *not* attempt to
    imitate how the best human players think. Instead, it takes advantage of the computer's
    strengths (tactical calculation) while minimizing its relative weaknesses (positional understanding).
    So it's inaccurate to say that all artificial intelligence just attempts to imitate human intelligence.
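
    That brute-force tactical calculation can be sketched as a generic fixed-depth minimax search (the move generator and evaluation function below are hypothetical stand-ins for illustration, not any real engine's):

```python
def minimax(state, depth, maximizing, get_moves, apply_move, evaluate):
    """Exhaustive fixed-depth search: try every move and assume
    both sides choose the best reply found."""
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    results = (minimax(apply_move(state, m), depth - 1, not maximizing,
                       get_moves, apply_move, evaluate) for m in moves)
    return max(results) if maximizing else min(results)

# Toy demo (hypothetical game): the state is an integer, each player
# adds 1 or 2 on their turn, the first player wants the final number
# high and the second wants it low.
best = minimax(0, 2, True,
               get_moves=lambda s: [1, 2],
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print(best)  # 3: first player adds 2, second then adds the minimum, 1
```

    Nothing in this loop resembles how a grandmaster thinks; it simply exploits the machine's ability to examine every continuation.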

    I know many people (usually white men) who believe that advancing technology alone will (magically) solve all social problems.
    They apparently believe that people don't have to think about issues like racism or sexism,
    examine their embedded assumptions, or consider changing their attitudes or behavior.
    All racism and sexism must disappear as soon as technology advances far enough.

    I disagree. I don't believe that advanced technology must be intrinsically morally improving.
    Technology is morally neutral. And people put technology to use according to their own
    perceived self-interests and reflecting their own biases. Some advanced technology
    could be used as a more efficient or 'humane' method of genocide. Or advanced genetic
    engineering (creating 'designer babies') could be used to pre-select a 'master race'.

    My point is that it's uncertain whether advanced technology will lead to a utopia, dystopia, or something else.
  6. 21 Apr '17 04:41
    Originally posted by Duchess64
    Technically speaking, the artificial intelligence in chess engines does *not* attempt to
    imitate how the best human players think. Instead, it takes advantage of the computer's
    strengths (tactical calculation) while minimizing its relative weaknesses (positional understanding).
    So it's inaccurate to say that all artificial intelligence just attempts to ...[text shortened]... t it's uncertain whether advanced technology will lead to a utopia, dystopia, or something else.
    I posted a thread a while back about implicit biases. It was pretty interesting, I thought.

    With CRISPR, we will be able to have designer babies soon enough. It kind of scares me.
  7. Subscriber kmax87
    You've got Kevin
    21 Apr '17 06:02 / 1 edit
    Originally posted by Duchess64
    “Machine learning” is a fancy way of saying “finding patterns in data”. Of course, as Lydia Nicholas, senior researcher at the innovation thinktank Nesta, explains, all this data “has to have been collected in the past, and since society changes, you can end up with patterns that reflect the past. If those patterns are used to make decisions that affect people’s lives you end up with unacceptable discrimination.”
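    Lydia Nicholas's point in the quote above, that decisions mined from past patterns reproduce past discrimination, can be shown with a deliberately naive sketch (all records and names below are invented for illustration):

```python
# Invented "historical" hiring records: (applicant name, got interview).
# A naive pattern-finder trained on them reproduces whatever bias
# the past data contains.
history = [
    ("Emily", True), ("Emily", True), ("Emily", False),
    ("Lakisha", True), ("Lakisha", False), ("Lakisha", False),
]

def interview_rate(name):
    outcomes = [ok for n, ok in history if n == name]
    return sum(outcomes) / len(outcomes)

def shortlist(name, threshold=0.5):
    # "Decision rule" learned purely from past frequencies
    return interview_rate(name) >= threshold

print(shortlist("Emily"))    # True  (past rate 2/3)
print(shortlist("Lakisha"))  # False (past rate 1/3)
```

    The rule is perfectly faithful to the data and still discriminatory, because the data encodes the discrimination.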
    Wanting A.I. to be neutral and fair would seem desirable, yet if I had access to A.I. in pre-WW2 Germany and its job was to hide me from the Nazis, then I would be grateful if it used its knowledge of data patterns and profiled the group of uniformed young men at my door as the scary search party I needed as much time as possible to hide from.

    The ability of humanity to reset long-held suspicions and fears and move on is fundamental to our ability to evolve and grow. If machine intelligence lags behind because it needs the vast catalogue of the written word to update before it stops its biased behaviour, then we're in for a long haul.

    The question is the extent to which machines will be allowed to make life-defining choices for us humans. If they can't resonate with an inner logic that wipes the slate clean and approach human interaction with a positive expectation each and every time, then we do have a problem, Houston.

    My biggest concern is this: if machine intelligence can sift through mountains of data and make decisions that are objectively no worse than those made by human operators, will programmers or legislators care about anything beyond the lowered cost, or about whether the most fundamental part of the programming chain, the language itself, had inherent bias?

    Long term, that's a recipe for disaster. Machine efficiency will kick in, and it's odds-on to provide an even more efficient blockage of beneficial social change.
  8. Subscriber mchill
    cryptogram
    21 Apr '17 06:58 / 4 edits
    Originally posted by Duchess64
    https://www.theguardian.com/commentisfree/2017/apr/20/robots-racist-sexist-people-machines-ai-language

    "Robots are racist and sexist. Just like the people who created them:
    Machines learn their prejudices in language. It’s not their fault, but we still need to fix the problem."
    --Laurie Penny

    “Machine learning” is a fancy way of saying “finding pa ...[text shortened]... .
    I hope that artificial intelligence can become less unwilling to learn than these people are.
    Now we have racist and sexist computers?? I'll wager these horrible machines are white, and have testosterone embedded in their circuitry!! Life is just unbearable, isn't it, Duchess?!

    Look, over there, my white corkscrew is now making sexist gestures to the wire whisk and that young attractive lamp! WILL THESE HORRORS NEVER END??!!
  9. 21 Apr '17 19:04
    Originally posted by mchill
    Now we have racist and sexist computers?? I'll wager these horrible machines are white, and have testosterone embedded in their circuitry!! Life is just unbearable, isn't it, Duchess?!

    Look, over there, my white corkscrew is now making sexist gestures to the wire whisk and that young attractive lamp! WILL THESE HORRORS NEVER END??!!
    Mchill keeps showing that he's an idiotic racist troll.
  10. 21 Apr '17 19:24 / 1 edit
    Originally posted by divegeester
    Read another newspaper.
    And think about something other than sexism or rape.
    Edward Snowden trusted the Guardian (more than any other British newspaper) with his story.

    The troll Divegeester cannot know all of what I think.
  11. 25 Apr '17 21:59
    Originally posted by Duchess64
    Technically speaking, the artificial intelligence in chess engines does *not* attempt to
    imitate how the best human players think. Instead, it takes advantage of the computer's
    strengths (tactical calculation) while minimizing its relative weaknesses (positional understanding).
    So it's inaccurate to say that all artificial intelligence just attempts to imitate human intelligence.
    http://en.chessbase.com/post/on-human-and-computer-intelligence-in-chess

    "On human and computer intelligence in chess"