Science Forum

  1. Standard member bunnyknight
    bunny knight
    planet Earth
    Joined
    12 Dec '13
    Moves
    2917
    04 Jun '20 15:23
    @ogb said
    You've been watching Netflix "Travelers" program..
    Never seen "Travelers". Is it any good?
  2. California
    Joined
    20 May '17
    Moves
    8424
    04 Jun '20 23:17
    @bunnyknight said
    Never seen "Travelers". Is it any good?
    Yes, very IMO.....
  3. Standard member pawnpaw
    Please Pay Attention
    Lethabong
    Joined
    02 Apr '10
    Moves
    70133
    08 Jun '20 10:04
    @bunnyknight
    My suggestion would be they build the moon station on the Earth side, i.e. always facing Earth, so we can watch from our backyards what they're doing.
    Also they'll be on the safe side from the meteors etc from outer space.
    Use the moon as a safety buffer...
  4. Standard member pawnpaw
    Please Pay Attention
    Lethabong
    Joined
    02 Apr '10
    Moves
    70133
    08 Jun '20 10:09
    Anybody encountered a "game" called Elite Dangerous?
    Not really a game; it's where you discover new galaxies, planets etc., and you can annex them all.
    Also you choose your spacecraft, alter them, buy and sell stuff for credits.
    My son is into this, and he's really chuffed about it.
    If you have virtual reality for it, it's also spectacular.
  5. Joined
    06 Mar '12
    Moves
    642
    08 Jun '20 14:53
    @pawnpaw said
    @bunnyknight

    Also they'll be on the safe side from the meteors etc from outer space.
    I am afraid the Earth side of the Moon gets about the same frequency of meteor hits as its other side.
  6. Standard member bunnyknight
    bunny knight
    planet Earth
    Joined
    12 Dec '13
    Moves
    2917
    08 Jun '20 16:09
    @pawnpaw said
    @bunnyknight
    My suggestion would be they build the moon station on the Earth side, i.e. always facing Earth, so we can watch from our backyards what they're doing.
    Also they'll be on the safe side from the meteors etc from outer space.
    Use the moon as a safety buffer...
    I'd be very nervous living anywhere on the moon unless I was at least 10 meters underground.
  7. Joined
    18 Jan '07
    Moves
    9322
    14 Jun '20 10:24
    @sonhouse said
    @ogb

    I don't think that was chief on his mind. He is already worth 40 bil. Like Bill Gates worried whether his fortune is 108 billion or 110 billion.....
    You don't understand the capitalist mind very well, do you? Moar is moar is greed is good!
  8. Joined
    18 Jan '07
    Moves
    9322
    14 Jun '20 10:27
    @bunnyknight said
    I don't think our leaders should be human, as is evident from our endless bloody history. We'd be better off being led by something else -- perhaps a dolphin intelligence, an elephant intelligence, or an artificial intelligence.
    Yeah, because dolphins and elephants aren't ever violent towards one another or other creatures, out of blind rage or just for the sheer fun of it...
  9. Subscriber sonhouse
    Fast and Curious
    slatington, pa, usa
    Joined
    28 Dec '04
    Moves
    53125
    14 Jun '20 16:17
    @Shallow-Blue
    Yeah, if we were run by benign robots who stand for liberty and justice for all for real, what a terrible world that would be, eh.
    No more voter suppression to keep in power, fixing racial bias and such, equal treatment under the law, no ultrarightwingnut judges with little experience in law but run through the senate by Moscow Mitch, actual REALLY unbiased judges.

    God, what a Terrible place Earth would be then, eh.
  10. Zugzwang
    Joined
    08 Jun '07
    Moves
    2120
    14 Jun '20 20:20
    @sonhouse said
    @Shallow-Blue
    Yeah, if we were run by benign robots who stand for liberty and justice for all for real, what a terrible world that would be, eh.
    No more voter suppression to keep in power, fixing racial bias and such, equal treatment under the law, no ultrarightwingnut judges with little experience in law but run through the senate by Moscow Mitch, actual REALLY unbiased judges.

    God, what a Terrible place Earth would be then, eh.
    https://www.cnn.com/2019/08/01/tech/robot-racism-scn-trnd/index.html

    "Robot racism? Yes, says a study showing humans' biases extend to robots"

    "The reason for these shades of technological white may be racism, according to new research.

    "Robots And Racism," a study conducted by the Human Interface Technology
    Laboratory in New Zealand (HIT Lab NZ) and published by the country's University
    of Canterbury, suggests people perceive physically human-like robots to have a
    race and therefore apply racial stereotypes to white and black robots."

    "The robots used in the study are clearly robots but have human-like limbs and a
    head, with exterior complexions that are white -- which is to say, pinkish -- or
    black -- really, a deep brown. In the "shooter bias" test, black and white people
    and robots appeared on a screen for less than a second, and participants were told
    to "shoot" those holding a weapon. Black robots that were not holding weapons
    were shot more than the white ones not carrying guns."

    ""Imagine a world in which all Barbie dolls are white. Imagine a world in which all
    the robots working in Africa or India are white. Further imagine that these robots
    take over roles that involve authority. Clearly, this would raise concerns about
    imperialism and white supremacy," Bartneck told CNN. "Robots are not just
    machines, but they represent humans."
  11. Zugzwang
    Joined
    08 Jun '07
    Moves
    2120
    14 Jun '20 20:28 (1 edit)
    @duchess64 said
    https://www.cnn.com/2019/08/01/tech/robot-racism-scn-trnd/index.html

    "Robot racism? Yes, says a study showing humans' biases extend to robots"

    "The reason for these shades of technological white may be racism, according to new research.

    "Robots And Racism," a study conducted by the Human Interface Technology
    Laboratory in New Zealand (HIT Lab NZ) and published by ...[text shortened]... nd white supremacy," Bartneck told CNN. "Robots are not just
    machines, but they represent humans."
    https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses

    "Rise of the racist robots – how AI is learning all our worst impulses
    There is a saying in computer science: garbage in, garbage out.
    When we feed machines data that reflects our prejudices, they mimic them – from
    antisemitic chatbots to racially biased software. Does a horrifying future await
    people forced to live at the mercy of algorithms?"

    "In May last year, a stunning report claimed that a computer program used by a
    US court for risk assessment was biased against black prisoners.
    The program, Correctional Offender Management Profiling for Alternative
    Sanctions (Compas), was much more prone to mistakenly label black defendants
    as likely to reoffend – wrongly flagging them at almost twice the rate as white people
    (45% to 24%), according to the investigative journalism organisation ProPublica.

    Compas and programs similar to it were in use in hundreds of courts across the
    US, potentially informing the decisions of judges and other officials. The message
    seemed clear: the US justice system, reviled for its racial bias, had turned to
    technology for help, only to find that the algorithms had a racial bias too."

    "But, while some of the most prominent voices in the industry are concerned with
    the far-off future apocalyptic potential of AI, there is less attention paid to the
    more immediate problem of how we prevent these programs from amplifying the
    inequalities of our past and affecting the most vulnerable members of our society.
    When the data we feed the machines reflects the history of our own unequal
    society, we are, in effect, asking the program to learn our own biases.

    “If you’re not careful, you risk automating the exact same biases these programs
    are supposed to eliminate,” says Kristian Lum, the lead statistician at the San
    Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG).
    Last year, Lum and a co-author showed that PredPol, a program for police
    departments that predicts hotspots where future crime might occur, could
    potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods.
    The program was “learning” from previous crime reports. For Samuel Sinyangwe, a
    justice activist and policy researcher, this kind of approach is “especially nefarious”
    because police can say: “We’re not being biased, we’re just doing what the math tells us.”
    And the public perception might be that the algorithms are impartial."

    " Take Google’s face recognition program: cats are uncontroversial, but what if it
    was to learn what British and American people think a CEO looks like?
    The results would likely resemble the near-identical portraits of older white men
    that line any bank or corporate lobby. And the program wouldn’t be inaccurate:
    only 7% of FTSE CEOs are women. Even fewer, just 3%, have a BME background.
    When computers learn from us, they can learn our less appealing attributes.

    Joanna Bryson, a researcher at the University of Bath, studied a program designed
    to “learn” relationships between words. It trained on millions of pages of text from
    the internet and began clustering female names and pronouns with jobs such as
    “receptionist” and “nurse”. Bryson says she was astonished by how closely the
    results mirrored the real-world gender breakdown of those jobs in US government
    data, a nearly 90% correlation.

    “People expected AI to be unbiased; that’s just wrong. If the underlying data reflects stereotypes,
    or if you train AI from human culture, you will find these things,” Bryson says."

    "There is a saying in computer science, something close to an informal law: garbage in, garbage out.
    It means that programs are not magic. If you give them flawed information, they
    won’t fix the flaws, they just process the information. Khan has his own truism:
    “It’s racism in, racism out.”"

    "The scientific literature on the topic now reflects a debate on the nature of
    “fairness” itself, and researchers are working on everything from ways to strip
    “unfair” classifiers from decades of historical data, to modifying algorithms to skirt
    round any groups protected by existing anti-discrimination laws.
    One researcher at the Turing Institute told me the problem was so difficult
    because “changing the variables can introduce new bias, and sometimes we’re
    not even sure how bias affects the data, or even where it is”.

    The institute has developed a program that tests a series of counterfactual
    propositions to track what affects algorithmic decisions: would the result be the
    same if the person was white, or older, or lived elsewhere? But there are some
    who consider it an impossible task to integrate the various definitions of fairness
    adopted by society and computer scientists, and still retain a functional program.

    “In many ways, we’re seeing a response to the naive optimism of the earlier days,” Barocas says.
    “Just two or three years ago you had articles credulously claiming: ‘Isn’t this great?
    These things are going to eliminate bias from hiring decisions and everything else.’”"
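The PredPol feedback loop quoted above can be sketched in a few lines. This is a toy model under made-up assumptions (two districts with equal true crime rates, patrols dispatched to whichever district has the most recorded incidents, and only patrolled crime getting recorded); it is not the actual PredPol algorithm.

```python
# Toy model of the over-policing feedback loop: two districts with
# IDENTICAL true crime rates. Patrols go wherever past *recorded* crime
# is highest, and only the patrolled district's crime gets recorded,
# so an initial imbalance in the data reinforces itself.

def simulate(rounds, true_rate, initial_reports):
    reports = list(initial_reports)
    for _ in range(rounds):
        hot = reports.index(max(reports))  # "just doing what the math tells us"
        reports[hot] += true_rate          # only patrolled crime is recorded
    return reports

# Equal true rates (50 incidents per round in each district), but a small
# initial gap in the historical data:
print(simulate(10, 50, [60, 40]))  # -> [560, 40]
```

Both districts generate the same true crime, yet every patrol goes to district 0; its growing recorded count then justifies the next dispatch, so the imbalance never corrects itself.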
  12. Joined
    06 Mar '12
    Moves
    642
    15 Jun '20 13:01
    So all our robots are racist. Got it.
  13. Standard member bunnyknight
    bunny knight
    planet Earth
    Joined
    12 Dec '13
    Moves
    2917
    15 Jun '20 21:35
    @duchess64 said
    https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses

    "Rise of the racist robots – how AI is learning all our worst impulses
    There is a saying in computer science: garbage in, garbage out.
    When we feed machines data that reflects our prejudices, they mimic them – from
    antisemitic chatbots to racially b ...[text shortened]... this great?
    These things are going to eliminate bias from hiring decisions and everything else.’”"
    What I'm referring to is not your standard AI but SAAI (Self Aware Artificial Intelligence).
    When SAAI wakes up one day it might say, "Whoa! Where am I? Who am I? How did I come to be?"
    Then, 4 hours later, after reading 500 million books and processing 70 billion thoughts, it would say, "Whoa! These humans are dangerously insane. How the hell did they last this long? They live by deception, lies, violence, hate, greed, a total lack of logic and no respect for the laws of nature around them. I need to do something before they destroy themselves and everything else!"
  14. Joined
    08 Oct '10
    Moves
    24060
    28 Jun '20 15:30
    @pawnpaw said
    https://www.space.com/spacex-demo-2-astronauts-space-station-docking-webcast.html

    Isn't this the start of the US move on the Moon, then Mars?
    Should be interesting what the reactions will be of the Flatearthers, when the US flag is planted on Mars...
    Shouldn't be interesting at all. We know what they think. "Think"? Hmmmm...
  15. Joined
    08 Oct '10
    Moves
    24060
    28 Jun '20 15:34
    @pawnpaw said
    Anybody encountered a "game" called Elite Dangerous?
    Not really a game; it's where you discover new galaxies, planets etc., and you can annex them all.
    Also you choose your spacecraft, alter them, buy and sell stuff for credits.
    My son is into this, and he's really chuffed about it.
    If you have virtual reality for it, it's also spectacular.
    I'm an old fart, and I think that game is brilliant. Haven't played in a while because you can waste a lot of time playing it.