Science Forum

  1. 05 Jul '18 18:09
    https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber

    "Self-driving cars are headed toward an AI roadblock
    Skeptics say full autonomy could be farther away than the industry admits"

    "There’s growing concern among AI experts that it may be years, if not
    decades, before self-driving systems can reliably avoid accidents."

    "That leaves Tesla and other autonomy companies with a scary question:
    Will self-driving cars keep getting better, like image search, voice recognition,
    and the other AI success stories? Or will they run into the generalization
    problem like chat bots? Is autonomy an interpolation problem or a
    generalization problem? How unpredictable is driving, really?"
  2. Subscriber sonhouse
    Fast and Curious
    05 Jul '18 18:40
    Originally posted by @duchess64
    https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber

    "Self-driving cars are headed toward an AI roadblock
    Skeptics say full autonomy could be farther away than the industry admits"

    "There’s growing concern among AI experts that it may be years, if not
    decades, before self-driving systems can reliably av ...[text shortened]... y an interpolation problem or a
    generalization problem? How unpredictable is driving, really?"
    I don't know the techniques used for self-driving cars, but wouldn't using technology like AlphaGo help in that regard? Neural networks come closest to human cognition, I think.

    So it may be just a matter of finding the right kind of AI.
  3. Standard member DeepThought
    Losing the Thread
    05 Jul '18 20:09
    Originally posted by @duchess64
    https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber

    "Self-driving cars are headed toward an AI roadblock
    Skeptics say full autonomy could be farther away than the industry admits"

    "There’s growing concern among AI experts that it may be years, if not
    decades, before self-driving systems can reliably av ...[text shortened]... y an interpolation problem or a
    generalization problem? How unpredictable is driving, really?"
    I think they're in trouble. This quote, from the article, is typical of the way people start talking when they have a problem they can't solve:
    Drive.AI founder Andrew Ng, a former Baidu executive and one of the industry’s most prominent boosters, argues the problem is less about building a perfect driving system than training bystanders to anticipate self-driving behavior. In other words, we can make roads safe for the cars instead of the other way around.
    For him the problem isn't the way his creation can't cope with the real world, but the way the real world refuses to behave in a way his creation can cope with.
  4. Subscriber sonhouse
    Fast and Curious
    05 Jul '18 20:32 / 1 edit
    Originally posted by @deepthought
    I think they're in trouble. This quote, from the article, is typical of the way people start talking when they have a problem they can't solve: Drive.AI founder Andrew Ng, a former Baidu executive and one of the industry’s most prominent boosters, argues the problem is less about building a perfect driving system than training bystanders to antic ...[text shortened]... he real world, but the way the real world refuses to behave in a way his creation can cope with.
    So far, anyway. I think in a few years self-drivers will exceed humans in the ability to quickly recognize danger: danger from other cars, abutments, icy roads and so forth, AND danger to humans who get in the way.

    I don't think even the best system will ever be 100% reliable; situations can come up faster than ANY AI or human can deal with.

    One example from my life: I hit deer twice in three years. The first time there was about 5000 dollars of damage to my car; the second time the car was totaled.

    The first time, I was driving home at night, dark, only headlights on, and a deer was hidden by a bush; when I was within about 2 meters of it, it jumped out, and I had MAYBE one tenth of a second to respond. It wouldn't have mattered if the world's best AI or professional driver had been onboard at the time: you can't do anything, no quick steering around or stopping, unless you could stop at a couple hundred G's.

    The second time, I was going to work early in the morning, coming around a bend on a 4-lane road, and a deer ran out in front of me. No problem: I swerved left and missed it, but at the same time the REST of its family jumped a road barrier, and it was the same thing, less than a second to respond. NOTHING could have stopped either of those collisions, since the physics of inertia would not allow even the fastest reaction time to evade those deer; even if the AI could react in a nanosecond, it would not have helped.
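
    A quick back-of-the-envelope check of those numbers (a sketch of my own; the speed is an assumption, roughly 60 mph, together with the 2 meter gap described above):

        # Sanity check: could ANY controller have stopped in time?
        # Assumed numbers: ~60 mph (27 m/s) and the ~2 m gap above.
        v = 27.0    # speed in m/s (about 60 mph)
        gap = 2.0   # distance to the deer in meters
        g = 9.81    # standard gravity, m/s^2

        # From v**2 = 2*a*d, the deceleration needed to stop within the gap:
        a_needed = v**2 / (2 * gap)
        print(f"needed: {a_needed:.0f} m/s^2 = {a_needed / g:.1f} g")   # ~19 g

        # Good tires on dry asphalt deliver roughly 1 g of braking:
        print(f"stopping distance at 1 g: {v**2 / (2 * g):.0f} m")      # ~37 m

        # And with a 0.1 s reaction time the car covers v * 0.1 = 2.7 m
        # before braking even begins, which is more than the whole gap.

    Under these assumptions, even a controller with zero reaction time would need about 19 g of braking to stop in 2 meters; the exact figure depends on speed, but it is far beyond the roughly 1 g real tires can deliver.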
  5. Subscriber moonbus On Vacation
    Uber-Nerd
    07 Jul '18 22:26
    Originally posted by @deepthought
    For him the problem isn't the way his creation can't cope with the real world, but the way the real world refuses to behave in a way his creation can cope with.
    Good point. A human driving in a residential neighbourhood can anticipate that if a ball bounces into the street between parked cars, a child might suddenly run into the street, and move his foot toward the brake pedal, just in case. Would an autonomous vehicle anticipate this, too? It's a matter of programming, of course, for all possible eventualities.
  6. 07 Jul '18 22:59
    Originally posted by @moonbus to DeepThought
    Good point. A human driving in a residential neighbourhood can anticipate that if a ball bounces into the street between parked cars, a child might suddenly run into the street, and move his foot toward the brake pedal, just in case. Would an autonomous vehicle anticipate this, too? It's a matter of programming, of course, for all possible eventualities.
    "t's a matter of programming, of course, for all possible eventualities."
    --Moonbus

    Each fatal accident will result in programming for another eventuality.
  7. Subscriber moonbus On Vacation
    Uber-Nerd
    08 Jul '18 05:47
    Originally posted by @duchess64
    "t's a matter of programming, of course, for all possible eventualities."
    --Moonbus

    Each fatal accident will result in programming for another eventuality.
    Yup, just like real life: we learn from our mistakes.
  8. Standard member DeepThought
    Losing the Thread
    08 Jul '18 13:45
    Originally posted by @duchess64
    "t's a matter of programming, of course, for all possible eventualities."
    --Moonbus NJ

    Each fatal accident will result in programming for another eventuality.
    I don't think this is a case of incremental programming. The algorithm is some sort of neural network which is then trained on a large dataset. Using moonbus's example of the ball preceding the child, the behaviour one wants is that the intelligence recognizes the potential follow-on hazard and slows the car. The problem is that one can get different behaviours depending on whether the ball is rolling or bouncing, what size it is, potentially even what colour, and what is going on in the background. Then, assuming the child appears, it has to correctly identify the child as such and stop, with an even greater set of variations between children. It can't decide that, once the ball has gone past, the hazard has passed, and so fail to recognise the presence of the child. In short, one needs it to behave as if it has common sense when it doesn't even have a sense of self-preservation, and I don't think this large-dataset training achieves that. The problem is more holistic than programming for the last eventuality.
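
    A toy illustration of that last failure mode (entirely a sketch of my own, not how any real driving stack works): a policy that decides from the current frame alone, with no memory, treats "ball gone" as "hazard gone".

        # Toy sketch (my own construction, not any real driving stack):
        # a stateless per-frame policy that reacts only to what is visible
        # in the current frame, with no notion that a ball bouncing between
        # parked cars implies a child may follow.

        def per_frame_policy(objects: set) -> str:
            """Choose an action purely from objects detected in this frame."""
            if "child" in objects:
                return "STOP"
            if "ball" in objects:
                return "SLOW"
            return "CRUISE"

        # A residential street, frame by frame: ball first, child a moment later.
        frames = [
            {"parked_car"},
            {"parked_car", "ball"},   # ball bounces out between the parked cars
            {"parked_car"},           # ball gone; the policy sees a clear road
            {"parked_car", "child"},  # too late if we resumed speed on frame 2
        ]

        for t, objects in enumerate(frames):
            print(t, sorted(objects), "->", per_frame_policy(objects))

        # Frame 2 prints CRUISE: "ball gone" is treated as "hazard gone".
        # The desired behaviour needs temporal context (the ball implies a
        # possible child), which is exactly the common-sense inference a
        # frame-wise classifier trained on static examples does not acquire.

    Real systems are far more sophisticated than this, but the underlying issue, recognising that a hazard persists after its visible cue has gone, is the same.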
  9. 08 Jul '18 19:26
    Originally posted by @deepthought
    In short, one needs it to behave as if it has common sense when it doesn't even have a sense of self-preservation, and I don't think this large-dataset training achieves that. The problem is more holistic than programming for the last eventuality.
    Well said. Even the semi-autonomous vehicles run into trouble here, because the driver thinks he is absolved of the complex decision making that goes into the types of common-sense reasoning we make constantly on the road. It's going to take a lot of troubleshooting before computers can drive safely amidst all the unpredictability.

    Until, of course, humy comes up with the equation for everything that proves determinism is real.
  10. 08 Jul '18 19:30
    Originally posted by @deepthought
    I don't think this is a case of incremental programming. The algorithm is some sort of neural network which is then trained on a large dataset. Using moonbus's example of the ball preceding the child, the behaviour one wants is that the intelligence recognizes the potential follow-on hazard and slows the car. The problem is that one can get different beha ...[text shortened]... ing achieves that. The problem is more holistic than programming for the last eventuality.
    "Each fatal accident will result in programming for another eventuality."
    --Duchess64

    Perhaps I should have labeled my sarcasm.

    "I don't think this is a case of incremental programming."
    --DeepThought

    My sarcasm was aimed at criticizing the assumption that this complex real-world problem
    is necessarily linear and will inevitably be solved through 'incremental programming'.
  11. Standard member DeepThought
    Losing the Thread
    09 Jul '18 03:52
    Originally posted by @duchess64
    "Each fatal accident will result in programming for another eventuality."
    --Duchess64

    Perhaps I should have labeled my sarcasm.

    "I don't think this is a case of incremental programming."
    --DeepThought

    My sarcasm was aimed at criticizing the assumption that this complex real-world problem
    is necessarily linear and will inevitably be solved through 'incremental programming'.
    I was aware of a level of irony; I just wasn't quite sure at what level the irony was working. In any case, I think my post was worth writing.
  12. Subscriber moonbus On Vacation
    Uber-Nerd
    13 Jul '18 06:33
    Originally posted by @deepthought
    I don't think this is a case of incremental programming. The algorithm is some sort of neural network which is then trained on a large dataset. Using moonbus's example of the ball preceding the child, the behaviour one wants is that the intelligence recognizes the potential follow-on hazard and slows the car. The problem is that one can get different beha ...[text shortened]... ing achieves that. The problem is more holistic than programming for the last eventuality.
    We have seen brute-force algorithms trained on large data sets in search engines, Google for example. Computers have learned to correct grammar mistakes by looking at millions of examples of correct grammar, but without actually knowing the rules of grammar (a toy sketch of this follows at the end of this post). Similarly, the currently strongest chess programs have played millions of games against themselves but have not been programmed with strategic principles (as previous programs were).

    This is different from how humans learn. Humans tend to learn from fewer examples and eventually generalize by learning rules (of grammar) or principles (of strategy).

    It remains to be seen whether brute-force-on-big-data-sets will work for autonomous vehicles as well, driving them (virtually) millions of miles but without programming them for specific situations (such as residential streets with balls bouncing into the path of the vehicle).
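
    Here is that toy sketch of the grammar point (my own construction, with a made-up three-line corpus): a bigram model prefers grammatical word order purely because it has counted which adjacent word pairs occur in its training text, without ever being given a rule of grammar.

        # Toy sketch of "grammar from examples, without rules": score a
        # sentence by how often its adjacent word pairs appeared in training.
        from collections import Counter
        from itertools import pairwise  # Python 3.10+

        corpus = [
            "the cat sat on the mat",
            "the dog sat on the rug",
            "a cat slept on the mat",
        ]
        bigrams = Counter(p for line in corpus for p in pairwise(line.split()))

        def score(sentence: str) -> int:
            """Higher means more of the sentence's word pairs were seen in training."""
            return sum(bigrams[p] for p in pairwise(sentence.split()))

        print(score("the cat sat on the mat"))  # 9: every adjacent pair was observed
        print(score("cat the sat mat the on"))  # 0: same words, unseen order

        # No grammatical rule is ever stated, yet the model prefers the
        # grammatical order. As with driving, it is also silent outside its
        # training distribution: sentences built from unseen words score 0
        # whether they are grammatical or not.

    Whether the same trick, scaled up by orders of magnitude, covers the long tail of driving situations is exactly the open question in the article.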