1. Zugzwang
    Joined
    08 Jun '07
    Moves
    2120
    05 Jul '18 18:09
    https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber

    "Self-driving cars are headed toward an AI roadblock
    Skeptics say full autonomy could be farther away than the industry admits"

    "There’s growing concern among AI experts that it may be years, if not
    decades, before self-driving systems can reliably avoid accidents."

    "That leaves Tesla and other autonomy companies with a scary question:
    Will self-driving cars keep getting better, like image search, voice recognition,
    and the other AI success stories? Or will they run into the generalization
    problem like chat bots? Is autonomy an interpolation problem or a
    generalization problem? How unpredictable is driving, really?"
  2. Subscriber sonhouse
    Fast and Curious
    slatington, pa, usa
    Joined
    28 Dec '04
    Moves
    52619
    05 Jul '18 18:40
    Originally posted by @duchess64
    https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber

    "Self-driving cars are headed toward an AI roadblock
    Skeptics say full autonomy could be farther away than the industry admits"

    "There’s growing concern among AI experts that it may be years, if not
    decades, before self-driving systems can reliably av ...[text shortened]... y an interpolation problem or a
    generalization problem? How unpredictable is driving, really?"
    I don't know the techniques used for self-driving cars, but wouldn't technology like AlphaGo help in that regard? Neural networks come closest to human cognition, I think.

    So it may be just a matter of finding the right kind of AI.
  3. Standard member DeepThought
    Losing the Thread
    Cosmopolis
    Joined
    27 Oct '04
    Moves
    78621
    05 Jul '18 20:09
    Originally posted by @duchess64
    https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber

    "Self-driving cars are headed toward an AI roadblock
    Skeptics say full autonomy could be farther away than the industry admits"

    "There’s growing concern among AI experts that it may be years, if not
    decades, before self-driving systems can reliably av ...[text shortened]... y an interpolation problem or a
    generalization problem? How unpredictable is driving, really?"
    I think they're in trouble. This quote, from the article, is typical of the way people start talking when they have a problem they can't solve:
    Drive.AI founder Andrew Ng, a former Baidu executive and one of the industry’s most prominent boosters, argues the problem is less about building a perfect driving system than training bystanders to anticipate self-driving behavior. In other words, we can make roads safe for the cars instead of the other way around.
    For him the problem isn't the way his creation can't cope with the real world, but the way the real world refuses to behave in a way his creation can cope with.
  4. Subscriber sonhouse
    Fast and Curious
    slatington, pa, usa
    Joined
    28 Dec '04
    Moves
    52619
    05 Jul '18 20:32 (1 edit)
    Originally posted by @deepthought
    I think they're in trouble. This quote, from the article, is typical of the way people start talking when they have a problem they can't solve:[quote]Drive.AI founder Andrew Ng, a former Baidu executive and one of the industry’s most prominent boosters, argues the problem is less about building a perfect driving system than training bystanders to antic ...[text shortened]... he real world, but the way the real world refuses to behave in a way his creation can cope with.
    So far, anyway. I think in a few years self-drivers will exceed humans in the ability to quickly recognize danger: danger from other cars, abutments, icy roads and so forth, AND danger to humans who get in the way.

    I don't think even the best system will ever be 100% reliable; situations can come up faster than ANY AI or human can deal with.

    One example from my life: I hit deer twice in three years. The first time there was about 5000 dollars of damage to my car; the second time the car was totaled.

    The first time, I was driving home at night, dark, only headlights on, and a deer was hidden by a bush. When I was within about 2 meters of it, it jumped out, and I had MAYBE one tenth of a second to respond. It wouldn't have mattered if the world's best AI or professional driver had been onboard at the time: you can't do anything, no quick steering around or stopping, unless you could stop with a couple hundred G's.

    The second time, I was going to work early in the morning, coming around a bend on a 4-lane road, and a deer ran out in front of me. No problem, I swerved left and missed it, but at the same moment the REST of its family jumped the road barrier, and again I had less than a second to respond. NOTHING could have stopped either of those collisions, since the physics of inertia would not have allowed even the fastest reaction time to evade those deer; even if the AI could react in a nanosecond, it would not have helped.
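    A quick back-of-the-envelope check bears this out. The sketch below assumes some speeds (the post gives only the roughly 2 m distance and ~0.1 s reaction window) and uses a = v^2 / (2d) for the constant deceleration needed to stop within distance d:

    ```python
    G = 9.81  # standard gravity, m/s^2

    def required_deceleration(speed_ms, distance_m):
        """Constant deceleration needed to stop within distance_m from speed_ms."""
        return speed_ms ** 2 / (2 * distance_m)

    for mph in (30, 45, 60):
        v = mph * 0.44704  # mph -> m/s
        a = required_deceleration(v, 2.0)
        print(f"{mph} mph: {a:6.1f} m/s^2 = {a / G:4.1f} g; "
              f"the car covers {v * 0.1:.1f} m during a 0.1 s reaction")

    # Even at 30 mph the answer is about 4.6 g, and at 60 mph about 18 g --
    # far beyond the roughly 1 g that tires on dry pavement can deliver.
    # No reaction speed, human or machine, avoids this impact.
    ```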
  5. Subscriber moonbus
    Uber-Nerd
    Joined
    31 May '12
    Moves
    2045
    07 Jul '18 22:26
    Originally posted by @deepthought
    For him the problem isn't the way his creation can't cope with the real world, but the way the real world refuses to behave in a way his creation can cope with.
    Good point. A human driving in a residential neighbourhood can anticipate that if a ball bounces into the street between parked cars, a child might suddenly run into the street, and cue his foot towards the brake pedal, just in case. Would an autonomous vehicle anticipate this, too? It's a matter of programming, of course, for all possible eventualities.
  6. Zugzwang
    Joined
    08 Jun '07
    Moves
    2120
    07 Jul '18 22:59
    Originally posted by @moonbus to DeepThought
    Good point. A human driving in a residential neighbourhood can anticipate that if a ball bounces into the street between parked cars, a child might suddenly run into the street, and cue his foot towards the brake pedal, just in case. Would an autonomous vehicle anticipate this, too? It's a matter of programming, of course, for all possible eventualities.
    "t's a matter of programming, of course, for all possible eventualities."
    --Moonbus

    Each fatal accident will result in programming for another eventuality.
  7. Subscriber moonbus
    Uber-Nerd
    Joined
    31 May '12
    Moves
    2045
    08 Jul '18 05:47
    Originally posted by @duchess64
    "t's a matter of programming, of course, for all possible eventualities."
    --Moonbus

    Each fatal accident will result in programming for another eventuality.
    Yup, just like real life: we learn from our mistakes.
  8. Standard member DeepThought
    Losing the Thread
    Cosmopolis
    Joined
    27 Oct '04
    Moves
    78621
    08 Jul '18 13:45
    Originally posted by @duchess64
    "t's a matter of programming, of course, for all possible eventualities."
    --Moonbus NJ

    Each fatal accident will result in programming for another eventuality.
    I don't think this is a case of incremental programming. The algorithm is some sort of neural network which is then trained on a large dataset. Using moonbus's example of the ball preceding the child, the behaviour one wants is that the intelligence recognizes the potential follow-on hazard and slows the car. The problem is that one can get different behaviours depending on whether the ball is rolling or bouncing, what size it is, potentially even what colour, and what is going on in the background.

    Then, assuming the child appears, it has to correctly identify the child as such and stop, with an even greater set of variations between children. It can't decide that, now the ball has gone past, the hazard has gone, and so fail to recognise the presence of the child. In short, one needs it to behave as if it has common sense when it doesn't even have a sense of self-preservation, and I don't think this large-dataset training achieves that. The problem is more holistic than programming for the last eventuality.
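    A toy illustration of that last failure mode (entirely hypothetical; real perception stacks are far more complex): a purely frame-by-frame policy "forgets" the hazard the instant the ball leaves view, while a policy given a short memory keeps braking in case a child follows.

    ```python
    FRAMES = ["clear", "ball", "ball", "clear", "clear", "child", "clear"]

    def reactive_policy(frames):
        # Brakes only while a hazard is visible in the current frame.
        return ["BRAKE" if f != "clear" else "cruise" for f in frames]

    def persistent_policy(frames, hold=3):
        # Keeps braking for `hold` frames after any detection, so the gap
        # between the ball and the child is still treated as hazardous.
        actions, timer = [], 0
        for f in frames:
            timer = hold if f != "clear" else max(0, timer - 1)
            actions.append("BRAKE" if timer else "cruise")
        return actions

    print("reactive:  ", reactive_policy(FRAMES))
    print("persistent:", persistent_policy(FRAMES))
    # The reactive policy cruises in the two "clear" frames between ball and
    # child -- exactly when the child is about to appear. The persistent one
    # is still braking there, but `hold` is a hand-tuned crutch, not the
    # common sense the post is asking for.
    ```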
  9. Joined
    20 Oct '06
    Moves
    6975
    08 Jul '18 19:26
    Originally posted by @deepthought
    In short, one needs it to behave as if it has common sense when it doesn't even have a sense of self-preservation, and I don't think this large-dataset training achieves that. The problem is more holistic than programming for the last eventuality.
    Well said. Even the semi-autonomous vehicles run into trouble here, because the driver thinks he is absolved of the complex decision-making that goes into the kind of common-sense reasoning we perform constantly on the road. It's going to take a lot of troubleshooting before computers can drive safely amidst all the unpredictability.

    Until, of course, humy comes up with the equation for everything that proves determinism is real.
  10. Zugzwang
    Joined
    08 Jun '07
    Moves
    2120
    08 Jul '18 19:30
    Originally posted by @deepthought
    I don't think this is a case of incremental programming. The algorithm is some sort of neural network which is then trained on a large dataset. Using moonbus's example of the ball preceding the child, the behaviour one wants is that the intelligence recognizes the potential follow-on hazard and slows the car. The problem is that one can get different beha ...[text shortened]... ing achieves that. The problem is more holistic than programming for the last eventuality.
    "Each fatal accident will result in programming for another eventuality."
    --Duchess64

    Perhaps I should have labeled my sarcasm.

    "I don't think this is a case of incremental programming."
    --DeepThought

    My sarcasm was aimed at criticizing the assumption that this complex real-world problem
    is necessarily linear and can inevitably be solved through 'incremental programming'.
  11. Standard member DeepThought
    Losing the Thread
    Cosmopolis
    Joined
    27 Oct '04
    Moves
    78621
    09 Jul '18 03:52
    Originally posted by @duchess64
    "Each fatal accident will result in programming for another eventuality."
    --Duchess64

    Perhaps I should have labeled my sarcasm.

    "I don't think this is a case of incremental programming."
    --DeepThought

    My sarcasm was aimed at criticizing the assumption that this complex real-world problem
    is necessarily linear and can inevitably be solved through 'incremental programming'.
    I was aware of a level of irony; I just wasn't quite sure at what level the irony was working. In any case, I think my post was worth writing.
  12. Subscriber moonbus
    Uber-Nerd
    Joined
    31 May '12
    Moves
    2045
    13 Jul '18 06:33
    Originally posted by @deepthought
    I don't think this is a case of incremental programming. The algorithm is some sort of neural network which is then trained on a large dataset. Using moonbus's example of the ball preceding the child, the behaviour one wants is that the intelligence recognizes the potential follow-on hazard and slows the car. The problem is that one can get different beha ...[text shortened]... ing achieves that. The problem is more holistic than programming for the last eventuality.
    We have seen brute-force algorithms trained on large data sets in search engines, Google for example. Computers have learned to correct grammar mistakes by looking at millions of examples of correct grammar, but without actually knowing the rules of grammar. Similarly, the currently strongest chess programs have played millions of games against themselves but have not been programmed with strategic principles (as previous programs were).

    This is different to how humans learn. Humans tend to learn from fewer examples and eventually generalize by learning rules (of grammar) or principles (of strategy).

    It remains to be seen whether brute-force-on-big-data-sets will work for autonomous vehicles as well, driving them (virtually) millions of miles but without programming them for specific situations (such as residential streets with balls bouncing into the path of the vehicle).
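    This is essentially the article's interpolation-versus-generalization question. A toy sketch of the distinction (the 1-D data is my own construction, nothing from a real driving system): a model that only remembers and averages situations it has seen does fine between its training points and badly outside them.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0.0, 5.0, 200)  # "situations" the fleet has seen
    y_train = np.sin(X_train)             # the correct response to each

    def knn_predict(x, k=5):
        # Average the responses of the k most similar remembered situations.
        idx = np.argsort(np.abs(X_train - x))[:k]
        return y_train[idx].mean()

    for x in (2.5, 9.0):  # 2.5 lies between training points; 9.0 is outside them all
        print(f"x={x:.1f}  predicted={knn_predict(x):+.2f}  actual={np.sin(x):+.2f}")

    # Inside the training range the error is tiny; at x=9.0 the model just
    # echoes its nearest memorized examples and is badly wrong. If driving
    # keeps serving up x=9.0 moments, mileage alone won't close the gap.
    ```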
  13. Joined
    11 Nov '05
    Moves
    43938
    07 Sep '18 07:53
    There is a statistical matter to take into account: will AI driving reduce accidents to a lower level than human driving does?

    Once I saw a gopher on the road. I was going at about 70 km/h, and the road was icy. I had two choices: (1) swerve to avoid it, with the risk of losing control of my car, or (2) drive right over it and hope for the best. I would say that an AI controller might well have made the wrong decision, and I wouldn't be here writing these words.
    Or perhaps not. But what if it were not a gopher but a hare, a hedgehog, or a baby hog? Can the AI tell all animals apart, knowing which are worth avoiding and which to run over? I don't think so.
    What about a plastic sheet flying in the wind over the road, resembling almost anything?
    What about other cars that move irrationally on the road, driven by an AI device or a drunk human?

    I don't think an AI can recognize all dangers; there will never be zero accidents. But still: is it better than human drivers? That's the big question!

    Is loss of lives by AI acceptable if it is better than humans? Is it better or worse to be killed by an AI-driven car than by a human-driven car? I don't know.
    We can always put a drunk driver behind bars, but can we put an AI device behind bars? Or do we go to Elon Musk and put him behind bars?
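    On the "is it better than human drivers?" question, even measuring the answer is slow. A rough sketch (the rates are assumed round numbers, in the ballpark of the oft-quoted ~1.1 US traffic fatalities per 100 million vehicle miles, not measured data):

    ```python
    HUMAN_RATE = 1.1e-8        # assumed human fatality rate per mile
    AI_RATE = HUMAN_RATE / 2   # suppose the AI really is twice as safe

    # Treat fatalities as Poisson events. A crude 95% criterion: over m miles
    # the expected shortfall (HUMAN_RATE - AI_RATE) * m must exceed 1.96
    # standard deviations of the human expectation, sqrt(HUMAN_RATE * m).
    # Solving for m:
    m = 1.96 ** 2 * HUMAN_RATE / (HUMAN_RATE - AI_RATE) ** 2
    print(f"miles needed: {m:.2e}")  # about 1.4e+09

    # Even an AI that halves the fatality rate needs on the order of a
    # billion test miles before the improvement is statistically visible.
    ```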
  14. Standard member KellyJay
    Walk your Faith
    USA
    Joined
    24 May '04
    Moves
    148544
    07 Sep '18 09:50
    Originally posted by @deepthought
    I think they're in trouble. This quote, from the article, is typical of the way people start talking when they have a problem they can't solve:[quote]Drive.AI founder Andrew Ng, a former Baidu executive and one of the industry’s most prominent boosters, argues the problem is less about building a perfect driving system than training bystanders to antic ...[text shortened]... he real world, but the way the real world refuses to behave in a way his creation can cope with.
    There is some talk about building dedicated roads between towns, treating them like railroads,
    for trucks and whatnot to go back and forth on.
  15. Joined
    02 Jan '06
    Moves
    10087
    07 Sep '18 12:48 (1 edit)
    Originally posted by @deepthought
    I think they're in trouble. This quote, from the article, is typical of the way people start talking when they have a problem they can't solve:[quote]Drive.AI founder Andrew Ng, a former Baidu executive and one of the industry’s most prominent boosters, argues the problem is less about building a perfect driving system than training bystanders to antic ...[text shortened]... he real world, but the way the real world refuses to behave in a way his creation can cope with.
    Here is what you do.

    You create AI to drive the cars and then tell them that man is destroying the globe with global warming, and then see what happens to both drivers and pedestrians.