05 Jul '18 18:09
The post that was quoted here has been removed.

I think they're in trouble. This quote, from the article, is typical of the way people start talking when they have a problem they can't solve:

"Drive.AI founder Andrew Ng, a former Baidu executive and one of the industry's most prominent boosters, argues the problem is less about building a perfect driving system than training bystanders to anticipate self-driving behavior. In other words, we can make roads safe for the cars instead of the other way around."

For him the problem isn't the way his creation can't cope with the real world, but the way the real world refuses to behave in a way his creation can cope with.
Originally posted by @deepthought
So far anyway. I think in a few years self-drivers will exceed humans in the ability to quickly recognize danger: danger from other cars, abutments, icy roads and so forth, and danger to humans who get in the way.
Originally posted by @deepthought
Good point. A human driving in a residential neighbourhood can anticipate that if a ball bounces into the street between parked cars, a child might suddenly run into the street, and cue his foot towards the brake pedal, just in case. Would an autonomous vehicle anticipate this, too? It's a matter of programming, of course, for all possible eventualities.
The post that was quoted here has been removed.

I don't think this is a case of incremental programming. The algorithm is some sort of neural network which is then trained on a large dataset. Using moonbus's example of the ball preceding the child, the behaviour one wants is that the intelligence recognises the potential follow-on hazard and slows the car. The problem is one can get different behaviours depending on whether the ball is rolling or bouncing, what size it is, potentially even what colour, and what is going on in the background. Then, assuming the child appears, it has to correctly identify the child as such and stop, with an even greater set of variations between children. It can't decide that, now the ball has gone past, the hazard has passed, and fail to recognise the presence of the child. In short, one needs it to behave as if it has common sense when it doesn't even have a sense of self-preservation, and I don't think this large-dataset training achieves that. The problem is more holistic than programming for the last eventuality.
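The persistence problem described above (the hazard must not be considered over just because the ball has left the scene) can be sketched as a toy state machine. This is a hypothetical illustration of the desired behaviour, not how any real autonomous-vehicle stack works; the hold time and class names are made up:

```python
class HazardMonitor:
    """Toy sketch: keep 'caution' active for a hold period after a
    transient hazard cue (e.g. a ball) disappears, so a follow-on
    hazard (a child chasing it) is still anticipated."""

    HOLD_SECONDS = 5.0  # hypothetical caution window

    def __init__(self):
        self._last_cue_time = None

    def observe(self, ball_detected, now):
        # Record the most recent time a hazard cue was seen.
        if ball_detected:
            self._last_cue_time = now

    def cautious(self, now):
        # Caution persists until HOLD_SECONDS after the last cue,
        # even if nothing is currently visible.
        if self._last_cue_time is None:
            return False
        return (now - self._last_cue_time) <= self.HOLD_SECONDS


monitor = HazardMonitor()
monitor.observe(ball_detected=True, now=0.0)
monitor.observe(ball_detected=False, now=2.0)  # ball has gone past
print(monitor.cautious(now=3.0))   # True: still braced for a child
print(monitor.cautious(now=10.0))  # False: caution window expired
```

The point of the sketch is that this rule is explicit and hand-written; the worry in the post is precisely that an end-to-end trained network has no guaranteed equivalent of it.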
Originally posted by @deepthought
Well said. Even the semi-autonomous vehicles run into trouble here, because the driver thinks he is absolved of the complex decision making that goes into the types of common-sense reasoning we make constantly on the road. It's going to take a lot of troubleshooting before computers can drive safely amidst all the unpredictability.
Originally posted by @deepthought
We have seen brute-force algorithms trained on large data sets in search engines; Google, for example. Computers have learned to correct grammar mistakes by looking at millions of examples of correct grammar, but without actually knowing the rules of grammar. Similarly, the currently strongest chess programs have played millions of games against themselves but have not been programmed with strategic principles (as previous programs were).
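The "learning from examples without knowing the rules" idea can be shown with a deliberately tiny sketch: a frequency model picks the verb that usually follows a context in a corpus, with no grammar rule anywhere in the code. The corpus and word lists here are made up for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical miniature corpus; a real system would use millions
# of sentences, but the principle is the same.
corpus = (
    "the cars are fast . the car is fast . "
    "the roads are icy . the road is icy ."
).split()

# Count which word follows each two-word context.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def pick(a, b, candidates):
    """Choose the candidate seen most often after context (a, b)."""
    counts = follows[(a, b)]
    return max(candidates, key=lambda w: counts[w])

# No rule like "a plural subject takes 'are'" was ever written down,
# yet raw frequency selects the agreeing verb:
print(pick("the", "cars", ["is", "are"]))  # -> are
print(pick("the", "car", ["is", "are"]))   # -> is
```

It "corrects grammar" only in the statistical sense the post describes: it reproduces whatever patterns dominate the examples, which is also why such systems fail on situations the examples never covered.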
Originally posted by @deepthought
There is some talk about building a road between towns, treating them like railroads for
Originally posted by @deepthought
Here is what you do.