Hacker News

As the sibling comment says, it does depend on self-driving cars matching human-level performance. But with AI/neural networks in general, matching human performance is very possible, because most of the time you can throw more human-level training data at the problem.

Each of the crashes that self-driving cars have can be fixed and prevented from happening again. The list I gave consists of human flaws that will almost certainly never be fixed.

I further agree with you that it's up to the proponents to prove that. It's a good thing to force a really high bar for self-driving cars. Then, assuming the technology is maintained, once the AI passes the bar it should only ever get better.



> Each of the crashes that self-driving cars have can be fixed and prevented from happening again. The list I gave consists of human flaws that will almost certainly never be fixed.

Not if you put neural networks / deep learning into the equation. This stuff is black boxes connected to black boxes that work fine until they don't, and then nobody knows why they failed, because all you have is a bunch of numbers with zero semantic information attached to them.
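To make the "bunch of numbers" point concrete, here is a minimal sketch (all names, the architecture, and the weight values are hypothetical, not from any real system): a tiny two-layer network whose learned parameters are just unlabeled floats. Inspecting them tells you nothing about *why* a given output was produced.

```python
import random

random.seed(0)

def relu(x):
    # Standard rectified-linear activation.
    return x if x > 0 else 0.0

# Pretend these weights came out of training; here they are just random,
# which is the point: trained or not, they look identical to a human.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b1 = [random.uniform(-1, 1) for _ in range(4)]
W2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = random.uniform(-1, 1)

def forward(x):
    # A single hidden layer followed by a linear output.
    hidden = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

print(forward([0.9, 0.1]))  # one float, with no explanation attached
print(W1)                   # the "reasons": a grid of anonymous numbers
```

When such a model misclassifies something, the only artifacts you can audit are arrays like `W1`, which carry no semantics you can point to.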


Neural networks are only a small part of self-driving-car algorithms. The planning, sensor fusion, etc. are usually not done with deep learning (for this reason). Only visual detection is, because we have nothing that works better in that realm. But lidar, radar, sonar, and so on all work without any deep learning. The high-level decision making is also done without deep learning.
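The split described above can be sketched roughly like this (a hedged illustration, not any real vendor's stack; every module name and threshold here is made up): deep learning confined to the vision stage, with fusion and decisions handled by explicit, auditable logic.

```python
def vision_detect(camera_frame):
    # Stand-in for the one deep-learning component (e.g. an object
    # detector); here just a stub returning labeled detections.
    return [{"kind": "pedestrian", "distance_m": 12.0}]

def lidar_detect(point_cloud):
    # Classical geometric processing; no learning involved.
    return [{"kind": "obstacle", "distance_m": 11.5}]

def fuse(vision, lidar):
    # Deterministic rule: trust the closest estimate, for safety.
    return min(vision + lidar, key=lambda d: d["distance_m"])

def plan(nearest):
    # High-level decision making as explicit, inspectable rules.
    return "brake" if nearest["distance_m"] < 15.0 else "cruise"

nearest = fuse(vision_detect(None), lidar_detect(None))
print(plan(nearest))  # prints "brake"
```

Because only `vision_detect` is a black box, a failure anywhere downstream can be traced through readable rules rather than anonymous weights.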

The only questionable parts will be where the vision system fails, and those are actually similar to human problems, because human vision also often fails (sunlight on the windshield, lack of attention, darkness, etc.).


> But with all AI/Neural Networks it is very possible to match human performance because most of the time you can throw more human-level performance data at it.

Are you, in very vague words, implying that AGI has been invented? AI might have matched humans in image recognition, but it is far behind in general decision making.

And finally, I am tired of listening to "safer than a human". That should never be the comparison; instead, keep a human at the helm with an AI running in the background that takes over when the human makes an obvious mistake -- you know, like an emergency braking system.


"Each of the crashes that self-driving cars have can be fixed and prevented from happening again."

If those situations recur exactly as they happened the first time, sure, they can be prevented from happening again.

That is, if a car approaches the exact same intersection at the exact same time of day, and a pedestrian who looks exactly like the pedestrian in this accident crosses the street in exactly the same way, with exactly the same other variables (like all the other pedestrians and cars around that the sensors can see), the data could be similar enough that the algorithm will recognize it as close enough to the original situation to avoid the accident this time.

But it's not at all clear how well their improvements will generalize to other situations that humans would consider to be "the same" (i.e. when any pedestrian at any intersection crosses any street).




