Post

Avatar
"It marks the second incident in two months involving a Tesla’s near miss-with a train while utilising its driver assistance system." You can tell Tesla is close to "solving self-driving" because they are down to the tough edge cases like *checks notes* trains nz.news.yahoo.com/tesla-autopi...
Tesla autopilot appears to veer electric vehicle onto train track it mistook for road
nz.news.yahoo.com
Local police urged drivers to remain ‘vigilant while using Tesla's autopilot feature,’ noting that it ‘can fail’
Avatar
I think it's underappreciated that for humans the process is:
Visual data -> mental model of surroundings -> decision
Whereas for a Tesla it's:
Visual data -> some simple rules -> decision
so sometimes the simple rules condemn you or somebody nearby to death.
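A toy way to picture that contrast, in purely illustrative Python (the class, labels, and thresholds are all invented here, not anything Tesla actually runs):

```python
# Hypothetical sketch of "scene model" vs "act on the raw label" decisions.
# All names and numbers are made up for illustration.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the vision stack thinks it sees, e.g. "road", "rails"
    confidence: float   # score from the statistical model, 0..1

def human_like_decision(detections: list[Detection]) -> str:
    # Humans fold detections into a model of the scene: "those are train
    # tracks, a train could appear, I should not drive onto them."
    scene = {d.label for d in detections if d.confidence > 0.3}
    if "rails" in scene or "level_crossing" in scene:
        return "stop"
    return "proceed"

def label_only_decision(detections: list[Detection]) -> str:
    # A pipeline that just acts on its highest-confidence label: if the
    # rails score slightly like "road", it keeps driving.
    best = max(detections, key=lambda d: d.confidence)
    return "proceed" if best.label == "road" else "stop"

if __name__ == "__main__":
    frame = [Detection("road", 0.55), Detection("rails", 0.45)]
    print(human_like_decision(frame))   # stop
    print(label_only_decision(frame))   # proceed
```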
Avatar
The Tesla also uses a "model" rather than hard-coded "rules," but it's a statistical model that fundamentally lacks any of the social and psychological insights humans constantly use without even trying while driving. Driving is a highly social task, and statistics only get you so far in those.
Avatar
This is at the core of AI failures: it uses stats to “infer” the actual answer. It doesn't work from known answers.
Avatar
I should have specified: a set of rules that the programmers don't really know either :)
Avatar
Also curious if they classify cars by their behavior. Like you'll see an aggressive driver in the rear view and think "that person is going to aggressively try to get in front rather than do a clean merge behind me" and be more prepared when it happens.
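Something like this, in toy form (the features, thresholds, and labels below are invented for illustration; no claim that any real self-driving stack does it this way):

```python
# Hypothetical sketch of classifying nearby cars by observed behaviour
# and adjusting how much margin to leave for them.

from dataclasses import dataclass

@dataclass
class TrackedCar:
    rel_speed_mps: float          # closing speed relative to us (positive = approaching)
    lane_changes_per_min: float   # how often it has been changing lanes
    mean_gap_m: float             # typical following distance it keeps

def classify_driver(car: TrackedCar) -> str:
    # Crude rule-of-thumb labels; a real system would learn these from data.
    if car.rel_speed_mps > 5 and car.lane_changes_per_min > 2 and car.mean_gap_m < 10:
        return "aggressive"   # expect it to cut in front rather than merge behind
    if car.lane_changes_per_min < 0.5 and car.mean_gap_m > 30:
        return "cautious"
    return "typical"

def planning_margin_s(label: str) -> float:
    # Leave more time headway for a car we expect to cut in.
    return {"aggressive": 3.0, "typical": 1.5, "cautious": 1.0}[label]

if __name__ == "__main__":
    tailgater = TrackedCar(rel_speed_mps=7.0, lane_changes_per_min=3.0, mean_gap_m=6.0)
    label = classify_driver(tailgater)
    print(label, planning_margin_s(label))   # aggressive 3.0
```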
Avatar
One of the fun things about driving in SF is watching the (real) self-driving cars slowly get better at this. They got better about showing intent, about reading other drivers' intent, etc. I'd love to know what things they have discovered about real driver behavior and how local it is.