Mark Rober just set up one of the most interesting self-driving tests of 2025, and he did it by imitating Looney Tunes. The former NASA engineer and current YouTube mad scientist recreated the classic gag where Wile E. Coyote paints a tunnel onto a wall to fool the Road Runner.

Only this time, the test subject wasn’t a cartoon bird… it was a self-driving Tesla Model Y.

The result? A full-speed, 40 MPH impact straight into the wall. Watch the video and tell us what you think!

  • TheYang@lemmy.world · 5 hours ago
    The whole idea is that they should be safer than we are at driving. It only takes fog (or a painted wall) to show that this won’t be achieved with cameras only.

    Well, I do still think that cameras could reach “superhuman” levels of safety.
    Very dense fog makes cameras useless; a self-driving car would have to slow way down or shut itself off. And if cameras are just one input among several, they drop out in fog all the same, reducing the available information. How would you handle that? If the car still has to drop out or slow down just as much, you gain nothing. /e: my original interpretation is obviously wrong; you get the additional information whenever the environment permits.
    As for the painted wall: cameras should be able to detect that. It’s just that Tesla presumably hasn’t implemented defenses against active attacks like this yet.
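The degradation argument above can be sketched in a few lines. This is a hypothetical toy model, not how any real autonomy stack works: each sensor reports a detection range and a confidence, low-confidence sensors are dropped, and the car caps its speed based on what remains (the names and the "half the range in metres as mph" rule are made up for illustration).

```python
# Toy sketch (hypothetical): drop unreliable sensors, then cap speed
# based on the shortest detection range among the sensors that remain.

def plan_speed(readings, base_limit=40.0, min_conf=0.3):
    """readings: dict of sensor name -> (detection_range_m, confidence)."""
    usable = {s: r for s, (r, c) in readings.items() if c >= min_conf}
    if not usable:
        return 0.0  # no trustworthy input: stop / hand control back
    # Drive no faster than the shortest reliable range allows, using a
    # crude "half the detection range in metres, as mph" rule of thumb.
    shortest = min(usable.values())
    return min(base_limit, shortest / 2)

# Dense fog: camera confidence collapses, radar still sees 60 m ahead.
foggy = {"camera": (10.0, 0.1), "radar": (60.0, 0.9)}
print(plan_speed(foggy))  # 30.0 - slows down but keeps driving on radar

# Camera-only car in the same fog: nothing trustworthy remains.
print(plan_speed({"camera": (10.0, 0.1)}))  # 0.0 - must stop
```

The point of the toy: a fused system degrades to the best remaining sensor, while a camera-only system degrades to zero.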

    You had a lot of hands in this paragraph. 😀
    I like to keep spares on me.

    I’m exceptionally doubtful that the related costs were anywhere near this number.

    Cost has been dropping rapidly. Pretty sure several years ago (about when Tesla first started announcing it would be ready in a year or two) it was in the tens of thousands. But you’re right, more current estimates seem to be more in the range of $500-2,000 per unit, and 0-4 units per car.

    It’s inconceivable to me that cameras only could ever be as safe as having a variety of inputs.
    Well, diverse sensors always reduce the chance of confident misinterpretation.
    But they also mean you can’t “do one thing, and do it well”, as now you have to do 2-4 things (camera, lidar, radar, sonar) well. If one were to get to the point of having either one really good data source or four really shitty ones, it becomes conceivable to me.

    From what I remember there is distressingly little oversight for allowing self-driving cars on the road, as long as the company is willing to be on the hook for accidents.