• 9 Posts
  • 353 Comments
Joined 2 years ago
Cake day: June 23rd, 2023



  • If the data is falsified that’d be illegal.

    Oh no! It would be illegal!

    And what would the punishment be if it came out that they had falsified their data? A fine that could amount to hundreds of thousands of dollars? On top of their tens of millions of dollars in profits?

    Do you have a reason to think otherwise?

    Yes, they are directly incentivized to either skew their data or outright falsify their numbers, to support the marketing narrative that these taxis are a “safe” technology and to pad their profit margin.

    Fuck… have we learned nothing from the tobacco industry?!





  • There’s around a million people dying from cars every year and we just shrug and normalize them. Human or not, we just have to have cars and “accidents” are just that.

    The difference is accountability. If a human kills another human in a car accident, they are liable, even criminally liable, under the right circumstances. If a driverless car kills a human in a car accident, you’re presented with a lose-lose scenario, depending on the legal implementation:

    1. If the car manufacturer says that somebody must be behind the wheel, even though the car is doing all of the driving, that person is suddenly liable for the accident. They are expected to just sit there and watch for a potential accident, but what the AI model will do in any given moment is undefined. Is the model going to stop in front of that pedestrian as expected? How long do they wait before taking back control? It’s not like cruise control, a feature that only controls part of the car, where they know exactly how it behaves and when to take back control. It’s the equivalent of asking a person to watch a panel with a single red light for an hour, and push a button as fast as possible when it blinks for half a second.

    2. If the model is truly driverless (like these taxis), then NOBODY is liable for the accident. The company behind it might get sued, or might end up in a class-action lawsuit, but there is no criminal liability, and none of these lawsuits will result in enough financial impact to force change. The companies have no incentive to fix their software, and will continue to parrot this shitty line about how it’s somehow better than humans at driving, despite these easily hackable scenarios and zero accountability.

    Humans have an incentive to not kill people, since nobody wants to have that on their conscience, and nobody wants to go to prison over it.

    Corporations don’t. In fact, they have an incentive to choose profits over people’s lives, if the choice presents itself!


  • If a robotic taxi can lower the taxi category of accidents by 91% across the board, including death rates, then that’s a positive improvement to society any way you slice it. Not saying it isn’t a horrifying dystopian world we’re potentially building, but at the moment, given the numbers, it would be 91% safer in that category.

    You need to prove this number. Looking at the behavior of current driverless cars, the software is still shit, and nothing has reached Level 5 autonomous driving. There are too many edge cases and too many conflicting behaviors. Navigating a world of humans who all drive differently, across complex urban and rural streets, is a very, very messy affair.
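
    For what it’s worth, “prove this number” means an exposure-matched comparison, not a raw count. Here’s a rough sketch of the arithmetic in Python, with completely made-up figures just to show the shape of the claim; none of these numbers come from any real report:

        # Invented numbers, purely to illustrate what an apples-to-apples
        # comparison would have to look like (incidents per million miles driven).
        human_rate = 4.2      # hypothetical: incidents per million miles, human drivers
        robotaxi_rate = 0.4   # hypothetical: incidents per million miles, robotaxis

        reduction = 1 - robotaxi_rate / human_rate
        print(f"claimed reduction: {reduction:.0%}")  # ~90% with these made-up inputs

        # The catch: the robotaxi figure only covers geofenced, well-mapped,
        # fair-weather routes, while the human figure covers everything.
        # Without matching the driving conditions, the percentage proves nothing.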

    Hell, nobody in the space can even answer this simple question correctly: if the speed limit on the highway is 55 MPH, and everybody is going 65 MPH, and we know that the speed differential is what kills people in highway accidents, what speed does the driverless car use?

    (Hint: the correct answer is not 55.)
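
    To make the dilemma concrete, here’s a minimal Python sketch of the two naive policies a planner could take; the function and numbers are hypothetical, not anyone’s actual implementation:

        # Hypothetical illustration of the speed-selection dilemma.
        # This is not any vendor's real planner logic; the numbers are invented.
        def pick_target_speed(posted_limit_mph, traffic_flow_mph, obey_the_law):
            if obey_the_law:
                # Never exceed the posted limit, even when traffic is faster.
                return posted_limit_mph
            # Match prevailing traffic to minimize the speed delta with other cars.
            return traffic_flow_mph

        posted, flow = 55.0, 65.0
        legal = pick_target_speed(posted, flow, obey_the_law=True)
        matched = pick_target_speed(posted, flow, obey_the_law=False)

        # Obeying the limit leaves a 10 MPH delta against surrounding traffic;
        # matching the flow means programming the car to break the law.
        print(f"law-abiding: {legal} MPH, delta vs traffic: {flow - legal} MPH")
        print(f"flow-matching: {matched} MPH, over the limit by: {matched - posted} MPH")

    Either branch is a problem: one bakes in the dangerous speed differential, the other bakes in a deliberate traffic violation, which is exactly why nobody wants to answer the question.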