• 0 Posts
  • 332 Comments
Joined 1 year ago
Cake day: June 22nd, 2023

  • masterspace@lemmy.ca to Technology@lemmy.world · Arch Linux and Valve Collaboration

    I’d like to see a Sankey diagram of where Valve’s money goes before I praise them that much for helping out a Linux distribution a bit.

    Lots of major companies, like Microsoft and IBM, also contribute to Linux; that doesn’t make them saints, and it doesn’t necessarily even compare to what they get from using the volunteer dev work inside Linux.

    Gabe Newell is a billionaire, Steam is a de facto monopoly that objectively charges more than it has to, and literally everyone who works at Valve is in the 1%. Let’s not fall over ourselves dick-riding them.


  • The difference is that Uber’s model of using an app to show you the route, give feedback on the driver, report problems, and monitor and track the driver is actually a huge improvement to both rider safety and experience compared to calling a cab company and then waiting who knows how long for someone to show up and hopefully take you where you want to go.

    I’m not saying that their model of gig workers or of dodging up-front training is good, but they legitimately offered a fundamentally better taxi experience than anything that came before, which I think encouraged regulators to drag their feet on looking into them.






  • They claim he made a threat. The article failed to print his side of the story for some curious reason. It isn’t printing any testimony from the bystanders, either.

    Fair enough. Supposedly they were wearing body cams, so hopefully some of what actually happened can be established objectively; I’m just pointing out what the article said. If he didn’t make a threat or have a knife, then tasering him is a wild escalation. It’s just that if he did, then the police can’t really just let him get on a train.

    Cops will often lie about the danger of a suspect in order to justify elevating their use-of-force. That said, they weren’t that concerned by his unreasonableness when they deployed tasers into the crowd first. They didn’t switch to guns until they realized the tasers weren’t going to work.

    Again, assuming what the article says is true, which is a big assumption, it’s not that crazy to taser a guy who just got onto a train with a knife and threatened you. At that point you’re looking at a potential mass stabbing incident if you do nothing.

    Again, who knows, maybe the cops are blowing his behaviour wildly out of proportion. I’m just saying that, based on the article, it sounds like he wasn’t just gunned down for jumping a turnstile.



  • “This isn’t a meeting about the budget per se”

    “This isn’t exactly a meeting about the budget”

    If you finish those sentences, it becomes clear why per se is used:

    “This isn’t a meeting about the budget per se, it’s a meeting about how much of the budget is spent on bits of string”

    “This isn’t exactly a meeting about the budget, it’s a meeting about how much of the budget is spent on bits of string”

    In this situation, using per se provides a more natural sentence flow because it links the first part of the sentence with the second. It’s also shorter and has fewer syllables.

    “Steve’s quite erudite.”

    “Steve’s quite intellectual.”

    I think intellectual might be a closer synonym, but intellectual often carries more know-it-all connotations than erudite, which tends to refer to a purer, more cerebral quality.

    “Tom and Jerry is a fun cartoon because of the juxtaposition of the relationship between cat and mouse.”

    “Tom and Jerry is a fun cartoon because of the side-by-side oppositeness of the relationship between cat and mouse that is displayed.”

    For those to say precisely the same thing, the second would have to be more like the above, which doesn’t really roll off the tongue.

    “I don’t understand, can you elucidate that?”

    “I don’t understand, can you explain?”

    Elucidate just means to make something clear in general; explaining something usually implies a linguistic, verbal explanation, unless otherwise stated.

    Honestly, these all seem like very reasonable words to me. I can understand not using them in some contexts, but for the most part words exist for a reason: to describe something slightly differently. It takes forever to talk and communicate if we limit ourselves to only the most basic, unnuanced terms.


  • When people use industry specific jargon and acronyms with someone not in their industry.

    It is a very simple rule of writing and communication: you never just use an acronym out of nowhere. You write it out in full the first time, explain the acronym, and then after that you can use it.

    Artificial diamonds can be made with a High Pressure, High Temperature (HPHT) process, or a …

    Doctors, military folk, lawyers, and technical people of all varieties are often awful about just throwing out an acronym or technical term that you literally have no way of knowing.

    Usually, though, I don’t think it’s a conscious effort to sound smart. Sometimes it’s people who are used to talking only with their coworkers or inner circle and aren’t thinking about the fact that you don’t share that context; sometimes it’s people who are feeling nervous or insecure and are subconsciously using fancy terms to sound like they fit in; and sometimes it’s people using specific terminology to hide the fact that they don’t actually understand the concepts well enough to break them down further.



  • The work is reproduced in full when it’s downloaded to the server used to train the AI model, and the entirety of the reproduced work is used for training. Thus, they are using the entirety of the work.

    That’s objectively false. It’s downloaded to the server, but it should never be redistributed to anyone else in full. As a developer, for instance, it’s illegal for me to copy code I find in a Medium article and use it in our software. I’m perfectly allowed to read that Medium article, learn from it, and then write my own similar code.

    And that makes it better somehow? Aereo got sued out of existence because their model threatened the retransmission fees that broadcast TV stations were being paid by cable TV subscribers. There wasn’t any devaluation of broadcasters’ previous performances, the entire harm they presented was in terms of lost revenue in the future. But hey, thanks for agreeing with me?

    And Aereo should not have lost that suit. That’s an example of the US court system abjectly failing.

    And again, LLM training so egregiously fails two out of the four factors for judging a fair use claim that it would fail the test entirely. The only difference is that OpenAI is failing it worse than other LLMs.

    That’s what we’re debating, not a given.

    It’s even more absurd to claim something that is transformative automatically qualifies for fair use.

    Fair point, but it is objectively transformative.




  • You said open source. Open source is a type of licensure.

    The entire point of licensure is legal pedantry.

    No. Open source is a concept. That concept also has pedantic legal definitions, but the concept itself is not inherently pedantic.

    And as far as your metaphor is concerned, pre-trained models are closer to pre-compiled binaries, which are expressly not considered Open Source according to the OSD.

    No, they’re not. Which is why I didn’t use that metaphor.

    A binary is explicitly a black box. There is nothing to learn from a binary, unless you explicitly decompile it back into source code.

    In this case, literally all the source code is available. Any researcher can read through their model, learn from it, copy it, twist it, and build their own version of it wholesale. Not providing the training data is more like saying that Yuzu or another emulator isn’t open source because it doesn’t provide copyrighted games. It open-sources literally every part of itself that it can, and then lets the user feed it whatever training data they are allowed access to.


  • LLMs use the entirety of a copyrighted work for their training, which fails the “amount and substantiality” factor.

    That factor is relative to what is reproduced, not to what is ingested. A company is allowed to scrape the web all they want as long as they don’t republish it.

    By their very nature, LLMs would significantly devalue the work of every artist, author, journalist, and publishing organization, on an industry-wide scale, which fails the “Effect upon work’s value” factor.

    I would argue that LLMs devalue the author’s potential for future work, not the original work they were trained on.

    Those two alone would be enough for any sane judge to rule that training LLMs would not qualify as fair use, but then you also have OpenAI and other commercial AI companies offering the use of these models for commercial, for-profit purposes, which also fails the “Purpose and character of the use” factor.

    Again, that’s the practice of OpenAI, but not inherent to LLMs.

    You could maybe argue that training LLMs is transformative,

    It’s honestly absurd to try and argue that they’re not transformative.



  • you are literally doing what i mean when i say you are making assumptions with no evidence. there is, again, no reason to believe that “driving more efficiently” will result from mass-adoption of automated vehicles–and even granting they do, your assumption that this wouldn’t be gobbled up by induced demand is intuitively disprovable. even the argumentation here parallels other cases where induced demand happens! “build[ing] new roads or widen[ing] existing ones” is a measure that is almost always justified by an underlying belief that we need to improve efficiency and productivity in existing traffic flows,[^1] and obviously traffic flow does not improve in such cases.

    I’m doing nothing other than questioning where the induced demand is coming from. What is inducing it, if not increased efficiency?

    The whole point of induced demand on highways is that adding capacity in the form of lanes induces demand. So if our highways are already full, and that capacity isn’t coming from increased AV efficiency, then where is it coming from? If there’s no increase in road capacity, then what is inducing demand?

    but granting that you’re correct on all of that somehow: more efficiency (and less congestion) would be worse than inducing demand. “efficiency” in the case of traffic means more traffic flow at faster speeds, which is less safe for everyone—not more.[^2] in general: people drive faster, more recklessly, and less attentively when you give them more space to work with (especially on open roadways with no calming measures like freeways, which are the sorts of roads autonomous vehicles seem to do best on). there is no reason to believe they would do this better in an autonomous vehicle, which if anything incentivizes many of those behaviors by giving people a false sense of security (in part because of advertising and overhyping to that end!).

    You are describing how humans drive, not AVs. AVs always obey the speed limit and traffic calming signs.

    you asserted these as “other secondary effects to AVs”–i’m not sure why you would do that and then be surprised when people challenge your assertion. but i’m glad we agree: these don’t exist, and they’re not benefits of mass adoption nor would they likely occur in a mass adoption scenario.

    We haven’t agreed on anything. I said I was open to your reasoning as to why those effects wouldn’t happen, and then you didn’t provide any.

    the vast majority of road safety is a product of engineering and not a product of human driving ability, what car you drive or its capabilities, or other variables of that nature. almost all of the problems with, for example, American roadways are design problems that incentivize unsafe behaviors in the first place (and as a result inform everything from the ubiquity of speeding to downstream consumer preferences in cars). to put it bluntly: you cannot and will not fix road safety through automated vehicles, doubly so with your specific touted advantages in this conversation.

    You think you can eliminate all accidents through road design?

    You are literally ignoring every single accident caused by distracted driving, impatient driving, impaired driving, tired driving, etc.

    Yeah, road design in America should be better; AVs should still also replace crappy, reckless humans. Those two ideas are not mutually exclusive.


  • this is at obvious odds with the current state of self-driving technology itself–which is (as i noted in the other comment) subject to routine overhyping and also has rather minimal oversight and regulation generally

    All cool tech things are overhyped. If your judgment of whether or not a technology is going to be useful is “if it sounds at all overhyped, then it will flop,” then you would never predict that any technology would change the world.

    And no, quite frankly, those assertions are objectively false. Waymo’s and Cruise’s driverless programs are both monitored by the DMV, which is why it revoked Cruise’s license when it found them hiding crash data. Waymo has never been found to do so, or even been accused of doing so. Notice that in the lawsuit you linked, Waymo was happy to publish accident and safety data but did not want to publish data about how its vehicles handle edge cases, which would give its rivals information on how they operate, and the courts agreed with them.

    https://arstechnica.com/cars/2023/12/human-drivers-crash-a-lot-more-than-waymos-software-data-shows/

    Since their inception, Waymo vehicles have driven 5.3 million driverless miles in Phoenix, 1.8 million driverless miles in San Francisco, and a few thousand driverless miles in Los Angeles through the end of October 2023. And during all those miles, there were three crashes serious enough to cause injuries:

    In July, a Waymo in Tempe, Arizona, braked to avoid hitting a downed branch, leading to a three-car pileup. A Waymo passenger was not wearing a seatbelt (they were sitting on the buckled seatbelt instead) and sustained injuries that Waymo described as minor.

    In August, a Waymo at an intersection “began to proceed forward” but then “slowed to a stop” and was hit from behind by an SUV. The SUV left the scene without exchanging information, and a Waymo passenger reported minor injuries.

    In October, a Waymo vehicle in Chandler, Arizona, was traveling in the left lane when it detected another vehicle approaching from behind at high speed. The Waymo tried to accelerate to avoid a collision but got hit from behind. Again, there was an injury, but Waymo described it as minor.

    The two Arizona injuries over 5.3 million miles works out to 0.38 injuries per million vehicle miles. One San Francisco injury over 1.75 million miles equals 0.57 injuries per million vehicle miles. An important question is whether that’s more or less than you’d expect from a human-driven vehicle.

    After making certain adjustments—including the fact that driverless Waymo vehicles do not travel on freeways—Waymo calculates that comparable human drivers reported 1.29 injury crashes per million miles in Phoenix and 3.79 injury crashes per million miles in San Francisco. In other words, human drivers get into injury crashes three times as often as Waymo in the Phoenix area and six times as often in San Francisco.

    Waymo argues that these figures actually understate the gap because human drivers don’t report all crashes. Independent studies have estimated that about a third of injury crashes go unreported. After adjusting for these and other reporting biases, Waymo estimates that human-driven vehicles actually get into five times as many injury crashes in Phoenix and nine times as many in San Francisco.

    To help evaluate the study, I talked to David Zuby, the chief research officer at the Insurance Institute for Highway Safety. The IIHS is a well-respected nonprofit that is funded by the insurance industry, which has a strong interest in promoting automotive safety.

    While Zuby had some quibbles with some details of Waymo’s methodology, he was generally positive about the study. Zuby agrees with Waymo that human drivers underreport crashes relative to Waymo. But it’s hard to estimate this underreporting rate with any precision. Ultimately, Zuby believes that the true rate of crashes for human-driven vehicles lies somewhere between Waymo’s adjusted and unadjusted figures.
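
    If you want to sanity-check the arithmetic in the quoted article yourself, here’s a minimal Python sketch. The city names and figures come straight from the excerpt above; everything else (variable names, output format) is just illustrative:

        # Injury-crash figures quoted from the Ars Technica excerpt above.
        waymo = {
            "Phoenix": {"injury_crashes": 2, "million_miles": 5.3},
            "San Francisco": {"injury_crashes": 1, "million_miles": 1.75},
        }
        # Waymo's benchmark for comparable human drivers, as quoted:
        # reported injury crashes per million vehicle miles.
        human_rate = {"Phoenix": 1.29, "San Francisco": 3.79}

        for city, d in waymo.items():
            rate = d["injury_crashes"] / d["million_miles"]
            ratio = human_rate[city] / rate
            print(f"{city}: Waymo {rate:.2f} vs humans {human_rate[city]:.2f} "
                  f"injuries per million miles ({ratio:.1f}x)")

        # Prints roughly:
        # Phoenix: Waymo 0.38 vs humans 1.29 injuries per million miles (3.4x)
        # San Francisco: Waymo 0.57 vs humans 3.79 injuries per million miles (6.6x)

    Those ratios line up with the article’s “three times as often” and “six times as often” claims for reported injury crashes.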


  • they can. induced demand is omnipresent in basically all vehicular infrastructure and vehicular improvements and there’s no reason to think this would differ with autonomous vehicles

    Yes, I have no doubt there would be induced demand, but that extra demand wouldn’t come at the cost of anything. Induced demand is a problem when we, for instance, build new roads or widen existing ones, because then more people drive and the roads clog up the same as they did before. That’s a bad thing because the cost of adding that capacity is that we have to tear down nature and existing city to add lanes, and then we have more capacity that sits at a standstill, leading to more emissions.

    But if AVs add more capacity to our roads, it will be entirely because they are driving more efficiently. We’ll have the same number of cars on the road at any given time; they’ll just be moving faster on average rather than idling in traffic jams created by humans. Which means that there will be only relatively minor emissions increases during peak times, fewer emissions during off-peak hours, and we won’t be tearing anything down to build more giant highways.

    okay but: literally none of this follows from mass-adoption of autonomous vehicles. this is a logical leap you are making with no supporting evidence—there is, and i cannot stress this enough, no evidence that if mass-adoption occurs any of this will follow

    You’re asking for something that does not exist. How am I supposed to provide evidence proving what the results of mass adoption of AVs will be when there has never been a mass adoption of AVs?

    and in general the technology is subject to far more fabulism and exaggeration (like this!) than legitimate technological advancement or improvement of society.

    Again, it’s never actually been rolled out at mass scale. It’s a technology still being actively developed. Neither of us knows what the end results will be, but I put forth plausible reasoning for my speculation; if you have plausible reasoning why those things won’t come to pass, I’m all ears. For instance, what is your reasoning for believing that AVs could never be fundamentally safer than human drivers, who are frequently tired, angry, distracted, impaired, impatient, etc.?