A new survey conducted by the U.S. Census Bureau and reported on by Apollo seems to show that large companies may be tapping the brakes on AI. Large companies (defined as having more than 250 employees) have reduced their AI usage, according to the data. The slowdown started in June, when usage stood at roughly 13.5%, slipping to about 12% by the end of August. Most other lines, representing companies with fewer employees, are also in decline, though some are still increasing.

  • Damage@feddit.it · 12 hours ago

    I’ve got a friend who has to lead a team of apparently terrible developers in a foreign country, he loves AI, because “if I have to deal with shitty code, send back PRs three times then do it myself, I might as well use LLMs”

    And he’s like one of the nicest people I know, so if he’s this frustrated, it must be BAD.

    • Aceticon@lemmy.dbzer0.com · edited · 5 hours ago

      I had to do this myself at one point and it can be very frustrating.

      It’s basically the “tech makes lots of money” effect, which attracts lots of people who don’t really have any skill at programming and would never have gone into it if it weren’t for the money.

      We saw this back in earlier tech booms, and we see it now in poorer countries to which lots of IT work has been outsourced: they have the same fraction of natural techies as everywhere else, but demand is so large that masses of people with no real tech skill join the profession, get given actual work to do, and suck at it.

      Also beware of cultural expectations and quirks. The team I had to manage was based in India, and during group meetings on the phone they would never admit that they did not understand something about a task they were given, or that something was missing (I believe this was so as not to lose face in front of others), so they often ended up just going with wrong assumptions and doing the wrong things. I solved this by talking to each member of that outsourced team individually after any such group meeting, in a very non-judgemental way (I pretty much had to pass it off as me being unsure whether I had explained things correctly), to tease out any questions or doubts. This helped avoid tons of implementation errors that came from misunderstanding the Requirements, or from the Requirements themselves lacking certain details and devs just making their own assumptions about what should go there.

      That said, even their shit code (compared to what those of us on the other side, who were all senior developers or above, produced) actually had a consistent underlying logic throughout, with even the bugs being consistent (humans tend to be consistent in the kinds of mistakes they make), all of which helps with figuring out what is wrong. LLMs aren't as consistent as even incompetent humans.