A new survey conducted by the U.S. Census Bureau and reported on by Apollo seems to show that large companies may be tapping the brakes on AI. Large companies (defined as having more than 250 employees) have reduced their AI usage, according to the data. The slowdown started in June, when usage was at roughly 13.5%, slipping to about 12% by the end of August. Most of the other lines, representing companies with fewer employees, also show a decline, though some are still increasing.

  • krunklom@lemmy.zip · 12 hours ago

    The technology is fascinating and useful - for specific use cases and with an understanding of what it’s doing and what you can get out of it.

    From LLMs to diffusion models to GANs there are really, really interesting use cases, but the technology simply isn’t at the point where it makes any fucking sense to have it plugged into fucking everything.

    Leaving aside the questionable ethics many paid models’ creators have employed in building their models, the backlash against AI is understandable because it’s being shoehorned into places it just doesn’t belong.

    I think we may eventually “get there” with models that don’t make so many obvious errors in their output - in fact I think it’s inevitable - but we are far from that.

    I do think the “fuck ai” stance is shortsighted, though, for that reason. This is happening, it’s advancing quickly, and while gains on LLMs are diminishing, we as a society really need to be having serious conversations about what things will look like when (and/or if, though I’m more inclined to believe it’s when) we have functional models that are accurate in their output.

    When it actually makes sense to replace virtually every profession with ai (it doesn’t right now, not by a long shot) then how are we going to deal with this as a society?