We’re all seeing the breathless hype surrounding “AI”, the vacuous marketing term of the moment. It’ll change everything! It’s coming for our jobs! Some 50% of white-collar workers will be laid off!

Setting aside “and how will it do that?” as outside the scope of the topic at hand, it’s a bit baffling to me how a nebulous concept prone to outright errors is an existential threat. (To be clear, I think the energy and water impacts are.)

I was having a conversation on Reddit along these lines a couple of days ago, and after seeing more news that just parrots Altman’s theme-du-jour, I need a sanity check.

Something I’ve always found hilarious at work is someone asking if you have a calculator (I guess that dates me to the flip-phone era) … my canned response was “what’s wrong with the very large one on your desk?”

Like, automation is literally why we have these machines.

And it’s worth noting that you can’t automate the interesting parts of a job, as those are creative. All you can tackle is the rote, the tedious, the structured bullshit that no one wants to do in the first place.

But here’s the thing: I’ve learned over the decades that employers don’t want more efficiency. They shout it out to the shareholders, but when it comes down to the fiefdoms of directors and managers, they like inefficiency, thank you very much, as it provides tangible work for them.

“If things are running smoothly, why are we so top heavy” is not something any manager wants to hear.

Whatever the fuck passes for “AI” in common parlance can’t threaten management in the same way as someone deeply familiar with the process and able to code. So it’s anodyne … not a threat to the structure. Instead of doubling efficiency via bespoke code (leading to a surplus of managers), just let a couple people go through attrition or layoffs and point to how this new tech is shifting your department’s paradigm.

Without a clutch.

I’ve never had a coding title, but I did start out in CS (why does this feel like a Holiday Inn Express ad?), so regardless of industry, when I end up being expected to use an inefficient process, my first thought is to fix it. And it has floored me how severe the pushback is.

With a week of coding in VB, I reduced a team of 10 auditors to five at an audiobook company. At a newspaper hub, I took a team of three placing ads down to 0.75 of a position (two of the three being me and my girlfriend).

Same hub: I clawed back 25% of my team’s production time after absurd reporting requirements were implemented despite us having all the timestamps in our CMS – the vendor charged extra to access our own data, so management decided a better idea than paying the vendor six figures was to overstaff by 33% (250 total at the center) to get that sweet, sweet self-reported, error-laden data!
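Just to illustrate the scale of the absurdity, here’s a minimal sketch of the sane version – hypothetical, not anything that was actually deployed, and the CSV column names (“editor”, “opened_at”, “closed_at”) are my invention – showing how per-person production time falls out of a timestamp export with almost no code:

```python
# Minimal sketch (hypothetical): derive per-person production time
# from a CMS timestamp export instead of self-reported tallies.
# Column names here are invented for illustration.
import csv
from collections import defaultdict
from datetime import datetime

def minutes_by_editor(export_path):
    totals = defaultdict(float)
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            opened = datetime.fromisoformat(row["opened_at"])
            closed = datetime.fromisoformat(row["closed_at"])
            # Accumulate each story's open-to-close span, in minutes.
            totals[row["editor"]] += (closed - opened).total_seconds() / 60
    return totals

for editor, minutes in sorted(minutes_by_editor("cms_export.csv").items()):
    print(f"{editor}: {minutes:.0f} min")
```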

At a trucking firm, I solved a decadelong problem with how labour-intensive receiving for trade shows was. Basically, instead of asking the client for their internal data, which had been my boss’ approach, I asked how much they really needed from us, and could I simplify the forms and reports (samples provided)? Instant yes, but my boss hated the new setup because I was using Microsoft Forms to feed Excel, and then a 10-line script to generate receivers and reports, and she didn’t understand any of that, so how was she sure I knew what I was doing?
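The whole setup boiled down to something like the sketch below. This is not the original 10-line script – openpyxl stands in for however the Forms-fed workbook was read, and every column name is invented – but it shows the shape of the pipeline: Forms drops responses into a workbook, and a short script turns rows into receiving documents.

```python
# Rough sketch of the pipeline's shape (hypothetical, not the original
# script): Microsoft Forms feeds responses into an Excel workbook, and
# a short script turns each row into a per-shipment receiving document.
# Sheet layout ("Client", "Show", "Pieces", "Carrier") is invented.
from openpyxl import load_workbook

def generate_receivers(xlsx_path):
    ws = load_workbook(xlsx_path, read_only=True).active
    # Row 1 is the header; each later row is one Forms response.
    for client, show, pieces, carrier in ws.iter_rows(min_row=2, values_only=True):
        with open(f"receiver_{client}_{show}.txt", "w") as out:
            out.write(f"RECEIVING REPORT\nClient: {client}\nShow: {show}\n")
            out.write(f"Pieces expected: {pieces}\nCarrier: {carrier}\n")

generate_receivers("forms_responses.xlsx")
```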

You can’t make this shit up.

Anyway, I’ve run far afield of my central thesis, but these illustrations point to a certain intransigence at the management level that will be far more pronounced than the coverage suggests.

These folks locked in their 2.9% mortgage and don’t want to rock the boat.

My point is, why would management suddenly be keen on making themselves redundant when decades of data tell us otherwise?

This form of “AI” does not subvert the dominant paradigm. And no boss wants fewer employees.

As such, who’s actually going to get screwed here? The answer may surprise you.

  • teawrecks@sopuli.xyz · 5 hours ago

    > And it’s worth noting that you can’t automate the interesting parts of a job, as those are creative. All you can tackle is the rote, the tedious, the structured bullshit that no one wants to do in the first place.

    Are you saying that this used to be the case and acknowledging that it’s no longer true with modern AI? Because it’s demonstrably not true for modern AI and is the entire reason people are fearful.

    Honestly, this post is so far out of the loop, part of me is wondering if it’s AI generated.

  • theneverfox@pawb.social · 6 hours ago

    Because AI is the first form of automation that doesn’t run on humans. Theoretically.

    Now, can AI replace humans? No, obviously not at this stage. It can help humans do more, but only an idiot would trust it to do something important without a human in the loop.

    Is it being used to replace humans anyway? Yes. They’re firing people and letting AI attempt to make up the difference. In some cases they’re cutting entire departments.

  • Crotaro@beehaw.org · 11 hours ago

    I think it’s simpler than you assume. From my limited experience (many strangers’ anecdotes, plus my team recently being fired literally because “the other (very different) production location is able to do it without a dedicated Quality Management team”), most employers / company chiefs just want to make more money or, at least, increase the perceived value so that being bought out becomes realistic and leaves them with more money. They don’t actually care whether their product works well or efficiently, as long as number go up. Maybe the original company founder does, but how many companies still have the founder in key decision-making for the long term, without shareholders who hold the real power and couldn’t care less whether the company cleaned up oceans or burned children, because to them it’s just one combination of letters that makes them money?

    As @lvxferre@mander.xyz suggested, top management might not even understand that AI won’t help, so they think it will deliver both short-term profit (savings from firings) and long-term profit (increased efficiency or an otherwise better product). And those who are well informed about AI understand, at the very least, that they can increase short-term profits by firing employees (thus saving on salaries for pesky humans) under the guise of increasing efficiency.

    So to top management it’s just a choice between “do I want more money now and in the future?” and “do I want more money now and maybe also trick idiots into buying us out before it goes belly-up?”

    Lastly, I think you might be ascribing more capacity for self-reflection to middle management than it has. I want to believe that most of them truly think they are a crucial part of making the company work, so they don’t even see that replacing humans with AI would make them obsolete and thus ripe for firing.

  • Onno (VK6FLAB)@lemmy.radio · 23 hours ago

    Given the massive layoffs happening under the Assumed Intelligence banner, the answer has always been: “cheaper labour”.

    Apparently people who actually know how to do their ICT job are too expensive, right until the shit hits the fan, at which point it’s “drop everything and help me, now!”

    Organisations are no longer run by founders; instead, they’re run by accountants and lawyers who only care about shareholder value, not the societal or environmental impact.

    When the bubble finally explodes, we’re going to be looking at an altered economic and technological landscape, if we don’t self-ignite before that.

  • Lvxferre [he/him]@mander.xyz · 20 hours ago

    My guess:

    Coverage roughly follows money, and that money comes from the top of the hierarchy. However, the top is too far from production to actually get that 1) automation is nothing new, and 2) AI won’t help as much with it as advertised.

    The middle of the hierarchy is close enough to production to know both of those things, but it’ll parrot the hype anyway, because doing so enables the inefficiency they love so much, under the disguise of efficiency.

    Then you’ve got the bottom. It’s the closest to production, but it often suffers from a “I don’t see the forest, I see the leaves” problem, and since it has no decision power, it ends up at “meh, who cares” and parrots whatever it sees in the coverage.

    > As such, who’s actually going to get screwed here? The answer may surprise you.

    All three. However, not in the way people predict (“AI is going to steal our jobs”). It’s more like: suckers at the top will lose big money on AI fluff, and to cut costs they’ll fire a lot of people.

    > Setting aside “and how will it do that?” as outside the scope of the topic at hand, it’s a bit baffling to me how a nebulous concept prone to outright errors is an existential threat. (To be clear, I think the energy and water impacts are.)

    Ditto.

  • Megaman_EXE@beehaw.org · 22 hours ago

    Amazon just laid off 14,000 workers, and they’re claiming they can do their jobs with AI.

    They’re already making AI commercials for TV and AI images for advertising, and there was a post here about AI music that hit the top 100 played on Spotify or something. Now, how cost-effective and efficient all of these are, I have no idea.

    I think there’s both overhype and reality here. Companies want people to believe AI is the way forward, but I also think part of this is smoke and mirrors. I honestly don’t know how much of it is truth, and until we start hearing more first-hand accounts, it feels very up in the air.

    • HobbitFoot @thelemmy.club · 9 hours ago

      Amazon, like a lot of other tech companies, has also been cutting product lines as increased interest rates have put a cost on money and a push toward profitability.

      That said, there is likely some use to AI, even if it is error-prone.

    • tal@lemmy.today · 22 hours ago

      Though to be fair, Amazon’s scale is very large, so it’s worth it to spend a lot on automation. They’ve done a lot with robots before. 14k isn’t as many as it might sound, at their scale.

      kagis

      https://www.nytimes.com/2025/10/21/technology/inside-amazons-plans-to-replace-workers-with-robots.html

      > Amazon’s U.S. work force has more than tripled since 2018 to almost 1.2 million. But Amazon’s automation team expects the company can avoid hiring more than 160,000 people in the United States it would otherwise need by 2027. That would save about 30 cents on each item that Amazon picks, packs and delivers to customers.

      > Executives told Amazon’s board last year that they hoped robotic automation would allow the company to continue to avoid adding to its U.S. work force in the coming years, even though they expect to sell twice as many products by 2033. That would translate to more than 600,000 people whom Amazon didn’t need to hire.

      • hendrik@palaver.p3x.de · 13 hours ago

        I think the entire origin story of Amazon, and why they outcompeted other bookstores and online- and mail-order companies, was automation and their more streamlined processes. Afaik they’ve made sure to implement it as an entire chain from end to end, and that’s been their huge advantage from early on.

  • tal@lemmy.today · 22 hours ago

    > Why is so much coverage of “AI” devoted to this belief that we’ve never had automation before (and that management even really wants it)?

    I’m going to set aside the question of whether any given company or a given timeframe or a given AI-related technology in particular is effective. I don’t really think that’s what you’re aiming to address.

    If it just comes down to “Why is AI special as a form of automation? Automation isn’t new!”, I think I’d give two reasons:

    It’s a generalized form of automation

    Automating a lot of farm labor via mechanization of agriculture was a big deal, but it mostly contributed to, well, farming. It didn’t directly result in automating a lot of manufacturing or something like that.

    That isn’t to say that we’ve never had technologies that offered efficiency improvements across a wide range of industries. Electric lighting, I think, might be a pretty good example of one. But technologies that do that are not that common.

    kagis

    https://en.wikipedia.org/wiki/Productivity-improving_technologies

    This has some examples. Most of those aren’t all that generalized. They do list electric lighting in there. The integrated circuit is in there. Improved transportation. But other things, like mining machines, are not generally applicable to many industries.

    So it’s “broad”. Can touch a lot of industries.

    It has a lot of potential

    If one can produce increasingly sophisticated AIs — and let’s assume, for the sake of discussion, that we don’t run into any fundamental limitations — there’s a pathway to, over time, automating darn near everything that humans do today using that technology. Electric lighting could clearly help productivity, but it could only take things so far.

    So it’s “deep”. Can automate a lot within a given industry.

    • TehPers@beehaw.org · 22 hours ago

      There is a fundamental limitation of all LLMs that prevents them from doing as much as you might think, regardless of how accurate they are (and they are not):

      LLMs cannot take liability. When they make mistakes, they cannot take responsibility for those mistakes. The person who used the LLM will always be liable instead.

      So any automation as a result of LLMs removing jobs will end up punting that liability to the next person up the chain. Management will literally have nobody to blame but themselves, and that’s their worst nightmare.

      Anyway, this is of course assuming capabilities that don’t exist.

      • Lvxferre [he/him]@mander.xyz · 22 hours ago

        Interestingly enough, not even making them actually intelligent would be enough to make them liable – because you can’t punish or reward them.

        • TehPers@beehaw.org · 20 hours ago

          Yep! You would need not only an AI superintelligence capable of reflecting and adapting, but legislation which holds those superintelligences liable and grants them the rights and obligations of a human. Because there is no concept of reward or punishment for an LLM, they can never be replacements for people.

          • Lvxferre [he/him]@mander.xyz · 19 hours ago

            It’s more than that: they’d need to have desires, aversions, goals. That is not automatically granted by intelligence; in our case it comes from our instincts as animals. So perhaps you’d need to actually evolve the AGI systems you develop, Darwin-style, and that would be way more massive than a single AGI, let alone the “put glue on pizza lol” systems we’re frying the planet for.

              • Powderhorn@beehaw.org (OP) · 18 hours ago

                I’m reminded of the fairy tale of the two squirrels in the Black Forest. As fall came to pass, they BALLROOM!

    • snooggums@piefed.world · 20 hours ago

      > and let’s assume, for the sake of discussion, that we don’t run into any fundamental limitations

      We already know there are massive fundamental limitations. All of the big-name AI companies are all-in on LLMs, which can’t do anything that hasn’t been done before, unless it’s arbitrarily outputting something randomly mashed together, which is not what you want for anything important. It’s a dead end without humans doing things it can copy. When a new coding language is developed, an LLM can’t use it until lots and lots of people have written code in it to be sucked up and vomited forth.

      LLMs, which is what all of the general-purpose AIs are, cannot be a long-term solution to anything unless we’re just pausing technology and society at whatever point they can handle “everything”. LLMs have already peaked, and that is supposedly the road to general AI.

    • Powderhorn@beehaw.org (OP) · 21 hours ago

      It’s ultimately frustrating to me that I suspect AI here. There are weird inconsistencies.

      But, come on.

      > It has a lot of potential

      Really? That’s what everyone says about their toddler while it pukes.