A new survey conducted by the U.S. Census Bureau and reported on by Apollo seems to show that large companies may be tapping the brakes on AI. Large companies (defined as having more than 250 employees) have reduced their AI usage, according to the data. The slowdown started in June, when usage was at roughly 13.5%, slipping to about 12% by the end of August. Most other lines, representing companies with fewer employees, are also in decline, though some are still increasing.
Some decent news at least
It is absolutely a bubble, but the applications that AI can be used for still remain while the models continue to get better and cheaper. Here’s the actual graph:
For the things AI is good at, like reading documentation, one should just get a local model and be done.
I think pouring in as much money as big companies in the US have been doing is unwise. But when you have deep pockets, I guess you can afford to gamble.
Could you point me to a model to do that, and instructions on how to get it up and running?
I’m using DeepSeek R1 (8B) and Gemma 3 (12B), installed using LM Studio (which pulls directly from Hugging Face).
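Once LM Studio has a model loaded and its local server running, it exposes an OpenAI-compatible endpoint (at http://localhost:1234 by default). A minimal sketch of asking a loaded model about your docs — the model name here is an assumption, use whatever identifier LM Studio shows for your model:

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions format
# (default address assumed; adjust if you changed the port).
API_URL = "http://localhost:1234/v1/chat/completions"

def build_request(model: str, question: str) -> dict:
    """Build an OpenAI-style chat payload for a local model."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Answer from the supplied documentation only."},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # keep answers close to the source text
    }

def ask(model: str, question: str) -> str:
    """POST the payload to the local server and return the reply text."""
    payload = json.dumps(build_request(model, question)).encode()
    req = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# ask("gemma-3-12b", "What does the retry flag do?")  # needs the server running
```

Any client that speaks the OpenAI API works the same way, so you can swap models without touching your tooling.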
Took them long enough.
I mean the automatic speech recognition and transcription capabilities are quite useful. But that’s about it, for me for now.
It could be interesting for frame interpolation in movies at some point maybe, I guess.
I dream of using it for the reliable classification of things. But I haven’t seen it working reliably, yet.
For creating abstracts, and as a dialog system for information retrieval, it doesn’t feel exact/correct/congruent enough to me.
Also: a working business plan to make money with actual AI services has yet to be found. Right now companies are playing with a shiny new toy, and with the expectations and money of their investors. Right now they fail to deliver, and the investors might get restless. Selling the business while it is still massively overvalued seems like the only way forward. But that’s just my opinion.
I mean the automatic speech recognition and transcription capabilities are quite useful.
That’s what LLMs are made for: text stuff, not knowledge stuff.
That’s what LLMs are made for:
Hence the Name? :)
let’s not forget the US is pumping EVERYTHING into AI; 3–4% of GDP is just the AI economy. here’s hoping it comes crashing down on them
Time to short Google yet?
You might as well put your money on red at the casino.
brace for the pop, this one’s gonna be loud.
IMO, AI is a really good demo for a lot of people, but once you start using it, the gains you can get from it end up being somewhat minimal without doing some serious work.
Reminds me of 10 other technologies that if you didn’t get in the world was going to end but ended up more niche than you’d expect.
As someone who is excited about AI and thinks it’s pretty neat, I agree we’ve needed a level-set around the expectations. Vibe coding isn’t a thing. Replacing skilled humans isn’t a thing. It’s a niche technology that never should’ve been sold as making everything you do with it better.
We’ve got far too many companies who think adoption of AI is a key differentiator. It’s not. The key differentiator is almost always the people, though that’s not as sexy as cutting edge technology.
The technology is fascinating and useful - for specific use cases and with an understanding of what it’s doing and what you can get out of it.
From LLMs to diffusion models to GANs there are really, really interesting use cases, but the technology simply isn’t at the point where it makes any fucking sense to have it plugged into fucking everything.
Leaving aside the questionable ethics many paid models’ creators have employed in making their models, the backlash against AI is understandable because it’s being shoehorned into places it just doesn’t belong.
I think eventually we may “get there” with models that don’t make so many obvious errors in their output - in fact I think it’s inevitable it will happen eventually - but we are far from that.
I do think that the “fuck ai” stance is shortsighted though, because of this. This is happening, it’s advancing quickly, and while gains on LLMs are diminishing, we as a society really need to be having serious conversations about what things will look like when (and/or if, though I’m more inclined to believe it’s when) we have functional models that are accurate in their output.
When it actually makes sense to replace virtually every profession with AI (it doesn’t right now, not by a long shot), how are we going to deal with that as a society?
I’ve got a friend who has to lead a team of apparently terrible developers in a foreign country, he loves AI, because “if I have to deal with shitty code, send back PRs three times then do it myself, I might as well use LLMs”
And he’s like one of the nicest people I know, so if he’s this frustrated, it must be BAD.
Cyberspace, hypertext, multimedia, dot-com, Web 2.0, cloud computing, SaaS, mobile, big data, blockchain, IoT, VR, and so many more. Sure, they can be used for some things, but doing that takes time, effort, and money. On top of that, you need to know exactly when to use these things and when to choose something completely different.
oh the horror
Nature is healing
The US Census Bureau keeps track of things like that? Huh… TIL
13.5%, slipping to about 12%
I know that 1.5% could mean hundreds of businesses, but this still seems like such a nothing burger.
But they’re already not making money, and losing customers during the supposed growth phase is absolutely devastating. It’s occurring all while AI is being subsidized by massive investments from the likes of Microsoft and Google, and many more nameless VCs through OpenAI, Anthropic, etc.
The issue isn’t the percentage, it’s the reversal of growth. Most investors want growth so they can see returns on their upfront capital. If growth isn’t occurring, that’s a good sign to read the room and pull your funding.
Similar issues occurred with streaming services. Netflix is still profitable, but because the userbase isn’t growing, investors and the financial world stopped seeing it as a valuable platform to invest in.
That is more than a 10% loss of that customer base in two months.
For any industry that is huge.
The AI companies haven’t even found a viable business model yet, and are bleeding money while the user base is shrinking.
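For what it’s worth, the 10% figure holds up as a relative decline: going from roughly 13.5% adoption to about 12% is a drop of 1.5 percentage points, which is about 11% of the June user base. A quick sanity check:

```python
# Relative decline implied by the survey figures quoted above
june, august = 13.5, 12.0           # adoption rates, in percent
drop_points = june - august         # 1.5 percentage points
relative_drop = drop_points / june  # fraction of the June base lost
print(round(relative_drop * 100, 1))  # → 11.1
```

Percentage points and percent-of-base are easy to conflate, which is why a 1.5-point slip reads as “more than 10%” here.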
Isn’t that the case with a lot of modern tech?
I vaguely recall Spotify and Uber being criticized for relying on the “get big first and figure out how to monetize later” model.
(Not defending them, just wondering what’s different about AI.)
The lack of business model is what’s freaking me out.
Around 2003 I was talking to a customer about Google going public and saying he should go all in.
“Meh, they’re a great search engine, but I can’t see how they’ll make any money.”
Still remember that conversation, standing in his attic, wiring up his new satellite dish. Wonder if he remembers that conversation as well.
So instead of bursting, the bubble is slowly deflating?
A bubble is built on the potential of the investment. It’s unlikely that AI is anywhere near its invested potential, which means declining usage might actually be an indicator that the bubble is about to pop. A few big investors decide the potential won’t be reached and pull out, and then it cascades into a crash.
That’s user rates, not the stock price
Western growth is predicated on bubbles.