



> Sounds like correlation to me.
Did I say otherwise?


It’s funny how completely opposite to this my experience over the past couple of years has been. Twice now I’ve been practically begging my managers to let me use AI-based tools to make my work easier, they’ve responded “no, we don’t want AI touching any of our stuff for vague legal paranoia reasons”, and then the company suffered a collapse and everyone got laid off.


It’s important to separate the AI-training part from the conventional copyright violations. A lot of these companies downloaded stuff they shouldn’t have downloaded, and that is a copyright violation in its own right. But the training part has been ruled to be fair use in a few prominent cases already, such as the Anthropic one.
Beyond even that, there are generative AIs that were trained entirely on material that the trainer owned the license to outright - Adobe’s “Firefly” model, for example.
So I have yet to see it established that generative AI inherently involves “asset theft.” You’ll have to give me something specific. That page has far too many cases jumbled together covering a whole range of related subjects, some of them not even directly AI-related (I notice one of the first ones in the list is “A federal judge accused a third-party law firm of attempting to “trick” authors out of their record $1.5 billion copyright class action settlement with Anthropic.” That’s just routine legal shenanigans).
I just wish it wasn’t via such a monstrously painful mechanism.


First it would need to be established that generative AI inherently involves “asset theft”, which so far has not been the case in the various lawsuits that have reached trial.


And it illustrates exactly the point I’m making. If people are going to hate it purely because it’s AI, regardless of whether it’s labelled or not, then there’s every incentive to simply not label it. It’s counterproductive.


Sure, but that disclosure is simply painting a target on themselves. If they’re going to be pilloried whether they do it or not then why draw attention? Keeping it unlabelled at least allows for the chance that nobody will notice.


The ad was openly marketed as being created with AI. So, for all those folks who say they “just want AI content to be labelled as such”, this is a major reason why there are so many people who refuse to do that. They know it doesn’t help.


> They got their user base by being the first ones to have open access to it. Being the first to market OFC gives a massive advantage.

Right, and then everyone chose to go use them.

> This isn’t AI vs everything. This is ONLY the “AI” products compared to themselves.
Every single one of them showed an increase in user growth, Microsoft just didn’t grow as much as the others. They’re not just shuffling the same users around, they’re continuing to gain new ones.
And as I pointed out in another response to you, chatgpt.com is the fourth-most-visited website in the world. They’re doing that with just a thousand users?


chatgpt.com is the fourth-most-visited website in the world (as of September, when this data is from). That’s the website, not the API. People have to deliberately go to the chatgpt.com website in their browser; when OpenAI’s APIs are used by other products, those users never visit chatgpt.com at all. The API is at openai.com.
How are all those people being “forced” to go to chatgpt.com?


Alright. So for purposes of argument, let’s accept all of that. Microsoft and Google are just faking it all, everyone’s tricked or forced into using their AI offerings.
The whole table from the article:
| # | Generative AI Chatbot | AI Search Market Share | Estimated Quarterly User Growth |
|---|---|---|---|
| 1 | ChatGPT (excluding Copilot) | 61.30% | 7% ▲ |
| 2 | Microsoft Copilot | 14.10% | 2% ▲ |
| 3 | Google Gemini | 13.40% | 12% ▲ |
| 4 | Perplexity | 6.40% | 4% ▲ |
| 5 | Claude AI | 3.80% | 14% ▲ |
| 6 | Grok | 0.60% | 6% ▲ |
| 7 | Deepseek | 0.20% | 10% ▲ |
ChatGPT has by far the biggest established user base. How did they force and/or trick everyone into using them?
Claude AI is growing their userbase faster than Google, how are they tricking and/or forcing everyone to switch over to them?
None of these other AI service providers, except for Grok, have a pre-existing platform with users that they can capture artificially. People are willingly going over to these services and using them. Both Microsoft and Google could vanish completely and it would take out less than a third of the AI search market.
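If you want to sanity-check that “less than a third” bit, here’s a quick back-of-the-envelope sketch using the share column from the table above (it assumes the article’s “AI Search Market Share” figures are the right thing to sum):

```python
# Sum the Microsoft Copilot and Google Gemini rows from the table above.
copilot_share = 14.10  # % of AI search market
gemini_share = 13.40   # % of AI search market

combined = copilot_share + gemini_share
print(f"Copilot + Gemini: {combined:.2f}% of the AI search market")
print(f"Under a third (33.33%)? {combined < 100 / 3}")
# -> 27.50%, comfortably under a third
```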


And yet beating out both of them by a very wide margin, with 61.30% of the AI search share, is ChatGPT. Which didn’t have any established reputation or pre-installed userbase or anything at all that either Microsoft or Google started out with.
Your friend uses Gemini, presumably willingly. That’s not “faked.” This narrative of “nobody wants AI” is false, it’s just popular among social media bubbles where people want it to be true.


They’ve got 70% of the desktop operating system share. Seems like every other thread about them around these parts is about how they’re “shoving AI down everyone’s throats.” I’m dubious that they’re “easier to ignore.”


So why aren’t Microsoft’s numbers going up? Everyone’s faking it except them?


Rare to see an AI-positive article getting so many upvotes on @technology@beehaw.org.
According to the chart in the article, every AI is seeing stronger growth than Copilot on a percentage-gain basis. Gemini’s just the one that looks like it’s about to surpass Copilot in total market share.
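To put a rough number on “about to surpass,” here’s a toy projection that just compounds the quarterly growth figures from the article’s chart and pretends they map straight onto market share (a big simplification, since actual share also depends on how fast everyone else grows):

```python
# Toy projection: compound the quarterly growth figures from the chart and
# see when Gemini's share would pass Copilot's. This pretends user growth
# translates directly into market share, which is a simplification.
copilot_share, copilot_growth = 14.10, 0.02  # % share, quarterly growth
gemini_share, gemini_growth = 13.40, 0.12

quarters = 0
while gemini_share <= copilot_share:
    copilot_share *= 1 + copilot_growth
    gemini_share *= 1 + gemini_growth
    quarters += 1

print(f"Gemini passes Copilot after {quarters} quarter(s): "
      f"{gemini_share:.1f}% vs {copilot_share:.1f}%")
# With these numbers it happens after just one quarter.
```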


So not only are people not reading the articles any more, they’re not even finishing reading the headlines all the way through?


It’s funny seeing the sudden surge of “copyright is awesome!” on the Internet now that it’s become a useful talking point to bludgeon the hated Abominable Intelligence with.
Have any actual court cases established that Gemini is violating copyright, BTW? The major cases I’ve seen so far have been coming down on the “training AI is fair use” side of things; any copyright issues have largely been ancillary to that.


But you don’t understand, this story reinforces what I already believe, therefore it must be true.
A technology I’ve been eagerly anticipating for many, many years now. It still sounds like it’s in the “Real Soon Now, honest!” phase though:
[…]
Which is where it’s been for all of those many years I’ve been anticipating it. But who knows, perhaps this will be the company to finally start selling them. I’m fine with them being expensive at first, the cost will come down if they take off.