My short response: yes.
Marx talked about it: with sufficient automation, the value of labor collapses. Under socialism, this is a good thing; under capitalism, it’s a bad thing.
THIS!
If AI takes all our jobs, the only way forward is communism; otherwise the working class will collapse, and the capitalist class will collapse alongside it.
What about under a technocracy? Sounds horrible.
Money would either become worthless or would have to stop representing labor. You would have two distinct classes with zero mobility between them.
What’s a technocracy?
It’s essentially a governance model driven by scientific, technical, and data-driven analysis. This would include control and input from universities and Silicon Valley. The problem is that the corporations that own a huge portion of SV are not benevolent in their practices on an employment level, on a consumer level, and certainly at the level of power over government.
No, it’s going to be bad in really stupid ways that aren’t as cool as what happens when it goes bad in the movies.
If/when we actually achieve Artificial Intelligence, then maybe it would be a concern.
What we have today are LLMs which are big dumb parrots that just say things back to you that match a pattern. There is no actual intelligence.
Calling our current LLMs “Artificial Intelligence” is just marketing. The ideas behind LLMs have been around for a while, but until recently we just didn’t have processing power at the scale we have now.
Once everyone realizes they’ve been falling for a marketing campaign and that we’re not very much closer to AI than we were before LLMs blew up, then LLMs will just become what they actually are: a tool that enhances human intelligence.
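For the curious, here’s a minimal sketch of what “matching a pattern” means in practice, using a toy bigram model (purely illustrative; real LLMs are transformer networks with billions of parameters, but the core move is the same shape: predict the next token from the ones before it):

```python
# Toy "parrot": a bigram model that predicts the next word purely from
# counts of what followed it in training text. No understanding, just
# pattern frequency.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # "Generate" by echoing back the most common continuation.
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # 'cat' -- the most frequent pattern match
```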
I could be wrong though. If so, I, for one, welcome our new AI overlords.
LLMs are a form of AI. They are just not AGI.
I don’t think we’re any closer to AGI due to LLMs. If you take away all the marketing misdirection, to achieve AGI you would have to have artificial rational thought.
LLMs have no rational thought. They just don’t. That’s not how they’re designed.
Again, I could be wrong. If so, I was always in support of the machines.
I don’t think we’re any closer to AGI
Never said we were. Just that LLMs are included in the very broad definition that is “AI.”
Tbf, the phrase “as the movies say” makes it reasonable to assume that OP meant AGI, not the broad definition of AI.
I mean, when was the last time you saw a movie about the dangers of the k-nearest-neighbor algorithm?
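(For anyone wondering, the algorithm in question really is that mundane. A toy sketch in plain Python, assuming k=3 and made-up data:)

```python
# k-nearest neighbors: classify a point by majority vote among the k
# closest training points. Hard to make a thriller out of this.
from collections import Counter

# Toy training set: (feature_x, feature_y) -> label
points = [((1, 1), "cat"), ((1, 2), "cat"),
          ((8, 8), "dog"), ((9, 8), "dog"), ((8, 9), "dog")]

def knn(query, k=3):
    # Sort training points by squared distance to the query...
    ranked = sorted(points, key=lambda p: (p[0][0] - query[0]) ** 2
                                          + (p[0][1] - query[1]) ** 2)
    # ...and take a majority vote among the k nearest labels.
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(knn((7, 8)))  # 'dog'
```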
First it’s gonna crash the economy because it doesn’t work; then it’s gonna crash the economy because it does.
What movie?
Terminator? No, our level of AI is ridiculously far from that.
The Big Short? Yes, that bubble is going to pop and bring the world economy down with it.
No. I think we’re essentially at the point where AI will stop improving in the LLM department; image/video generation might get better, though.
I assume within 5 years CEOs will stop advertising things as AI-generated, but stuff like shitty t-shirts will still use AI; it just won’t be marketed that way. Back in the day, things were marketed as plastic as a positive before “plastic” slowly became a negative selling point. I assume AI will be similar.
Other than phones, where there’s no other improvement they can market besides gimmicks or nostalgia bait.
It will be slavery by a different whip.
AI (once it is actually here) is just a tool. Much like other tools, its impact will depend on who is using it and what for.
Who do you feel has the most agency in our current status quo? What are they currently doing? These will answer your question.
It’s the 1%, and they will build a fully automated army and get rid of all but the sexiest of us, kept as sex slaves.
This is worth it because capitalism is the most important thing on planet Earth. Not humanity; capitalism. Thus the vasectomy. The 1% can make their own slaves. And with AI, they will.
The “just a tool” response is such a cop-out. A lot of things are just tools and still have horrifying implications just by existing.
I don’t feel like you read the entire comment you replied to.
Yes, AI is a tool with horrifying implications. Machine learning has some interesting use cases, but if one had any hope that it would be implemented well, that should be dashed by the way it is run by the weirdest bros imaginable with complete contempt for the concept of consent.
No, I did. I’ll elaborate: some (many) tools are awful and exist with no useful purpose, or even with purposes that are nothing but wanton destruction.
Depends on the movie. It’ll probably be as bad as at least one movie said. We’ve got a lot of data points for films with AI at this point, and they’re all over the place in terms of how bad or good AI is.
Obviously when workers are not needed to the same degree, they will be treated as a problem rather than a resource.
I don’t know what the movies say exactly, but I think it’s going to be a lot of people both being watched by AI and using AI for their tasks.
It will be worse than the movies because they don’t portray how every mundane thing will somehow be worse. Tech support? Worse. Customer service? Worse. Education? Worse. Insurance? Worse. Software? Worse. Health care? Worse. Mental health? Worse. Misinformation? Pervasive. Gaslighting? Pervasive.
Movie AI isn’t what we are headed for. This is what we are headed for. Where’s that movie?
Short answer: no one today can know with any amount of certainty, because we’re nowhere close to developing anything resembling the “AI” in the movies. Today’s generative AI is so far from artificial general intelligence that it would be like asking someone from the Middle Ages, when the only forms of remote communication were letters and messengers, whether social media will ruin society.
Long answer:
First we have to define what “AI” is. The current zeitgeist meaning of “AI” refers to LLMs, image generators, and other generative AI, which is nowhere close to anything resembling real consciousness and therefore can be neither evil nor good. It can certainly do evil things, but only at the direction of evil humans, who are the conscious beings in control. Same as any other tool we’ve invented.
However, generative AI is just one class of neural network, and neural networks as a whole were the colloquial definition of “AI” before ChatGPT. There were simpler, single-purpose neural networks before it, and there will certainly be even more complex neural networks after it. Neural networks are modeled after animal brains: nodes are analogous to neurons, which either fire fully or don’t fire at all depending on input from the neurons they’re connected to; connections between nodes are analogous to connections between axons and dendrites; and neurons can up- or down-regulate input from other neurons, similar to the weights applied to connections in a neural network. Obviously, real nerve cells are much more complex than these simple mathematical representations, but neural networks do show traits similar to networks of neurons in a brain, so it’s not inconceivable that in the future we could develop a neural network as complex as, or more complex than, a human brain, at which point it could start exhibiting traits suggestive of consciousness.
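To make the analogy concrete, here’s a minimal sketch of a single artificial neuron, assuming a classic all-or-nothing threshold activation (modern networks use smoother activation functions and learned weights, but the idea is the same):

```python
# A single artificial "neuron": weighted inputs, a bias, and a
# threshold activation, loosely analogous to a nerve cell that
# either fires or doesn't. (Illustrative sketch, not a real network.)

def neuron(inputs, weights, bias):
    # The weighted sum plays the role of accumulated synaptic input;
    # the weights play the role of up-/down-regulated connections.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Threshold activation: the neuron "fires" (1) or doesn't (0).
    return 1 if total > 0 else 0

# Example: two input "neurons" feeding one output neuron.
print(neuron([1.0, 0.5], weights=[0.8, -0.4], bias=-0.3))  # fires -> 1
```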
This brings us to the movie definition of “AI,” which is generally a “conscious” AI as or more intelligent than a human: a being with an internal worldview, independent thoughts and opinions, and an awareness of itself in relation to the world. These are currently traits only brains are capable of, and the point at which concepts like “good” or “evil” can maybe start to apply. Again, just because neural networks are modeled after animal brains doesn’t prove they can emulate a brain as complex as ours, but we also can’t prove they definitely won’t be able to with enough technical advancement. So the most we can say right now is that it’s not inconceivable, and if we do ever develop consciousness in our AI, we might not even know until much later, because consciousness is difficult to assess.
The scary part about a hypothetical artificial general intelligence is that once it exists, it can rapidly gain intelligence at a rate orders of magnitude faster than the evolution of intelligence in animals. Once it starts doing its own AI research and creating the next generation of AI, it will become uncontrollable by humanity. What happens after or whether we’ll even get close to this is impossible to know.
Worse: In addition to everything else it’ll be extremely annoying
I heard a different take yesterday from Cory Doctorow: the real concern is global economic collapse!
Not something I’d considered, but I would say a frightening possibility!
;; this wiww affect the globaw twout population :33
It will be as bad as it is now, but with even higher intensity.
We will see it continue to be used as a substitute for research, learning, critical or even surface-level thinking, and interpersonal relationships.
If and when our masters create an AI that is actually intelligent, and maybe even sentient as depicted in movies, it will be a thing that provides biased judgments behind a veneer of perceived objectivity due to its artificial nature. People will see it as a persona completely divorced from the prejudices of its creators, as they do now with ChatGPT. And whoever can influence this new “objective” truth will wield considerable power.
I agree 99% (my only disagreement: those people aren’t our masters, they’re our enemies).
Trust that I agree with you on this. I use the word “master” intentionally, though, as we are subjected to their whims without any say in the matter.
There are also many of us who are (unwittingly) dependent on or addicted to their products and services. You and I both know plenty of people who give in to almost every impulse incentivized by these products, especially when it comes in the form of entertainment.
Our communities are now chock-full of slaves and solicitors. A master is an enemy, yes, but only when his slaves know who owns them.