In my opinion, AI just feels like the logical next step for capitalist exploitation and destruction of culture. Generative AI is (in most cases) just a fancy way for corporations to steal art on a scale that hasn’t been possible before. And then they use AI to fill the internet with slop and misinformation, and actual artists are getting fired from their jobs because the company replaces them with an AI that was trained on their original art. Because of these reasons and some others, it just feels wrong to me to be using AI in such a manner, when this community should be about inclusion and kindness. Wouldn’t it be much cooler if we commissioned an actual artist for the banner or found a nice existing artwork (where the licence fits, of course)? I would love to hear your thoughts!
Are you sure that’s happening? Under the previous mode of capitalism, what kind of companies were hiring artists?
As I understand it, that isn’t the actual gripe from the general perspective of the artist. Instead it’s about copyright, a concept I fundamentally disagree with. I don’t think it’s necessary, and I think an artist’s prosperity being tied to copyright is a symptom of a bigger problem than being usurped by software.
I think there is good art and bad art. I think there is good AI art (tbh I can’t think of any examples, I just think in principle AI art has the capacity to be good) and bad AI art. I think the relative ease of access skews people’s exposure towards slop. I use the term slop as a descriptor for AI art that is sloppy or wholly derivative; not to prejudge it.
Perspectives like yours haven’t compelled me to think they are meaningfully different from those of the Luddites, or of those opposed to introducing computers into the workplace, etc. I genuinely sympathise with those groups, but ultimately wouldn’t have us go back.
Movie studios, VFX houses, advertising agencies, should I keep going? It’s not that all of these people will or can be replaced, but the studios are already hollowing out their staff, and the abstract threat of AI gives studios much more power in negotiations with artists. Since AI, far fewer people are willing to commission artists online, which many young and alternative artists depend on to survive. Why do you think the Hollywood strikes are happening?
I agree that copyright shouldn’t have to exist in an ideal society, but we still live under capitalism. Imagine if Disney could just scoop up all the good indie movies and redistribute them under its own name with massive marketing budgets, taking all the profit and pretending it’s their own work. The original creator would go bankrupt and not be able to make another great movie.
In my opinion, generative AI is doing exactly the same thing, but indirectly. If Disney were to release a fully AI-generated movie, they would still have profited from the work of a bunch of unconsenting and uncredited independent artists.
AI “art” is also not art, because real art requires a conscious and self-aware being to observe the world in a unique way and get inspired to express a new idea in their art. AI is not conscious and therefore cannot observe the world or get any new ideas. There will never be good AI “art”, because AI can only recreate and recombine what already exists (and yeah, I know that AI images are technically unique, but they are still only derived from what the AI was trained on). The best an AI could do is imitate a human as well as possible. It can only succeed in deceiving us, letting us think there is some person behind this art, but there will never be anyone behind it.
I don’t think my earlier reply came through. I’ll try rewriting it.
AI can add to, remove from, change, or refine an input, either text- or image-based, either wholly or partially, which may or may not itself be AI-generated. That feature set certainly allows room for genuine, inspired artistic expression. The way you describe AI art is as though it is all created by asking ChatGPT to draw you something. This isn’t the case, and it neglects the litany of AI model types that are fundamentally different from LLMs: models that humans operate by directly interacting with them in a range of ways.
Let’s say you’re a concept artist for a movie. After replacing you with AI, how does the company instruct the model in the concept to be represented? If they’re just asking ChatGPT to come up with something itself, then sure - your description applies. And the output will be shitty concept art, and the movie will be shittier than it otherwise would be. People might consume it, but it would be a slippery slope towards failure, either because a) people don’t like it, or don’t like it enough for it to reach the critical mass required to spread, or b) someone else does the same uninspired and easy job more cheaply or effectively. If you’re an AI-slop consumer, why watch AI-slop movies when you can just watch AI-slop TikToks?
Good art resonates with people not because humans are easily entertained by pretty flashing lights or whatever an AI can churn out, but because of their relationship to a piece of art which is derived from their human experience. Companies have tried to broaden appeal and lower costs by appealing to the lowest common denominator for centuries, but beyond a certain point it is a failing business model. In my opinion, if some companies want to try, let them find out why there are 1000s of AI-generated movie trailers but no movies.
I think that AI can be used for the concept art in a way that maintains artistic integrity and capacity for artistic expression, by having someone skilled in representing visual concepts operate the AI tool. That someone would be, for all intents and purposes, an artist. In essence, the artist position would not be redundant; the way they do their job would have changed.
Not the OP, but I’ll give my PoV.
AI lets companies cut junior and entry-level artists. Companies only need to retain the top 1% of talent to orchestrate hordes of AI.
While it is still a craft, commercial art is not about being genuine; it is about delivering the product and meeting the deadline while passing QA. AI’s output rate outpaces human labor, and the top 1% can certainly identify which aspects make AI output slop. Which means they can cherry-pick the “OK” parts of the AI output, then review, iterate, and tweak to deliver the product while keeping quality. That process previously involved communication between senior and junior artists. Now companies don’t need the other 99% as a workforce anymore.
What will happen in the long run? Who knows. Companies are known for being keen only on immediate profit.
This tendency is widespread and not limited to the art field, nor is it tied to arguments about the intrinsic value of art. I’d argue this is more of a labor (and capitalism) issue, on top of the people whose art was stolen not getting enough compensation for their work. While I’m not against AI technology itself, its effect on people’s livelihoods and its climate impact make the current AI landscape hard to defend.
Thanks for your input. I agree with you that it is a labour and capitalism issue. This seems to be where your perspective differs from the OP’s.
I guess my fundamental disagreement is with the idea that we should deny ourselves technological advancement because we live under capitalism. Yes, that is the system we will live under for the foreseeable future. I don’t like it, and I don’t like how capital takes advantage of technology. The way capital takes advantage of AI isn’t unique. Generally, significant advancement will bring change, and the biggest impact of that change will be felt by the proletariat. That sucks and we shouldn’t have to put up with it.
Circling back to the topic of the post, OP uses this negative impact as justification to disagree with the apparent use of AI in the community banner art. This is a non sequitur. No one is making a living off of designing Lemmy community banners. The people that run the community simply decided not to arbitrarily deny themselves what they felt to be the best tool for the task. What I’m defending isn’t necessarily the current AI landscape as such, just the technology part I’m interested in.