In my opinion, AI just feels like the logical next step for capitalist exploitation and destruction of culture. Generative AI is (in most cases) just a fancy way for corporations to steal art on a scale that hasn’t been possible before. And then they use AI to fill the internet with slop and misinformation, and actual artists are getting fired from their jobs because the company replaces them with an AI that was trained on their original art. Because of these reasons and some others, it just feels wrong to me to be using AI in such a manner, when this community should be about inclusion and kindness. Wouldn’t it be much cooler if we commissioned an actual artist for the banner or found a nice existing artwork (where the licence fits, of course)? I would love to hear your thoughts!
You should definitely support artists! You know how good it feels to support someone you know? I’m personally going to give my music away for free. I think intellectual property is meant to be shared, but I do recognize that we gotta eat in this parasitic system, yo. How about this? We support artists with our commonwealth? It’s fucking important, man. Culture matters. No need to shift the blame to the individual when it’s the system that’s rotten. Two more ideas, then I’ll fuck off. Guaranteed dignity in death, and defensive, non-coercive, no entanglements protection of holy sites. I’m a deterministic atheist through and through, but man, we gotta heal our fucking souls.
In my opinion, AI just feels like the logical next step for capitalist exploitation and destruction of culture.
I don’t think AI is inherently bad. What’s bad is how we (or well, the corpos) use it. SEO, vibe coding, making slop, you name it.
About training material being stealing: hard agree here. Our copyright laws are broken, but they are right about AI - training amounts to storing works in a retrieval system, which is infringement. Shame they aren’t enforced at all.
What fascinates me is the similarity between AI and photography. That is, both are revolutionary tools in the visual medium. Imagine this thread being an opinion column in an 1800s newspaper, and replace all instances of ‘AI’ with ‘photography’. The arguments all stand, but our perspective on them may change.
How else would someone have been able to get all those chipmunks in one photo?
Taxidermy
You wouldn’t necessarily even need to commission someone. There are plenty of Creative Commons licensed pieces of art that could be used.
There are a lot of talented artists here on lemmy.ml, and I think it would be wise to ask them if they’d be interested in providing a banner image that is not AI generated; surely someone would take up the offer.
If they wanted to do it for free, they would have offered.
Artists do labor for free for the benefit of their communities all the time, myself included, mostly out of the goodness of their hearts. Although maybe Lemmy can offer some compensation if they want to commission something. Tbh, I’ve never approached someone or an organization and said, “hey, I think you should change your logo/banner/whatever, want me to make a better one?” I think that’s a bit forward.
I’m glad you have the privilege of being able to give away your time and work for free.
it’s just a crappy and lazy image regardless of origins, but the fact it is AI makes it crappier
There is absolutely nothing wrong with the tech itself. Your issue is with capitalist relations and the way this technology is used under capitalism. Focus on what the actual problem is. https://dialecticaldispatches.substack.com/p/a-marxist-perspective-on-ai
I read your link. I think my main issue is the framing as though AI is just a new tool that people are afraid of similar to the introduction of the camera.
Even outside of capitalist exploitation, AI generated art suffers from an inherent creative limitation. It’s a derivative and subtractive tool. It can only remix what already exists. It lacks intention and human experience that make art meaningful. The creative process isn’t just about the final image. There are choices, mistakes, revisions, personal investment, and so on. No amount of super long and super specific prompts can do this.
This is why a crude MS Paint drawing or a hastily made meme can resonate more than a “flawless” AI generated piece. Statistical approximation can’t imbue a piece with lived experience or subvert expectations with purpose. It is creative sterility.
I can see some applications of AI generation for the more mundane aspects of creation, like the actions panel in Photoshop. But I think framing creative folks’ objections as an act of self preservation as though we are afraid of technology is a bit of a strawman and reductive of the reality of the situation. Although there are definitely artists that react this way, I admit.
It is true that new tools reshape art. The comparison to photography or Photoshop is flawed, though. Those tools still require direct engagement with the creative process. In the link you provided, the pro-AI case rests on the argument that the photographer composes a shot and manipulates light, in contrast to AI, which automates the creative act itself. That’s where their argument falls apart.
As for democratization, the issue isn’t accessibility (plenty of free, non-exploitative tools already exist for beginners), though that is something that could be improved. AI doesn’t teach someone to draw, operate a camera, paint, iterate, conceptualize, or develop artistic judgment. It lets them skip those steps entirely, resulting in outputs that are aesthetically polished but creatively hollow. True democratization would mean empowering people to create.
I think, ultimately, AI-generated images have their own utility, but fundamentally cannot replace human art as an expression of the human experience and artist intent through their chosen medium. AI-generated textures for, say, wooden planks in a video game do little to nothing to change the end-user’s experience, but just asking AI to create a masterpiece of art fundamentally lacks the artistic process that makes art thought-provoking and important. It isn’t even about being produced artisanally or mass-produced, it’s fundamentally about what art is to begin with, and what makes it resonate.
AI cannot replace art. AI can make the more mundane and tedious aspects of creation smoother, it can be a part of a larger work of art, or it can be used in a similar way to stock images. At the same time, just like AI chatbots are no replacement for human interaction, AI can’t replace human art. It isn’t a matter of morality, or something grander, it’s as simple as AI art just being a tool for guessing at what the user wants to generate, and thus isn’t capable of serving the same function for humanity as art in the traditional sense.
I always like your posts when I see them here, so I really do value your perspective on this.
Thank you, I was hoping I wasn’t going to get eaten alive for my comments. That said, the question asked in the original post is: why is our banner AI generated? And I think our answer should be: it shouldn’t be. If this is going to be a community made of people for people, then the banner should be made by someone from this community, not a capitalist AI image generator. I don’t think that should be controversial or elicit responses that are hostile to the question, even if OP’s intentions are being questioned.
Haha, I went back and forth on whether or not to post my thoughts for quite a while, so I understand being reluctant to post on this. Up front, I am not an artist, which I think is obvious but nevertheless should be stated.
I personally don’t care for the people trying to question OP’s motives, that’s not the point here. Questioning the purpose of an AI image is an extremely salient issue, and one OP has every right to ask. AI is not a “settled issue” in my eyes on the left, and what I shared earlier is easily one of my least strong opinions.
As for the purposes of the banner, I think, personally, whether or not it is AI generated depends on what the users of the community want. If someone wants to put in the time to design a banner, and the people using the community prefer it to the AI banner, then it should change to the artist’s banner. Art made by humans is desired for that artistic process, grappling with the medium as a form of expression, something the viewer can contemplate (in my again untrained, unartistic view), but in the interim AI can at least make serviceable images, especially if run locally and on green energy.
I see AI images fulfilling a similar use to stock images. Good for quickly drafting up something as a visual representation of an idea, horrible for being art as a stand-alone subject to contemplate and appreciate, the skill, the decision making, the expression.
Am I off-base? I dunno, I feel a bit like I got eaten alive in my comment I made earlier. I’m certainly not “pro-AI,” I don’t even use it myself, but at the same time I took issue with how people are framing the conversation.
Even outside of capitalist exploitation, AI generated art suffers from an inherent creative limitation. It’s a derivative and subtractive tool. It can only remix what already exists.
There’s little evidence that this is fundamentally different from how our own minds work. We are influenced by our environment and experiences. The art we create is a product of our material conditions. If you look at art from different eras you can clearly see that it’s grounded in the material reality people live in. Furthermore, an artist can train the AI on their own style, as the video linked in the article shows with a concrete use case. That allows the artist to automate the mechanical work of producing the style they’ve come up with.
It lacks intention and human experience that make art meaningful.
That’s what makes it a tool. A paintbrush or an app like Krita also lacks intention. It’s the human using the tool that has the idea that they want to convey, and they use the tool to do that. We see this already happening a lot with memes being generated using AI tools. A few examples here. It’s a case of people coming up with ideas and then using AI to visualize them so they can share them with others.
This is why a crude MS Paint drawing or a hastily made meme can resonate more than a “flawless” AI generated piece.
If we’re just talking about pressing a button and getting an image sure. However, the actual tools like ComfyUI have complex workflows where the artist has a lot of direction over every detail that’s being generated. Personally, I don’t see how it’s fundamentally different from using a 3D modelling tool like Blender or a movie director guiding actors in execution of the script.
I can see some applications of AI generation for the more mundane aspects of creation, like the actions panel in Photoshop.
Right, I think that’s how these tools will be used professionally. However, there are also plenty of people who aren’t professionals, and don’t have artistic talent. These people now have a tool to flesh out an idea in their heads which they wouldn’t have been able to do previously. I see this as a net positive. The examples above show how this can be a powerful tool for agitation, satire, and political commentary.
Those tools still require direct engagement with the creative process
So do tools like ComfyUI; if you look at the workflow, it very much resembles those tools.
the argument that the photographer composes a shot and manipulates light, in contrast to AI, which automates the creative act itself
I do photography and I disagree here. The photographer looks at the scene, they do not create the scene themselves. The skill of the photographer is in noticing interesting patterns of light, objects, and composition in the scene that are aesthetically appealing. It’s the skill of being able to curate visually interesting imagery. Similarly, what the AI does is generate the scene, and what the human does is curate the content that’s generated based on their aesthetic.
AI doesn’t teach someone to draw, operate a camera, paint, iterate, conceptualize, or develop artistic judgment. It lets them skip those steps entirely, resulting in outputs that are aesthetically polished but creatively hollow. True democratization would mean empowering people to create.
Again, AI is a tool and it doesn’t magically remove the need for people to develop an aesthetic, to learn about lighting, composition, and so on. However, you’re also mixing in mechanical skills like operating the camera which have little to do with actual art. These tools very much do empower people to create, but to create something interesting still takes skill.
It honestly just seems like you want AI to be a stand in for creative thinking and intention rather than it actually enabling creative processes. The examples you provide don’t teach those skills. Everyone has ideas. I have ideas of being a master painter creating incredible paintings, I can visually imagine them in my head, and AI can shit out something that somewhat resembles what I want. It can train on my own style of [insert medium]. But I am always at the mercy of the output of that tool. It would not be a problem if it were a normal tool like a camera or paintbrush. But when you use a thought-limiting tool like AI, it gives you limited results in return. It is always going to be chained to whatever that particular AI has trained on. Artists develop a style over years; it changes from day to day and year to year. AI cannot evolve, yet an artist’s style does just through repetition of creation. AI creates the predictive average of existing works.
I think the biggest thing here is that AI is a limited tool from the ground up rather than enabling creativity. You can’t train AI to develop a new concept or a new idea, that’s reserved to humans alone. It’s that human intangibility that’s yet to be achieved via AI and until sentience is achieved you’re never going to get that from a limited tool like AI. If sentience is achieved, you’d have to recognize its humanity and at that point prompts are no longer needed, it can create its own work.
It honestly just seems like you want AI to be a stand in for creative thinking and intention rather than it actually enabling creative processes.
I think I was pretty clear in what I actually said. I think AI is a tool that automates the mechanical aspect of producing art. In fact, I repeatedly stated that I think the intention and creative thinking comes from the human user of the tool. I even specifically said that the tool does not replace the need for artistic ability.
Everyone has ideas. I have ideas of being a master painter creating incredible paintings, I can visually imagine them in my head, and AI can shit out something that somewhat resembles what I want.
This is just gatekeeping. You’re basically saying that only people who have the technical skills should be allowed to turn ideas in their heads into content that can be shared with others, and tough luck for everyone else.
But I am always at the mercy of the output of that tool. It would not be a problem if it were a normal tool like a camera or paintbrush.
That’s completely false, you’re either misunderstanding how these tools work currently or intentionally misrepresenting how they work. I urge you to actually spend the time to learn how a tool like ComfyUI works and what it is capable of.
It is always going to be chained to whatever that particular AI has trained on.
What it’s trained on is literally millions of images in every style imaginable, and what it is able to do is to blend these styles. The person using the tool can absolutely create a unique style. Furthermore, as I’ve already noted, and you’ve ignored, the artist can train the tool on their own style.
AI cannot evolve, yet an artist’s style does just through repetition of creation.
Yes, AI can evolve the same way an artist evolves: by being trained on more styles. Take a look at the LoRA approach as one example of how easily new styles can be adapted into existing models.
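For a rough sense of what that looks like in practice, here’s a minimal sketch of attaching a LoRA adapter to an existing diffusion model using the Hugging Face diffusers and peft libraries. The model id, rank, and target module names are illustrative assumptions, not a recipe:

```python
import torch
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, get_peft_model

# Load a pretrained base model (placeholder model id).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Wrap only the attention projection layers of the UNet with low-rank
# adapters; the base weights stay frozen, so training on a small set of
# the artist's own images only updates a few million parameters.
lora_config = LoraConfig(
    r=8,                # rank of the low-rank update matrices
    lora_alpha=16,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
pipe.unet = get_peft_model(pipe.unet, lora_config)
pipe.unet.print_trainable_parameters()

# From here a standard denoising training loop over the style images
# would update only the adapter weights, which can then be saved and
# shared as a small "style" file.
```

The point of the sketch is just that the base model stays as-is while a small, swappable adapter carries the new style.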
I think the biggest thing here is that AI is a limited tool from the ground up rather than enabling creativity.
With all due respect, I think that you simply haven’t spent the time to learn how the tool actually works and what it is capable of.
It’s that human intangibility that’s yet to be achieved via AI and until sentience is achieved you’re never going to get that from a limited tool like AI
Replace AI in that sentence with paint brush and it will make just as much sense.
If sentience is achieved, you’d have to recognize its humanity and at that point prompts are no longer needed, it can create its own work.
You’re once again ignoring my core point which is that AI is a tool and it is not meant to replace the human. It is meant to be used by people who have sentience and a critical eye for the specific imagery they’re aiming to produce.
Wouldn’t it be much cooler if we commissioned an actual artist for the banner
I hate it when AI is used to replace the work an artist would have been paid for. But uh, this is a random open-source forum; there’s no funding for artists to make banners. Rejecting AI art – which was voted for by the community – just seems like baseless virtue signalling. No artist is going to get paid if we remove it.
But like if you want to commission an artist with your own money, by all means go ahead. You’ll still most likely need another community vote to approve it though.
That doesn’t change that real artists who made real art will have had their work used without permission or payment to help generate the banner. I’m with OP.
If I drew something myself, those artists would also not be paid. I can understand a deontological argument against using AI trained on people’s art, but for me, the utilitarian argument is much stronger – don’t use AI if it puts an artist out of work.
It’s not about anyone getting paid, it’s about affording basic respect and empathy to people and their work. Using AI sends a certain message of “I don’t care about your consent or opinion towards me using your art”, and I don’t think that this is a good thing for anyone.
I mean, how many of us are pirating stuff?
Thank you, you can’t both love piracy (which lemmy overwhelmingly does) and hate AI
There are plenty of examples where piracy harms no one: devs get paid no matter what, people working on and making shows like South Park have 5-year deals, and many devs get fired right after a game gets released, so they don’t benefit if it does well. Indie games I never pirate; I use the 2-hour Steam window instead to see if I want it.
AI, on the other hand, lol, actively takes away jobs.
There would be no job designing a lemmy banner
Well yeah, I don’t care about IP rights. Nothing has been materially stolen, and if AI improves, then the result could some day in theory be indistinguishable from that of a human who was merely “inspired” by an existing piece of art. At the end of the day, the artist is not harmed by AI plagiarism; the artist is harmed by AI taking what could have been their job.
They’re harmed by both IMO.
how
By systems positing human creativity as a computational exercise
If I saw the artwork myself and it inspired my artwork, would it be any different? Everything is based on everything.
Yeah, but if you drew it yourself then they wouldn’t expect to be paid. Unless you plagiarised them to the degree that would trigger a copyright claim, they would (at worst) just see it as a job that they could have had, but didn’t. Nothing of theirs was directly used, and at least something original of theirs was created. Whereas AI images are wholly based on other work and include no original ideas at all.
You haven’t explained how it would be different in any way. Human artists learn by emulating other artists, and the vast majority of art is derivative in nature. Unless a specific style is specified by the user input, AI images are also not plagiarised to the degree that would trigger a copyright claim. The only actual difference here is that the process is automated and a machine is producing the image instead of a human drawing it by hand.
You’re posting on lemmy.ml; we don’t care much for intellectual property rights here. What we care about is that the working class not be deprived of their ability to make a living.
Agree with that. I don’t think the two are mutually exclusive though?
I agree that they are not mutually exclusive, which is why I usually side against AI. On this particular occasion however, there’s a palpable difference, since no artist is materially harmed.
Real artists use uncited reference art all the time. That person that drew a picture of Catherine the Great for a video game certainly didn’t list the artist of the source art they were looking at when they drew it. No royalties went to that source artist. People stopped buying reference art books for the most part when Google image search became a thing.
Ah hell, a lot of professional graphic artists right now use AI for inspiration.
This isn’t to say that the problem isn’t real and a lot of artists stand to lose their livelihood over it, but nobody’s paying someone to draw a banner for this forum. The best you’re going to get is some artist doing it out of the goodness of their heart when they could be spending their time and effort on a paying job.
Real artists may be influenced, but they still put something of themselves into what they make. AI only borrows from others, it creates nothing.
I realise no-one is paying someone to make a banner for this forum, it would need to be someone choosing to do it because they want there to be a banner. But the real artists whose work was used by the AI to make the banner had no choice in the matter, let alone any chance of recompense.
AI only borrows from others, it creates nothing.
This isn’t an argument, it’s pseudophilosophical nonsense.
But the real artists whose work was used by the AI to make the banner had no choice in the matter, let alone any chance of recompense.
In order to make such a statement you must:
- Know what model was used, and
- Know that it was trained on unlicensed work.
So, what model did the OP use?
I mean, unless you’re just ignorantly suggesting that all diffusion models are trained on unlicensed work. Something that is demonstrably untrue: https://helpx.adobe.com/firefly/get-set-up/learn-the-basics/adobe-firefly-faq.html
Your arguments haven’t been true since the earliest days of diffusion models. AI training techniques are at the point where anybody with a few thousand images, a graphics card, and a free weekend can train a high quality diffusion model.
It’s simply ignorance to suggest that any generated image is using other artists’ work.
Nope, you can’t train a good diffusion model from scratch with just a few thousand images; that is just delusion (I am open to examples though). Adobe Firefly is a black box, so we can’t verify their claims - obviously they wouldn’t admit it if they broke copyright to train their models. We do, however, have strong evidence that Google, OpenAI, and Stability AI used tons of images which they had no licence to use. Also, I still doubt that all of the people who sold on Adobe Stock either knew what their photos were going to be used for, or explicitly wanted that, or just had to accept it to be able to sell their work.
Great counterargument to my first argument by the way 👏
So, what model did the OP use?
Adobe is a massive company with a huge amount to lose if they’re lying to their customers. They have much more credibility than a random anti-AI troll account. Of course you’d want to dismiss them, it’s pretty devastating to your arguments if there are models which are built using artwork freely given by artists.
Firefly was found to use suspect training data too, though… It’s the best of them in that it’s actually making an effort to ethically source the training data, but almost no one uses it because programs from Adobe’s professional suite are expensive as hell.
https://martech.org/legal-risks-loom-for-firefly-users-after-adobes-ai-image-tool-training-exposed/
So what’s the solution for this board, they should just put up a black image? Should they start a crowdfunding to pay an artist?
If it really bothers an artist enough, they could make a banner for the board and ask them to swap out the AI. But, they’ll have to make something that more people like than the AI.
But, they’ll have to make something that more people like than the AI.
No, it does not have to be better than the AI image to be preferable.
Speak for yourself.
Okay, we have your vote; now think about the other people that are also here. It needs to be preferable to the majority, not just you.
Considering AI is really unlikeable, I don’t think that’ll be too hard.
Proof is when it happens.
The banner could be anything or nothing at all, and as long as it isn’t AI generated, I would like it better
I, on the other hand, would not.
Perhaps we should ask ChatGPT what to do about this?
AI bad. Upvotes please.
Intellectual property is made up bullshit. You can’t “steal” a jpeg by making a copy of it, and the idea that creating something based on or inspired by something else is somehow “stealing” it is quite frankly preposterous.
The sooner we as a society disabuse ourselves of this brainworm the better.
Edit: I have very mixed feelings about so-called generative AI, so please do not take this as a blanket endorsement of the technology - but rather a challenge on the concept of “stealing intellectual property,” which I unequivocally do not believe in.
I agree with you. AI is bad for reasons other than that it is stealing IP.
Honestly, it’s because it went in during the early days.
When ML generated art was a novelty, and people hadn’t had a chance to sit down and go “wait, actually, no”.
And it’s an absolute arsepain to replace, because you’ll get 1001 prompt engineers defending slop.
feddit.uk banned generative AI content to make this process easier, and still needs to sweep through and commission new art for a few communities.

Yeah, maybe it would be a good idea to have a new community vote. Can I just start that, or do I have to ask the mods or something? I am pretty new to Lemmy, so I am not really sure how this works.
Lemmy honestly tends to run on the ideas of “be the change you want to see in the world” and “well volunteered”.
Stick a post up, see if people are interested.
You could message the mods. While they don’t seem to have posted for a while, there are mod actions happening still.
And if you don’t hear anything back, put it as a suggestion to the admins.

While they don’t seem to have posted for a while, there are mod actions happening still.
It’s worth noting that sometimes people mod with a different alt than they use for commenting. Just because you don’t see them participating, doesn’t mean they aren’t.
This is pretty good for early AI. I’m impressed.
Yeah, I remember it being feverdream sludge when I first tried it
If that’s what it seems to you, you might want to reread their comment. You’re way off base.
Though this is about Lemmy.world, I think sh.itjust.works has a similarly sad story.
We had a vote for the banner when sh.itjust.works started, where a bunch of artists came forward with art for the banner and some AI guys came in with art as well. This was clearly stated by the AI guys, with no trickery. The community voted in the agora to reject the art of its users in favour of this stable diffusion slop.
I think you can tell I despise AI art. The reason for it here, though, is that the community voted for it over real artists’ time, dedication, and love for the community.
If someone really wanted to change it though one could create a discussion post in the agora, our community voting community, to have it changed. They’d likely need to provide new art which, as an artist, I’m unwilling to do. The community has shown it cares little for the time, effort, and skill involved so somebody with an hour and stable diffusion would win out over the multi-day process of making something meaningful
The community voted in the agora
Is ‘agora’ just a metaphor for expressing preferences via commenting, or is it Fediverse polling software I don’t know about?
It’s a Greek term used by the head mod for their community. Sorry for the confusion
The community has shown it cares little for the time, effort, and skill involved so somebody with an hour and stable diffusion would win out over the multi-day process of making something meaningful
I mean, if the end result is shittier than some AI slop, then I’m not sure the effort matters much. I could pour hours of work into a painting and it’d still be total shit because I’m a really shitty painter. I wouldn’t expect anyone to value my painting just for the work I put in.
Exception being parents when they get their kid’s shitty painting they did at school. They have to pretend to think it’s good and appreciate it.
Right now, anti-AI rhetoric is taking the same unprincipled line that the Luddites pushed forward in attacking machinery. They identified a technology linked to their proletarianization and thus a huge source of their new misery, but the technology was not at fault. Capitalism was.
What generative AI is doing is making art less artisanal. The independent artists are under attack, and are being proletarianized. However, that does not mean AI itself is bad. Copyright, for example, is bad as well, but artists depend on it. The same reaction against AI was had against the camera for making things like portraits and still-lifes more accessible, but nowadays we would not think photography to be anything more than another tool.
The real problems with AI are its massive energy consumption, its over-application in areas where it actively harms production and usefulness, and its application under capitalism where artists are being punished while corporations are flourishing.
In this case, there’s no profit to be had. People do not need to hire artists to make a banner for a niche online community. Hell, this could have been made using green energy. These are not the same instances that make AI harmful in capitalist society.
Correct analysis of how technologies are used, how they can be used in our interests vs the interests of capital, and correct identification of legitimate vs illegitimate use-cases are where we can succeed and learn from the mistakes our predecessors made. Correct identification of something linked to deteriorating conditions combined with misanalyzing the nature of how they are related means we come to incorrect conclusions, like when the Luddites initially started attacking machinery, rather than organizing against the capitalists.
Hand-created art as a medium of human expression will not go away. AI can’t replace that. What it can do is make it easier to create images that don’t necessarily need to have that purpose, as an expression of the human experience, like niche online forum banners or conveying a concept visually. Not all images need to be created in artisanal fashion, just like we don’t need to hand-draw images of real life when a photo would do. Neither photos nor AI can replace art. Not to mention, there is an art to photography as well; each human use of any given medium to express the human experience can be artisanal.
It’s worth noting that the argument regarding massive energy consumption is no longer true. Models that perform better than ones that required a data centre to run just a year ago can already be run on a laptop today. Meanwhile, people are still finding lots of new ways to optimize them. There is little reason to think they’re not going to continue getting more efficient for the foreseeable future.
Fair point, but I do think that until we see more widespread adoption of renewables in the US and other heavy-polluters, energy use in general is a hot topic we are already beyond capacity for. There needs to be a real qualitative leap to green energy some point soon, and we can’t just rely on the PRC to electrify the world if the US is intent on delaying that shift as much as possible.
Oh I completely agree there.
The Luddites weren’t simply “attacking machinery” though, they were attacking the specific machinery owned by specific people exploiting them and changing those production relations.
And due to the scale of these projects and the amount of existing work they require in their construction, there are no non-exploitative GenAI systems
Yes, I’m aware that the Luddites weren’t stupid and purely anti-tech. However, labor movements became far more successful when they didn’t attack machinery, but directly organized against capital.
GenAI exists. We can download models and run them locally, and use green energy. We can either let capitalists have full control, or we can try to see if we can use these tools to our advantage too. We don’t have the luxury of just letting the ruling class have all of the tools.
These systems are premised on the idea that human thought and creativity are matters of calculation. This is a deeply anti-human notion.
https://aeon.co/essays/can-computers-think-no-they-cant-actually-do-anything
Human thought is what allows us to change our environment. Just as our environment shapes us, and creates our thoughts, so too do we then reshape our environment, which then reshapes us. This endless spiral is the human experience. Art plays a beautiful part in that expression.
I’m a Marxist-Leninist. That means I am a materialist, not an idealist. Ideas are not beamed into people’s heads, they aren’t the primary mover. Matter is. I’m a dialectical materialist, a framework and worldview first really brought about by Karl Marx. Communism is a deeply human ideology. As Marx loved to quote, “nothing human is alien to me.”
I don’t appreciate your evaluation of me, or my viewpoint. Fundamentally, it is capitalism that is the issue at hand, not whatever technology is caught up in it. Opposing the technology whole-cloth, rather than the system that uses it in the most nefarious ways, is an error in strategy. We must use the tools we can, in the ways we need to. AI has use cases, it also is certainly overused and overapplied. Rejecting it entirely and totally on a matter of idealist principles alone is wrong, and cedes the tools purely to the ruling class to use in its own favor, as it sees fit.
Matter being the primary mover does not mean that ideas and ideals don’t have consequences. What is the reason we want the redistribution of material wealth? To simply make evenly sized piles of things? No, it’s because we understand something about the human experience and human dignity. Why would Marx write down his thoughts, if not to try to change the world?
I never for one second suggested that thoughts had no purpose or utility, or that we shouldn’t want to change the world. This is, again, another time you’ve misinterpreted me.
All I am saying is that, baked into the design and function of these material GenAI systems, is a model of human thought and creativity that justifies subjugation and exploitation.
Ali Alkhatib wrote a really nice (short) essay that, while it’s not saying exactly what I’m saying, outlines ways to approach a definition of AI that allows the kind of critique that I think both of us can appreciate: https://ali-alkhatib.com/blog/defining-ai
And due to the scale of these projects and the amount of existing work they require in their construction, there are no non-exploitative GenAI systems
That hasn’t been true for years now.
AI training techniques have rapidly improved to the point where they allow people to train completely new diffusion models from scratch with a few thousand images on consumer hardware.
In addition, and due to these training advancements, some commercial providers have trained larger models using artwork specifically licensed to train generative models. Adobe Firefly, for example.
It isn’t the case, and hasn’t been for years, that you can simply say that any generative work is built on “stolen” work.
Unless you know what model the person used, it’s just ignorance to accuse them of using “exploitative” generative AI.
Can you provide a few real-life examples of images made with a model trained on just “a few thousand images on consumer hardware”, along with stats on how many images, where those images were from, and the computing hardware & power expended (including in the making of the training program)? Because I flat out do not believe that one of those was capable of producing the banner image in question.
You are probably confusing fine-tuning with training. You can fine-tune an existing model to produce more output in line with sample images, essentially embedding a default “style” into everything it produces afterwards (e.g. LoRAs). That can be done with such a small image set, but it still requires the full model, which was trained on likely billions of images.
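To make the distinction concrete, here’s a minimal sketch, assuming the Hugging Face diffusers API, of how a LoRA “style” gets used at generation time: the tiny adapter file is loaded on top of the full pretrained model rather than replacing it. The model id and file path are illustrative placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

# The full base model, trained on a very large image corpus - this is the
# part that cannot be recreated from a few thousand sample images.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The fine-tuned adapter: a small file produced from the sample images.
# It only nudges the base model toward the embedded style.
pipe.load_lora_weights("./my-style-lora")

image = pipe("a forest clearing, in the embedded style").images[0]
image.save("styled_output.png")
```

So the “few thousand images” only ever produce the small adapter; everything else still comes from the large pretrained model underneath.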