• Rose@slrpnk.net · +21/−2 · 6 hours ago

    The currently hot LLM technology is very interesting, and I believe it has legitimate use cases if we develop it into tools that assist work. (For example, I’m very intrigued by what’s happening in the accessibility field.)

    I mostly have a problem with the AI business: ludicrous use cases (shoving AI into places where it has no business being), sheer arrogance about the sociopolitics in general, and the environmental impact. LLMs aren’t good enough for “real” work, but snake-oil salesmen keep saying they can do it, and uncritical people keep falling for it.

    And of course, the social impact was just not what we were ready for. “Move fast and break things” may be a good mantra for developing tech, but not for releasing stuff that has vast social impact.

    I believe the AI business and the tech hype cycle are ultimately harming the field. Until now, AI technologies were gradually developed and integrated into software where they served a purpose. Now the field is marred with controversy for decades to come.

  • skisnow@lemmy.ca · +12/−1 · 5 hours ago

    The reason most web forum posters hate AI is that AI is ruining web forums by polluting them with inauthentic garbage. Don’t treat it like it’s some sort of irrational bandwagon.

  • ssillyssadass@lemmy.world · +7/−6 · 4 hours ago

    I find it very funny how a mere mention of the two letters A and I will cause some people to seethe and fume and go on rants about how much they hate AI, like a conservative upon seeing the word “pronouns.”

  • chunes@lemmy.world · +6/−12 · 6 hours ago

    I’m a lot more sick of the word ‘slop’ than I am of AI. Please, when you criticize AI, form an original thought next time.

  • rustydrd@sh.itjust.works · +28/−8 · 17 hours ago

    Lots of AI is technologically interesting and has tons of potential, but the chatbot and image/video generation stuff we have now is just dumb.

    • MrMcGasion@lemmy.world · +26/−3 · edited · 16 hours ago

      I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work with potential, throwing more and more hardware and money at LLMs and generative AI models because they don’t understand the technology and see it as a way to get rich and powerful quickly.

      • haungack@lemmy.dbzer0.com · +1 · 2 hours ago

        I don’t know if the current AI phase is a bubble, but I agree that if it were a bubble and burst, it wouldn’t somehow stop or end AI; it would cause a new wave of innovation instead.

        I’ve seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn’t exactly die.

      • NewDayRocks@lemmy.dbzer0.com · +6/−2 · 13 hours ago

        AI is good and cheap now because businesses are funding it at a loss, so I’m not sure what you mean here.

        The problem is that it’s cheap, so anyone can make whatever they want, and most people make low-quality slop; hence it’s not “good” in your eyes.

        Making a cheap or efficient AI doesn’t help the end user in any way.

        • MrMcGasion@lemmy.world · +2 · 4 hours ago

          I’m using “good” in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint; continuing to throw money and data at it will only yield minimal improvement.

          What I mean by “good AI” is the potential of new types of AI models to be trained for things like diagnosing cancer and other predictive tasks we haven’t thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).

          The work of training new, useful AI models is going to be done by scientists and researchers, probably on limited budgets because there won’t be a clear profit motive, and they won’t be able to afford thousands of $20,000 GPUs like the ones being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, the used hardware of today will be more affordable and hopefully actually get used for useful AI projects.

          • NewDayRocks@lemmy.dbzer0.com · +1 · 3 hours ago

            Ok. Thanks for clarifying.

            Although I am pretty sure AI is already used in the medical field for research and diagnosis. The “AI everywhere” trend you are seeing is the result of everyone trying to stick AI into everything, every which way.

            The thing about the AI boom is that lots of money is being invested into all fields. A bubble pop would result in investment money drying up everywhere, not make access to AI more affordable as you are suggesting.

        • SolarBoy@slrpnk.net · +6 · 9 hours ago

          It appears good and cheap, but it’s actually burning money, energy, and water like crazy. I think somebody mentioned that generating a 10-second video consumes as much energy as riding a bike for 100 km.

          It’s not sustainable. I think what the person above you is referring to is whether we ever manage to make LLMs and the like that can run locally on a phone or laptop with good results. That would let people experiment and try things out themselves, instead of depending on a monthly subscription to some service that can change at any time.

          • krunklom@lemmy.zip · +1 · 8 hours ago

            I mean, I have a 15-amp fuse in my apartment, and a 10-second video takes like 10 minutes to make. I don’t know how much energy a 4090 draws, but anyone who has an issue with me using mine to generate a 10-second video had better not play PC games.

          • NewDayRocks@lemmy.dbzer0.com · +1 · 8 hours ago

            You and OP are misunderstanding what is meant by good and cheap.

            It’s not cheap from a resource perspective like you say. However that is irrelevant for the end user. It’s “cheap” already because it is either free or costs considerably less for the user than the cost of the resources used. OpenAI or Meta or Twitter are paying the cost. You do not need to pay for a monthly subscription to use AI.

            So the quality of the content created is not limited by cost.

            If the AI bubble popped, this won’t improve AI quality.

  • Brotha_Jaufrey@lemmy.world · +21 · 18 hours ago

    Not all AI is bad. But there’s enough widespread AI that’s helping cut jobs, spreading misinformation (or in some cases, actual propaganda), creating deepfakes, etc, that in many people’s eyes, it paints a bad picture of AI overall. I also don’t trust AI because it’s almost exclusively owned by far right billionaires.

    • DeathByBigSad@sh.itjust.works · +7 · 13 hours ago

      Machines replacing people is not a bad thing if they can actually perform the same or better; the solution to unemployment would be Universal Basic Income.

      • ChickenLadyLovesLife@lemmy.world · +4/−1 · 4 hours ago

        Unfortunately, UBI is just one solution to unemployment. Another solution (and the one apparently preferred by the billionaire rulers of this planet) is letting the unemployed rot and die.

      • petrol_sniff_king@lemmy.blahaj.zone · +6/−1 · 13 hours ago

        For labor people don’t like doing, sure. I can’t imagine replacing a friend of mine with a conversation machine that performs the same or better, though.

    • surph_ninja@lemmy.world · +1 · 3 minutes ago

      You’re repeating debunked claims that are being pushed by tech giants to lobby for laws to monopolize AI control.

      I’d rather read AI crap than this idiocy.

    • Electricd@lemmybefree.net · +3/−2 · edited · 3 hours ago

      AI is literally making people dumber: https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf

      We surveyed 319 knowledge workers who use GenAI tools (e.g., ChatGPT, Copilot) at work at least once per week, to model how they enact critical thinking when using GenAI tools, and how GenAI affects their perceived effort of thinking critically. Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship. Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows. To that end, our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers.

      I would not say “can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving” equals “literally making people dumber”. A sample size of 319 isn’t really representative anyway, and they mainly sampled a specific type of person. People switch from searching to verifying, which doesn’t sound too bad if done correctly. The authors equate critical thinking with verifying everything (“Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort”); I’m not sure I agree with that.

      This study is also aimed only at people at work rather than regular use. I personally discovered so many things with GenAI, and I know to always question what the model says on specific topics or questions, because models tend to hallucinate. You could also say the internet made people dumber, but those who know how to use it will be smarter.

      https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/

      They had to write an essay in 20 minutes… obviously most people would just generate the whole thing and fix little problems here and there. If you have to think less because you’re just fixing stuff instead of inventing, well, yes, you use your brain less. Does that make you dumb? It’s a bit like saying paying by card makes you dumber than paying in cash, because with cash you have to count how much to hand over and how much change you should get back.

      Yes, if you get helped by a tool or someone, it will be less intensive for your brain. Who would have thought?!

    • Electricd@lemmybefree.net · +2/−2 · 3 hours ago

      Are being used to push fascist ideologies into every aspect of the internet:

      Everything can be used for that. If anything, I believe AI models are too restricted and tend not to argue on controversial subjects, which prevents you from learning anything. Censorship sucks

    • Electricd@lemmybefree.net · +1/−2 · edited · 3 hours ago

      They are a massive privacy risk:

      I do agree on this, but at this point everyone uses Instagram, Snapchat, Discord, and whatever else to share their DMs, which are probably being sniffed by the NSA and used by companies for profiling. People are never going to change.

    • Blue_Morpho@lemmy.world · +9/−8 · 11 hours ago

      AI is literally making people dumber:

      And books destroyed everyone’s memory. People used to have fantastic memories.

      They are a massive privacy risk:

      No different than the rest of cloud tech. Run your AI local like your other self hosting.

      Are being used to push fascist ideologies into every aspect of the internet:

      Hitler used radio to push fascism into every home. It’s not the medium, it’s the message.

      And they are a massive environmental disaster:

      AI uses a GPU just like gaming uses a GPU. Building a new AI model uses the same energy that Rockstar spent developing GTA5. But it’s easier to point at a centralized data center polluting the environment than at thousands of game developers spread across multiple offices creating even more pollution.

      Stop being a corporate apologist

      Run your own AI! Complaining about “corporate AI” is like complaining about corporate email. Host it yourself.

    • lmmarsano@lemmynsfw.com · +20/−18 · 23 hours ago

      Do you really need to have a list of why people are sick of LLM and AI slop?

      With the number of times that refrain is regurgitated here ad nauseam, need is an odd way to put it. Sick of it might fit sentiments better. Done with this & not giving a shit is another.

    • AnonomousWolf@lemmy.world · +24/−46 · edited · 23 hours ago

      If you ever take a flight for holiday, or even drive long distance and cry about AI being bad for the environment then you’re a hypocrite.

      Same goes for if you eat beef, or having a really powerful gaming rig that you use a lot.

      There are plenty of valid reasons AI is bad, but the argument for the environment seems weak, and most people using it are probably hypocrites. It’s barely a drop in the bucket compared to other things.

      • Jankatarch@lemmy.world · +22 · 18 hours ago

        Texas has just asked residents to take fewer showers while data centers built specifically for LLM training continue operating.

        This is more like feeling bad for not using a paper straw while the local factory dumps its used oil into the community river.

        • AnonomousWolf@lemmy.world · +1/−4 · edited · 4 hours ago

          Maybe they should cut down on beef first; it uses far more water than AI and emits far more CO2.

          • 1 kg Beef = 60kg CO2 - source
          • 1000km Return flight = 314kg CO2 - source
          • 1 Bitcoin transaction = 645kg of CO2 - source
          • 1000 AI prompts = 3kg of CO2 - source
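Taking the figures in the list above at face value (the per-prompt number varies widely by model and study, so this is only a sketch of the implied ratio):

```python
# Back-of-envelope arithmetic using the CO2 figures listed above.
kg_co2_per_kg_beef = 60        # claimed: 1 kg beef = 60 kg CO2
kg_co2_per_1000_prompts = 3    # claimed: 1000 AI prompts = 3 kg CO2

# How many prompts emit as much CO2 as a single kg of beef?
prompts_per_kg_beef = kg_co2_per_kg_beef / kg_co2_per_1000_prompts * 1000
print(prompts_per_kg_beef)  # 20000.0
```

So by these numbers, one kilogram of beef corresponds to roughly twenty thousand prompts.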
      • BroBot9000@lemmy.world · +26/−10 · edited · 23 hours ago

        Ahh, so are you going to acknowledge the privacy invasion and brain rot caused by AI, or are you just going to focus on dismissing the environmental concerns? I linked more than just the environmental impacts.

        • Draces@lemmy.world · +16/−10 · 21 hours ago

          Uh, dismissing that concern seems like a valid point? Do people have to comprehensively discredit the whole list to reply?

      • Sl00k@programming.dev · +6/−12 · 18 hours ago

        This echo chamber isn’t ready for this logical discussion yet unfortunately lol

        • CXORA@aussie.zone · +11/−1 · 15 hours ago

          When someone disagrees with me - echo chamber.

          When someone agrees with me - logical discussion.

      • Randomgal@lemmy.ca · +14/−20 · 21 hours ago

        You’re getting downvoted for speaking the truth to an echo chamber my guy.

        • Barrymore@sh.itjust.works · +27/−5 · 20 hours ago

          But he isn’t speaking the truth. AI itself is a massive strain on the environment, without any true benefit. You are being fed hype and lies by con men. Data centers being built to supply AI are using water and electricity at alarming rates, taking those resources away from the people living nearby and raising the cost of utilities at the same time.

          https://www.realtor.com/advice/finance/ai-data-centers-homeowner-electric-bills-link/

          • Blue_Morpho@lemmy.world · +1/−6 · 11 hours ago

            AI itself is a massive strain on the environment, without any true benefit

            Rockstar developing GTA5: 6,000 employees, at roughly 150 square feet of office per employee (https://unspot.com/blog/how-much-office-space-do-we-need-per-employee/) and 20 kWh per square foot for a large office (https://esource.bizenergyadvisor.com/article/large-offices):

            6,000 × 150 × 20 kWh ≈ 18,000,000,000 watt-hours

            vs

            10,000,000,000 watt-hours for ChatGPT training (https://www.washington.edu/news/2023/07/27/how-much-energy-does-chatgpt-use/)

            There are more 3d games developed each year than companies releasing new AI models.
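The commenter’s office-energy estimate can be reproduced directly from the quoted figures (a sketch only: the per-square-foot number is an annual figure, so a multi-year development would be proportionally larger):

```python
# Rough reproduction of the comparison above, using the cited figures.
employees = 6_000            # claimed Rockstar headcount for GTA5
sqft_per_employee = 150      # cited office space per employee
kwh_per_sqft = 20            # cited large-office usage per square foot

office_wh = employees * sqft_per_employee * kwh_per_sqft * 1000  # kWh -> Wh
print(office_wh)             # 18000000000 (18 GWh)

chatgpt_training_wh = 10_000_000_000  # ~10 GWh, per the linked UW article
print(office_wh / chatgpt_training_wh)  # 1.8
```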

          • Ek-Hou-Van-Braai@piefed.social (OP) · +4/−10 · 17 hours ago

            The same can be said for taking flights to go on holiday.

            Flying emits far more CO2 and supports the oil industry.

          • Sl00k@programming.dev · +8/−9 · 18 hours ago

            This is valid to all data centers serving all websites. Your take is a criticism of unregulated capitalism, not AI.

            Beef farming is a far, far more impactful discussion, yet here we are.

            • CXORA@aussie.zone · +8 · 15 hours ago

              AI takes far more power to serve a single request than a website does, though.

              And remember, AI requires those websites too, for training data.

              So it’s not just more power-hungry; it also has the initial power consumption added on top.

          • Draces@lemmy.world · +5/−9 · 19 hours ago

            And your car or flight is a massive strain on the environment. I think you’re missing the point. There’s a way to use tools responsibly. We’ve taken the chains off, and that’s obviously a problem, but the AI hate here is irrational.

          • absentbird@lemmy.world · +4/−7 · 20 hours ago

            The problem is the companies building the data centers; they would be just as happy to waste the water and resources mining crypto or hosting cloud gaming, if not for AI it would be something else.

            In China they’re able to run DeepSeek without any water waste, because they cool the data centers with the ocean. DeepSeek also uses a fraction of the energy per query and is investing in solar and other renewables for energy.

            AI is certainly an environmental issue, but it’s only the most recent head of the big tech hydra.

          • Honytawk@lemmy.zip · +6/−15 · 20 hours ago

            AI uses 1/1000 the power of a microwave.

            Are you really sure you aren’t the one being fed lies by con men?

            • jimjam5@lemmy.world · +7 · edited · 14 hours ago

              What? Elon Musk’s xAI data center in Tennessee (when fully expanded and operational) will need 2 GW of power, as much as some entire cities draw.

            • Ace T'Ken@lemmy.ca · +12 · edited · 17 hours ago

              Hi. I’m in charge of an IT firm that has been contracted, somewhat unwillingly, to build out one of these data centers in our city. We are currently in the groundbreaking phase, but I am looking at the papers and power requirements. You are absolutely wrong about the power requirements unless you mean per query under a light load on an easy plan, but these centers will be handling millions if not billions of queries per day. Keep in mind that a single user query can fan out into dozens, hundreds, or thousands of separate queries. Generating a single image takes dramatically more than you are stating.

              Edit: I don’t think your statement addresses the amount of water required, either. There are serious concerns that the massive water reservoir and lake near where I live will not be anywhere close to enough.

              Edit 2: Also, we were told to spec for at least 10x growth within the next 5 years, which, unless there are massive gains in efficiency, I don’t think anywhere on the planet can meet, even if the models become substantially more efficient.

          • Randomgal@lemmy.ca · +3/−11 · 20 hours ago

            Do you really think those data centers wouldn’t have been built if AI didn’t exist? Do you really think those municipalities would have turned down the same amount of money if it was for something else but equally destructive?

            What I’m hearing is you’re sick of municipal governance being in bed with big business. That you’re sick of big business being allowed to skirt environmental regulations.

            But sure. Keep screaming at AI. I’m sure the inanimate machine will feel really bad about it.

      • SugarCatDestroyer@lemmy.world · +5/−10 · 23 hours ago

        Hypocrisy could be called the primitive nature of man, who chooses whatever is easier because he is designed that way. Humanity is like a cancerous tumor on the planet.

  • Deflated0ne@lemmy.world · +63/−7 · 24 hours ago

    The problem isn’t AI. The problem is Capitalism.

    The problem is always Capitalism.

    AI, Climate Change, rising fascism, all our problems are because of capitalism.

    • Ofiuco@piefed.ca · +10/−17 · 21 hours ago

      Wrong.
      The problem is humans: the same things that happen under capitalism can (and would) happen under any other system, because humans are the ones who make these things happen or allow them to happen.

      • zeca@lemmy.ml · +11/−1 · 17 hours ago

        Problems would exist under any system, but not the same problems. Each system has its own set of problems and challenges. Just look at history: problems change. Of course you can find analogies between problems, but their nature changes with our systems. Hunger, child mortality, pollution, having no free time, war, censorship, mass surveillance… these are not constant through history. They happen more or less depending on the social systems in place, which vary constantly.

      • Eldritch@piefed.world · +8/−4 · 21 hours ago

        While you aren’t wrong about human nature, I’d say you’re wrong about systems. How would the same thing happen under an anarchist system? Or under an actual communist (not Marxist-Leninist) system? Those account for human nature and aim to turn it against itself.

        • Ofiuco@piefed.ca · +8/−3 · 20 hours ago

          It will happen regardless, because we are not machines; we don’t follow theory, laws, instructions, or whatever a system tells us perfectly, without little changes here and there.

          • pebbles@sh.itjust.works · +3/−1 · 20 hours ago

            I think you are underestimating how adaptable humans are. We absolutely conform to the systems that govern us, and they are NOT equally likely to produce bad outcomes.

            • JargonWagon@lemmy.world · +2/−4 · 18 hours ago

              Every system eventually ends with someone corrupted by power and greed wanting more. Putin and his oligarchs, Trump and his oligarchs… Xi isn’t great, but at least I haven’t heard news about the Uyghur situation for a couple of years now. I hope things are better there nowadays and people aren’t going missing anymore just for speaking out against their government.

              • pebbles@sh.itjust.works · +2 · 12 hours ago

                I mean you’d have to be pretty smart to make the perfect system. Things failing isn’t proof that things can’t be better.

          • Eldritch@piefed.world · +2/−2 · 20 hours ago

            I see, so you don’t understand. Or simply refuse to engage with what was asked.

        • Ace T'Ken@lemmy.ca · +3/−1 · 18 hours ago

          I’ll answer: because some people see these systems as “good” regardless of political affiliation, want them furthered, and see any cost as worth it. If an anarchist or communist sees these systems in a positive light, they will absolutely try to use them at scale. These people absolutely exist, and you can find many examples of them on Lemmy. Try DB0.

          • Eldritch@piefed.world · +3 · 15 hours ago

            And the point of anarchist or actually communist systems is that such scale would be minuscule, not massive national or unanswerable state scales.

            And yes, I’m an anarchist. I know DB0 and their instance and generally agree with their stance - because it would allow any one of us to effectively advocate against it if we desired to.

            There would be no tech broligarchy forcing things on anyone. They’d likely all be hanged long ago. And no one would miss them as they provide nothing of real value anyway.

            • Blue_Morpho@lemmy.world · +1 · 11 hours ago

              And the point of anarchist or actual communist systems is that such scale would be miniscule.

              Every community running their own AI would be even more wasteful than corporate centralization. It doesn’t matter what the system is if people want it.

              • Eldritch@piefed.world · +1 · 6 hours ago

                The point is, most wouldn’t. It’s of little real use currently, especially the LLM bullshit. The communities would have infinitely better things to put resources toward.

                • Blue_Morpho@lemmy.world · +1 · 18 minutes ago

                  The point is, most wouldn’t.

                  People currently want it despite it being stupid which is why corporations are in a frenzy to be the monopoly that provides it. People want all sorts of stupid things. A different system wouldn’t change that.

            • Ace T'Ken@lemmy.ca · +1 · 13 hours ago

              DB0 has a rather famous record of banning users who do not agree with AI. See !yepowertrippinbastards@lemmy.dbzer0.com or others for many threads complaining about it.

              You have no way of knowing what the scale would be as it’s all a thought experiment, however, so let’s play at that. if you see AI as a nearly universal good and want to encourage people to use it, why not incorporate it into things? Why not foist it into the state OS or whatever?

              Buuuuut… keep in mind that in previous communist regimes (even if you disagree that they were “real” communists), what the state says goes. If the state is actively pro-AI, then by default, you are using it. Are you too good to use what your brothers and sisters have said is good and will definitely 100% save labour? Are you being wasteful, Comrade? Why do you hate your country?

              • Eldritch@piefed.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                6 hours ago

                Yes, I have seen posts on it. Suffice to say, despite being an anarchist, I don’t have an account there for reasons, and I don’t agree with everything they do.

                The situation with those bans I might consider heavy-handed and perhaps overreaching. But by the same token, it’s a bit of a reflection of some of those who were banned: overzealous, lacking nuance, etc.

                The funny thing is, they pretty much dislike the tech bros as much as anyone here does. You generally won’t ever find them defending their actions. They want AI they can run from their homes, not something snarfing up massive public resources, massively contributing to climate change, or stealing anyone’s livelihood. Hell, many of them want to run off-grid on wind and solar. But, as always happens with the left: we can agree with each other 90%, but will never tolerate or understand each other because of the 10%.

                PS

                We do know the scale. Your use of “the state” with reference to anarchism implies you’re unfamiliar with it. Anarchism and communism are against “the state” for the very reasons you’re wary of it: it’s too powerful and unanswerable.

      • Tja@programming.dev
        link
        fedilink
        arrow-up
        3
        arrow-down
        4
        ·
        21 hours ago

        Can, would… and did. The list of environmental disasters in the Soviet Union is long and intense.

    • SugarCatDestroyer@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      14
      ·
      23 hours ago

      Rather, our problem is that we live in a world where the strongest survive, and the strongest does not mean the smartest… So alas, we will always be in complete shit until we disappear.

      • chuckleslord@lemmy.world
        link
        fedilink
        arrow-up
        11
        arrow-down
        2
        ·
        22 hours ago

        That’s a pathetic, defeatist world view. Yeah, we’re victims of our circumstances, but we can make the world a better place than what we were raised in.

        • rumba@lemmy.zip
          link
          fedilink
          English
          arrow-up
          4
          ·
          21 hours ago

          You can try, and you should try. But some handful of generations ago, some assholes were in the right place at the right time and struck it rich. The ones who figured out generational wealth ended up with a disproportionate amount of power. The formula for using money to make more money was handed down, coddled, and protected to keep the rich and powerful in power. Even 100 Luigis wouldn’t make the tiniest dent in the oligarch pyramid, as others would just swoop in and consume their part.

          Any lifelong pursuit you have to make the world a better place than you were raised in will be wiped out with a scribble of black Sharpie on Ministry of Truth letterhead.

        • SugarCatDestroyer@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          22 hours ago

          Well, you can believe that there is a chance, but there is none. It can only be created with sweat and blood. There are no easy ways, you know, and sometimes there are none at all, and sometimes even creating one seems like a miracle.

  • r00ty@kbin.life
    link
    fedilink
    arrow-up
    121
    arrow-down
    1
    ·
    1 day ago

    Now see, I like the idea of AI.

    What I don’t like are the implications, and the current reality of AI.

    I see businesses embracing AI without fully understanding its limits: stopping the hiring of junior developers, often firing large numbers of seniors, because they think AI, a group of cheap post-grad vibe programmers, and a handful of seasoned seniors will equal the workforce they got rid of, when AI, while very good, is not ready to sustain this. It is destroying career progression for the industry, and even if/when they realise it was a mistake, it might already have devastated the industry by then.

    I see the large tech companies tearing through the web, illegally sucking up anything they can access to pull into their ever more costly models, with zero regard for the effects on the economy, the cost to the servers they are hitting, or the environmental toll from the huge power draw that creating these models requires.

    It’s a nice idea, but private business cannot be trusted to do this right; we’re seeing how to do it wrong, live, before our eyes.

    • MissJinx@lemmy.world
      link
      fedilink
      arrow-up
      4
      arrow-down
      2
      ·
      14 hours ago

      tbf, right now I think AI is just a tool… in 3 years it will be a really impactful problem

    • WanderingThoughts@europe.pub
      link
      fedilink
      arrow-up
      33
      ·
      1 day ago

      And the whole AI industry is holding up the stock market, while AI has historically always run the hype cycle and crashed into an AI winter. Stock markets do crash after billions pumped into a sector suddenly turn out to be not worth as much. Almost none of these AI companies run a profit, and they have no prospect of becoming profitable. It’s when everybody starts yelling that this time it’s different that things really become dangerous.

      • merc@sh.itjust.works
        link
        fedilink
        arrow-up
        5
        ·
        7 hours ago

        and don’t have any prospect of becoming profitable

        There’s a real twist here in regards to OpenAI.

        They have some kind of weird corporate structure where OpenAI is a non-profit and it owns a for-profit arm. But the deal they have with Softbank is that they have to transition to a for-profit by the end of the year or they lose out on the $40 billion Softbank invested. If they don’t manage to do that, Softbank can withhold something like $20B of the $40B, which would be catastrophic for OpenAI. Transitioning to a for-profit is not something that can realistically be done by the end of the year, even if everybody agreed on that transition, and key people don’t agree on it.

        The whole bubble is going to pop soon, IMO.

      • sp3ctr4l@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        15
        ·
        edit-2
        1 day ago

        Yep, exactly.

        They knew the housing/real estate bubble would pop, as it currently is…

        … So, they made one last gambit on AI as the final bubble that would magically become superintelligent and solve literally all problems.

        This never would have worked, and is not working, because the underlying tech of LLMs has no actual mechanism by which it would or could develop complex, critical, logical analysis / theorization / metacognition that isn’t just a schizophrenic manic episode.

        LLMs are fancy, inefficient autocomplete algos.

        That’s it.

        They achieve a simulation of knowledge via consensus, not analytic review.

        They can never be more intelligent than an average human with access to all the data they’ve … mostly illegally stolen.

        The entire bet was ‘maybe superintelligence will somehow be an emergent property, just give it more data and compute power’.

        And then they did that, and it didn’t work.

          • sp3ctr4l@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            7
            arrow-down
            2
            ·
            edit-2
            1 day ago

            I mean, I also agree with that, lol.

            There absolutely are valid use cases for this kind of ‘AI’.

            But it is very, very far from the universal panacea that the capital class seems to think it is.

            • Ek-Hou-Van-Braai@piefed.socialOP
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              3
              ·
              1 day ago

              When all the hype dies down, we will see where it’s actually useful. But I can bet you it will have uses; it’s been very helpful in making certain aspects of my life a lot easier, and I know many who say the same.

          • WanderingThoughts@europe.pub
            link
            fedilink
            arrow-up
            5
            arrow-down
            1
            ·
            1 day ago

            That too is the classic hype cycle. After the trough of disillusionment (and that’s going to be a deep one, from the look of things), people figure out where it can be used profitably in its own niches.

            • sp3ctr4l@lemmy.dbzer0.com
              link
              fedilink
              English
              arrow-up
              7
              arrow-down
              1
              ·
              1 day ago

              … Unless its mass proliferation of shitty broken code and mis/disinformation and hyperparasocial relationships and waste of energy and water are actually such a net negative that it fundamentally undermines infrastructure and society, thus raising the necessary profit margin too high for such legit use cases to be workable in a now broken economic system.

            • Ek-Hou-Van-Braai@piefed.socialOP
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              4
              ·
              1 day ago

              Time will tell how much was just hype, and how much actually had merit. I think it will go the way of the .com bubble.

              LOTS of uses for the internet came out of it, but it was still overhyped

                • Ek-Hou-Van-Braai@piefed.socialOP
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  4
                  ·
                  24 hours ago

                  Fair enough.

                  The dot-com bubble (late 1990s–2000) was when investors massively overvalued internet-related companies just because they had “.com” in their name, even if they had no profits or solid business plans. It burst in 2000, wiping out trillions in value.

                  The “Internet hype” bubble popped. But the Internet still has many valid uses.

    • ☂️-@lemmy.ml
      link
      fedilink
      arrow-up
      7
      arrow-down
      1
      ·
      21 hours ago

      i see a silver lining.

      i love IT but hate IT jobs, here’s hoping techbros just fucking destroy themselves…

    • SubArcticTundra@lemmy.ml
      link
      fedilink
      arrow-up
      8
      arrow-down
      1
      ·
      22 hours ago

      It’s a nice idea, but private business cannot be trusted to do this right, we’re seeing how to do it wrong, live before our eyes.

      You’re right. It’s the business model driving technological advancement in the 21st century that’s flawed.

    • I have to disagree that it’s even a nice idea. The “idea” behind AI appears to be wanting a machine that thinks or works for you with (at least) the intelligence of a human being and no will or desires of its own. At its root, this is the same drive behind chattel slavery, which leads to a pretty inescapable conundrum: either AI is illusory marketing BS or it’s the rebirth of one of the worst atrocities history has ever seen. Personally, hard pass on either one.

  • bridgeenjoyer@sh.itjust.works
    link
    fedilink
    arrow-up
    36
    arrow-down
    2
    ·
    24 hours ago

    It’s true. We can have a nuanced view. I’m just so fucking sick of the paid-off media hyping this shit, and normies thinking it’s the best thing ever when they know NOTHING about it. And the absolute blind trust and corpo worship make me physically ill.

    • Honytawk@lemmy.zip
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      6
      ·
      20 hours ago

      Nuance is the thing.

      Thinking AI is the devil, will kill your grandma, and will shit in your shoes is just as dumb as thinking AI is the solution to every problem, will take over the world, and will become our overlord.

      The truth is, like always, somewhere in between.

  • GregorGizeh@lemmy.zip
    link
    fedilink
    arrow-up
    25
    arrow-down
    1
    ·
    edit-2
    24 hours ago

    I don’t hate the concept as is, I hate how it is being marketed and shoved everywhere and into everything by sheer hype and the need for returns on the absurd amounts of money that were thrown at it.

    Companies use it to justify layoffs, create cheap vibed-up products, and delegate responsibilities to a computer program that is absolutely not sentient or intelligent. Not to mention the colossal amount of natural and financial resources being thrown down this drain.

    I read a great summary yesterday somewhere on here that essentially said: “they took a type of computer model made to give answers to very specific questions it has been trained on, and then trained it on everything to make a generalist”. Except that doesn’t work: the broader the spectrum a model covers, the less accurate it will be.

    Identifying skin cancer? Perfect tool for the job.

    Giving drones the go ahead on an ambiguous target? Providing psychological care to people in distress? FUCK NO.