• @A_Very_Big_Fan@lemmy.world
    -3 · 3 months ago

    it can only produce an algorithmically-derived mélange of its source data recomposited in novel forms

    Right, it produces derivative data. Not copyrighted material.

    By itself, without any safeguards, it absolutely could output copyrighted data (albeit probably not perfectly, but for copyright purposes that’s irrelevant as long as it serves as a substitute). Any model that actually does that should be punished, but OpenAI’s models can’t do that.

    Hammers aren’t bad because they can be used for bludgeoning, and if we have a hammer that somehow detects that it’s being used for murder and then evaporates, calling it bad is even more ridiculous.

    • gregorum
      2 · 3 months ago (edited)

      Some safeguards have been added that curtail certain direct misbehavior, but it is still capable - by your own admission - of doing it. And it still profits from the unlicensed use of copyrighted works by using such material for its training data, because what it is producing is not a new and unique creative work; it is a composite of copyrighted work. That is not the same thing.

      And if you are comparing LLMs and hammers, you’re just proving how you fundamentally misunderstand what LLMs are and how they work. It’s a false equivalence.

      • @A_Very_Big_Fan@lemmy.world
        -4 · 3 months ago (edited)

        but it is still capable - by your own admission - of doing it

        And if you are comparing LLMs and hammers, you’re just proving how you fundamentally misunderstand what LLMs are and how they work

        And a regular hammer is capable of being used for murder, which makes it ridiculous to call a hammer that evaporates before it can be used for murder “unethical.” You’re deliberately missing the point.

        And it still profits from the unlicensed use of copyrighted works by using such material for its training data

        I just don’t buy this reasoning. If I look at paintings of the Eiffel Tower and then sell my own painting of the building, I’m not violating the copyright of any of the original painters unless what I paint is so similar to one of theirs that it stops being fair use and becomes infringement.

        it is a composite of copyrighted work

        It’s a diffusion model, not a composite. But even if the outputs were composites, I’m allowed to shred a magazine and make a composite image of something else out of the pieces. It’s fair use until I use those pieces to recreate someone else’s copyrighted image.
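
        Roughly, here’s what the generation loop looks like in Python (a simplified sketch; noise_predictor and the update rule are made-up placeholders, not any real library’s API): the model starts from random noise and denoises it step by step, guided only by its learned weights and the prompt.

            import torch

            def generate(noise_predictor, prompt_embedding, steps=50, shape=(1, 3, 64, 64)):
                # Generation starts from pure Gaussian noise; no stored training
                # image is loaded, cut up, or pasted together.
                x = torch.randn(shape)
                for t in reversed(range(steps)):
                    # the network estimates the noise still present at step t,
                    # conditioned on the prompt
                    predicted_noise = noise_predictor(x, t, prompt_embedding)
                    # remove a fraction of that estimate (heavily simplified update rule)
                    x = x - predicted_noise / steps
                return x  # the finished image, synthesized from noise and learned weights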

        • gregorum
          2 · 3 months ago

          Lol… I hope you didn’t sprain something with all those mental gymnastics. In the meantime, perhaps you should educate yourself a bit more on AI, LLMs, and maybe just a little bit on art.

            • gregorum
              2 · 3 months ago (edited)

              OK, if you think what you just said made sense, then you either didn’t read the link you just posted or you clearly didn’t understand it. And you certainly have no clue what you’re talking about.

              But you’re certainly helping to make my point for me

              • @A_Very_Big_Fan@lemmy.world
                -3 · 3 months ago (edited)

                AI, unlike a human, cannot create unique works of art. It can only produce an algorithmically-derived mélange of its source data recomposited in novel forms

                Find me a single sentence in that entire article that suggests AI art is a composite of source data.

                You can’t, because how it actually works is wildly different than how you want to believe it works.

                • gregorum
                  4 · 3 months ago (edited)

                  The entire article explains that’s how it works. I’m sorry it’s just over your head.

                  • @A_Very_Big_Fan@lemmy.world
                    -2 · 3 months ago (edited)

                    Mhm, I’m sure that’s why you couldn’t find a single sentence about compositing images

                    DALL·E 2 uses a diffusion model conditioned on CLIP image embeddings, which, during inference, are generated from CLIP text embeddings by a prior model.
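
                    That sentence describes a two-stage pipeline, roughly like this in Python (every name below is a hypothetical placeholder for illustration, not OpenAI’s actual code or API):

                        def generate_image(prompt, clip, prior, decoder, steps=64):
                            # 1. embed the prompt with CLIP's text encoder
                            text_embedding = clip.encode_text(prompt)
                            # 2. the prior maps that text embedding to a CLIP image embedding
                            image_embedding = prior.sample(text_embedding)
                            # 3. the diffusion decoder denoises random noise into pixels,
                            #    conditioned on that image embedding; no training image is
                            #    retrieved or composited at any point
                            return decoder.sample_from_noise(image_embedding, steps=steps)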

                    You’re either projecting or being dishonest