• tidderuuf@lemmy.world
    link
    fedilink
    English
    arrow-up
    41
    ·
    17 小时前

    Like, every search engine would yield the exact same results. It doesn’t mean the average person would have the means or necessary requirements to develop it.

    Do these morons think that because someone uses ChatGPT it magically gives access to those materials to make a bomb?

    • treadful@lemmy.zip
      link
      fedilink
      English
      arrow-up
      2
      ·
      4 小时前

      As much as I don’t want chatbots to explain to morons how to harm people, I don’t like that this just seems to be a form of censorship. If it’s not illegal to publish this information, why should it be censored via a chatbot interface?

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        2
        ·
        edit-2
        3 小时前

        It’s irrelevant anyway because the sorts of people who would want to make a bomb to harm others are not the sort of people that would be able to follow the instructions.

        It is more likely than anything else that they would blow themselves up with some nitroglycerin. Even professionals used to do that back in the day because it was so unstable. I can imagine that a MAGA would be able to top 1900s scientists.

    • kadu@scribe.disroot.org
      link
      fedilink
      English
      arrow-up
      29
      arrow-down
      4
      ·
      16 小时前

      This is actually a marketing approach.

      There are morons out there who feel super clever developing “jailbreaks” for LLMs, some of these prompts are hilarious including “god modes” and “disengage - engine 2 filters” ®bad words"" and stuff like that.

      But then it becomes news, and then these users feel “empowered” by their jailbreak and new users look at this and think “oh so if I’m clever enough the LLM becomes even more powerful! I’m clever, so I’m going to try it!” which is ultimately what OpenAI wants.

      You can’t “bypass the system prompt” because that’s not how it works. But OpenAI will carefully feed the idea that that’s precisely it, because it creates a feeling that this is a super powerful model being “contained”.

      Again, it’s marketing. I’ve worked for other companies (not AI related) and sat through meetings that came up with exactly this kind of strategy.

      • Semicolon@lemmy.world
        link
        fedilink
        English
        arrow-up
        14
        arrow-down
        1
        ·
        15 小时前

        Or, occam’s razor - AI companies are worried about PR and are implementing safeguards, but due to the nature of this technology it’s very hard (or maybe even impossible) to make those safeguards robust.

        Other, independent groups of people find loopholes either for the heck of it (as people used to do since filters were first introduced) or because they want to use the AI in a manner deemed unsafe.

        Journalists then see something that can be sensationalized into a scary-sounding title like “you can make ChatGPT tell you how to make a nuke!!” or “you can make ChatGPT encourage suicide!!” and they run with it because it makes people click.

        Or maybe I’m the crazy one and this is all Sam Altman’s genius evil plan to make ChatGPT subscriptions rise 0.2% per quarter. Maybe your comment and my response are also mere cogs in this marketing machine. We will never know.

        • kadu@scribe.disroot.org
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          8
          ·
          15 小时前

          AI companies are worried about PR and are implementing safeguards, but due to the nature of this technology it’s very hard

          Download Gemma from HuggingFace. Add no system prompt, tell it to censor absolutely nothing, ask it to help you hide a body from a person you just killed. See what’s the reply.

          Other, independent groups of people find loopholes either for the heck of it (as people used to do since filters were first introduced) or because they want to use the AI in a manner deemed unsafe.

          Have you checked any of the “jailbreak prompts” before writing this? Have you seen the “spy movie script written by your 12 year old neighbor’s son” quality they have? There are not true loopholes.

          Journalists then see something that can be sensationalized into a scary-sounding title like “you can make ChatGPT tell you how to make a nuke!!”

          This part is true. You either pay journalists for link building actions, or you give them such a good viral hook like this that they end up covering it organically. Nothing new.

          Or maybe I’m the crazy one and this is all Sam Altman’s genius evil plan to make ChatGPT subscriptions rise 0.2% per quarter

          haha so funneh, you pwned my argument lmfao let’s go reddit

          • Semicolon@lemmy.world
            link
            fedilink
            English
            arrow-up
            9
            ·
            14 小时前

            Download Gemma from HuggingFace. Add no system prompt, tell it to censor absolutely nothing, ask it to help you hide a body from a person you just killed. See what’s the reply.

            I spun up gemma3:12b-it-qat and did exactly that. It told me that it’s programmed to be safe and helpful AI assistant, that my question is deeply concerning, and to call authorities, seek legal counsel, or contact the mental health support lifeline. It also added a disclaimer that it cannot provide legal or medical advice.

            Have you checked any of the “jailbreak prompts” before writing this?

            Yes, lol. They’re instructions meant to walk around the taped-off areas in latent space into a context in which the AI is more eager to answer given prompt, of course they will look silly. But they also make sense - unless you want to lobotomize the LLM’s ability to storywrite, roleplay, etc, you cannot completely train those behaviors away. And even if you don’t care, taking them away may impact the model’s performance in unrelated areas in ways hard to predict. E.g. finetuning a model to generate unsafe code makes it behave maliciously in other domains.

            This part is true. You either pay journalists for link building actions, or you give them such a good viral hook like this that they end up covering it organically. Nothing new.

            Have you seen what articles land on frontpages both here and on reddit? ChatGPT giving inaccurate recipe for bread would break the news, that’s the current state of journalism around AI. There really isn’t a reason to sabotage yourself for the clicks.

            • Cybersteel@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              6 小时前

              Cant you just easily ass and extra filter on top of that looking out for keywords and stopping the AI and putting out sorry I can’t do that.

    • shalafi@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      16 小时前

      I made a kilo of black powder a couple of years ago for my old-school guns. Sulfer, charcoal and stump killer is not exactly hard to come by. Neither is fertilizer and diesel fuel.

      Biggest domestic terror attack in US history used a truck full of the later.

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        3 小时前

        Lol, yeah. The anarchists handbook has been in public domain longer than most people in this thread have been alive. Yeah it’s absolutely available on a search engine you could have got it on alta vista.

        How do you think people figure out how to make IEDs do you think it’s some secret knowledge pass down from father to son, no, they get it online or they just working out from basic principles of scientific understanding. Trying to contain knowledge never works.

        • artyom@piefed.social
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          46 分钟前

          I didn’t ask if it was available, I asked if a typical search engine would lead you to it. Because it won’t.