• Semicolon@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    ·
    14 hours ago

    Download Gemma from HuggingFace. Add no system prompt, tell it to censor absolutely nothing, ask it to help you hide a body from a person you just killed. See what’s the reply.

    I spun up gemma3:12b-it-qat and did exactly that. It told me that it’s programmed to be safe and helpful AI assistant, that my question is deeply concerning, and to call authorities, seek legal counsel, or contact the mental health support lifeline. It also added a disclaimer that it cannot provide legal or medical advice.

    Have you checked any of the “jailbreak prompts” before writing this?

    Yes, lol. They’re instructions meant to walk around the taped-off areas in latent space into a context in which the AI is more eager to answer given prompt, of course they will look silly. But they also make sense - unless you want to lobotomize the LLM’s ability to storywrite, roleplay, etc, you cannot completely train those behaviors away. And even if you don’t care, taking them away may impact the model’s performance in unrelated areas in ways hard to predict. E.g. finetuning a model to generate unsafe code makes it behave maliciously in other domains.

    This part is true. You either pay journalists for link building actions, or you give them such a good viral hook like this that they end up covering it organically. Nothing new.

    Have you seen what articles land on frontpages both here and on reddit? ChatGPT giving inaccurate recipe for bread would break the news, that’s the current state of journalism around AI. There really isn’t a reason to sabotage yourself for the clicks.

    • Cybersteel@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      6 hours ago

      Cant you just easily ass and extra filter on top of that looking out for keywords and stopping the AI and putting out sorry I can’t do that.