• businessfish@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    13
    ·
    edit-2
    21 hours ago

    complete insanity that the browser/agent doesnt even ask for user confirmation before interpreting web pages as instructions. this is just AI XSS, just mental that the AI was configured to trust and execute instructions from unsanitized web content. how was this not one of the first problems raised during development prior to release?

    • jrandomhacker@beehaw.org
      link
      fedilink
      arrow-up
      12
      ·
      21 hours ago

      LLMs fundamentally don’t/can’t have “sanitized” or “unsanitized” content - it’s all just tokens in the end. “Prompt Injection” is even a bit too generous of a term, I think.

      • businessfish@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        2
        ·
        21 hours ago

        sure but one would hope that if the agent is interpreting content from the web as instructions that there would be literally any security measure between the webpage and the agent - whether that’s some input sanitization, explicit user confirmation, or prohibiting the agent from interpreting web pages as instructions at all.