We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • yesman@lemmy.worldOP
    link
    fedilink
    English
    arrow-up
    0
    arrow-down
    1
    ·
    edit-2
    10 months ago

    Ethical theories and the concept of free will depend on agency and consciousness. Things as you point out, LLMs don’t have. Maybe we’ve got it all twisted?

    I’m not anthropomorphising ChatGPT to suggest that it’s like us, but rather that we are like it.

    Edit: “stochastic parrot” is an incredibly clever phrase. Did you come up with that yourself or did the irony of repeating it escape you?