tal@olio.cafe

In the broad sense that understanding of spatial relationships and object state is just limited in general with LLMs, sure; that's the nature of the system.

If you mean that models simply don't have a training corpus that incorporates adequate erotic literature, I suppose that it depends on what one is up to and where one sets the bar. No generative AI in 2025 is going to match a human author.

If you're running locally, where many people use a relatively short context size on systems with limited VRAM, I'd suggest a long context length for generating erotic literature involving bondage implements like chastity cages. Otherwise, once information about the "on/off" status of the implement passes out of the context window, the LLM no longer has any information about the state of the implement, which can lead to it generating text incompatible with that state. If you can't afford the VRAM for that, you might alter the story so that a character using such an item never changes its state over the lifetime of the story, if that works for you. Or, whenever the status of the item changes, manually update its status in the system prompt/character info/world info/lorebook/whatever your frontend calls its mechanism for injecting static text into the context at each prompt, as in the sketch below.

My own feeling is that, relative to current systems, there's probably room for considerably more sophisticated frontend handling of objects: tracking their state and injecting that state efficiently into the system prompt. The text of a story is not an efficient representation of world state. Like, maybe use an LLM itself to summarize world state and then inject that summary into the context. Or, for games written specifically to run atop an LLM, have some sort of JavaScript module that runs in a sandbox on each prompt and response, updates its world state, and dynamically generates text to insert into the context.

I expect that game developers will sort a lot of this out and develop conventions, and my guess is that the LLM itself probably isn't the limiting factor here today, but rather how well we generate the context text we feed it.