“The surveillance, theft and death machine recommends more surveillance to balance out the death.”

  • ByteOnBikes@discuss.online · 9 hours ago

    Sadly, local models aren't there yet. I have tech nerds in my company spending $3-10k building their own systems, and they're still not getting the speeds and quality that these subscriptions offer.

    • tal@lemmy.today · 7 hours ago (edited)

      $3-10k…not getting the speeds and quality

      I mean, that’s true. But the hardware that OpenAI is using costs more than that per pop.

      The elephant in the room is that unless the tech nerds you mention are using the hardware for something that keeps it under constant load (which occasionally interacting with a chatbot isn't going to do), it's probably going to be cheaper to share the hardware with others, because sharing keeps the (quite expensive) hardware at a higher utilization rate.
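
      To put rough numbers on the utilization point (every figure below is a made-up assumption for illustration, not a real cost):

      ```python
      # Back-of-envelope: cost per hour of *useful* compute for a box you own.
      # Every number here is an illustrative assumption.
      hardware_cost = 8_000            # assumed build cost, USD
      lifetime_years = 4               # assumed useful life
      hours = lifetime_years * 24 * 365

      base_cost_per_hour = hardware_cost / hours   # ~$0.23/hour

      for utilization in (0.02, 0.25, 0.90):       # lone chatbot user vs. shared serving
          print(f"{utilization:>4.0%} busy -> ~${base_cost_per_hour / utilization:.2f} per useful hour")

      # A box that's busy 2% of the time costs ~45x more per useful hour
      # than the same box kept 90% busy, which is why sharing wins.
      ```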

      I’m also willing to believe that there is some potential for technical improvement. I haven’t been closely following the field, but one thing that I’ll bet is likely technically possible — if people aren’t banging on it already — is redesigning how LLMs work such that they don’t need to be fully loaded into VRAM at any one time.

      Right now, the major limiting factor is the amount of VRAM available on consumer hardware. Models get fully loaded onto a card. That makes for nice, predictable computation times on a query, but it's the equivalent of…oh, having video games limited by needing to load an entire world into the GPU's memory. I would bet that there are very substantial inefficiencies there.

      The largest consumer GPU you're going to get has something like 24GB of VRAM, and some workloads can be split across multiple cards to pool the VRAM of several GPUs.

      You can partially mitigate that with something like a 128GB Ryzen AI Max 395+ processor-based system. But you’re still not going to be able to stuff the largest models into even that.

      My guess is that it is probably possible to segment the network's edge weights into "chunks" that are unlikely to be needed at the same time, keep the unimportant chunks unloaded, and skip running them. One would need a mechanism to identify when a chunk likely does become important, and swap chunks in and out. That will make query times less predictable, but also probably a lot more memory-efficient.
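
      A minimal sketch of what I mean, in PyTorch (the class name and the LRU policy are mine, purely illustrative):

      ```python
      import torch

      class ChunkSwapper:
          """Toy demand-paging for weights: keep at most `budget` chunks in VRAM,
          evict the least-recently-used chunk when a new one is needed."""

          def __init__(self, chunks: dict[str, torch.nn.Module], budget: int):
              self.chunks = chunks            # all chunks start on the CPU side
              self.budget = budget
              self.resident: list[str] = []   # chunk names in VRAM, oldest first

          def fetch(self, name: str) -> torch.nn.Module:
              if name in self.resident:
                  self.resident.remove(name)              # refresh LRU position
              else:
                  if len(self.resident) >= self.budget:
                      victim = self.resident.pop(0)       # evict least-recently-used
                      self.chunks[victim].to("cpu")
                  self.chunks[name].to("cuda")            # swap the needed chunk in
              self.resident.append(name)
              return self.chunks[name]
      ```

      The hard part is the prediction: transfers over PCIe are slow, so you want chunks in VRAM before they're needed, not when.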

      IIRC from my brief skim, some models are already built out of specialized sub-networks, called experts, in what's known as the "MoE" ("Mixture of Experts") architecture. It might be possible to unload some of those experts, though one would need more logic to decide when to include and exclude them, and existing systems are probably not optimal for this:

      kagis

      Yeah, sounds like it:

      https://arxiv.org/html/2502.05370v1

      fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts Serving

      Despite the computational efficiency, MoE models exhibit substantial memory inefficiency during the serving phase. Though certain model parameters remain inactive during inference, they must still reside in GPU memory to allow for potential future activation. Expert offloading [54, 47, 16, 4] has emerged as a promising strategy to address this issue, which predicts inactive experts and transfers them to CPU memory while retaining only the necessary experts in GPU memory, reducing the overall model memory footprint.

    • chicken@lemmy.dbzer0.com · 7 hours ago

      Naturally the commercial systems are going to be strictly better, but the best models I can run on my 3090 have been good enough for me for a couple of years now, and have massively improved over that time. Currently I mostly use qwen3-coder, which is really solid. It just feels so much nicer to use knowing it's private and not being datamined for who knows what.

      • Techlos@lemmy.dbzer0.com · 6 hours ago

        i'm running 2x 3060s (12GB cards; VRAM matters more than clock speed, usually) and if you have the patience to run them, the 30B Qwen models are honestly pretty decent. If you have the ability and data to fine-tune or LoRA them for the task you're doing, they can sometimes exceed zero-shot performance from SOTA subscription models.
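
        for reference, a LoRA pass with the Hugging Face peft/transformers stack looks roughly like this (model id and hyperparameters are placeholders; on 12GB cards you'd quantise the base model first, QLoRA-style):

        ```python
        from transformers import AutoModelForCausalLM, AutoTokenizer
        from peft import LoraConfig, get_peft_model

        base = "Qwen/Qwen3-30B-A3B"   # placeholder id; pick whatever fits your VRAM
        tokenizer = AutoTokenizer.from_pretrained(base)
        model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

        # LoRA trains small low-rank adapters instead of the full weight matrices,
        # which is what makes task-specific tuning feasible on consumer cards.
        config = LoraConfig(
            r=16, lora_alpha=32, lora_dropout=0.05,
            target_modules=["q_proj", "v_proj"],  # attention projections, the usual pick
            task_type="CAUSAL_LM",
        )
        model = get_peft_model(model, config)
        model.print_trainable_parameters()        # typically well under 1% of the total
        ```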

        the real performance gains come from agentifying the language model. With access to wikipedia, arxiv, and a rolling embedding database of prior conversations, the quality of the output shoots way up.
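
        the agent loop itself is simple; here's the skeleton (llm() and the JSON tool protocol are stand-ins for whatever your stack uses):

        ```python
        import json

        def search_wikipedia(q: str) -> str: ...   # stand-in tool implementations
        def search_arxiv(q: str) -> str: ...
        def recall_memory(q: str) -> str: ...

        TOOLS = {"wikipedia": search_wikipedia, "arxiv": search_arxiv,
                 "memory": recall_memory}

        def parse_tool_call(reply: str):
            """Return {"tool": ..., "query": ...} if the reply is a JSON tool call."""
            try:
                call = json.loads(reply)
                return call if call.get("tool") in TOOLS else None
            except json.JSONDecodeError:
                return None

        def agent_turn(llm, user_input: str, max_steps: int = 5) -> str:
            """Let the model call tools until it gives a plain-text final answer."""
            messages = [{"role": "user", "content": user_input}]
            reply = llm(messages)
            for _ in range(max_steps):
                call = parse_tool_call(reply)
                if call is None:
                    return reply                    # no tool requested: done
                messages.append({"role": "tool",
                                 "content": TOOLS[call["tool"]](call["query"])})
                reply = llm(messages)
            return reply
        ```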

          • Techlos@lemmy.dbzer0.com · 2 hours ago (edited)

            this is a good guide on a standard approach to agentic tool use for LLMs

            For conversation history:

            this one is a bit more arbitrary, because it seems like whenever someone figures out a good history model, they found a startup around it, so i had to make it up as i went along.

            First, BERT is used to create an embedding for each input/reply pair, and a timestamp is attached to give it some temporal info.
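
            something like this, with sentence-transformers standing in for the BERT encoder (the model name is just an example):

            ```python
            import time
            from sentence_transformers import SentenceTransformer

            encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any BERT-family model

            def embed_pair(user_input: str, reply: str) -> dict:
                """Embed an input/reply pair and attach a timestamp for temporal info."""
                return {
                    "embedding": encoder.encode(f"{user_input}\n{reply}"),
                    "input_vec": encoder.encode(user_input),   # kept separately for the
                    "reply_vec": encoder.encode(reply),        # magnitude term below
                    "timestamp": time.time(),
                    "input": user_input, "reply": reply,
                    "retrieval_count": 0,
                }
            ```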

            A buffer keeps the last 10 or so input/reply pairs up to a token limit, and periodically summarises the short-term buffer into paragraph summaries that are then embedded; i used an overlap of 2 pairs between each summary. An additional buffer does the same thing with the summaries themselves, creating megasummaries. These get put into a hierarchical database where each summary has a link to whatever generated it. Depending on how patient i'm feeling, i can configure the queries to use either the megasummary embeddings or the summary embeddings for searching the database.
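
            roughly (summarise() stands for an LLM call; the buffer handling is simplified):

            ```python
            OVERLAP = 2        # pairs shared between consecutive summaries
            BUFFER_SIZE = 10   # short-term pairs kept verbatim

            def roll_up(buffer: list[dict], summarise) -> dict:
                """Collapse the short-term buffer into one embedded paragraph summary,
                keeping OVERLAP pairs so no thread gets cut mid-conversation."""
                text = "\n".join(f"{p['input']} -> {p['reply']}" for p in buffer)
                summary = summarise(text)
                node = {"summary": summary,
                        "embedding": encoder.encode(summary),
                        "children": list(buffer)}    # hierarchical link to the sources
                del buffer[:-OVERLAP]                # keep only the overlap
                return node

            # applying roll_up to a buffer of summary nodes in turn
            # produces the megasummaries.
            ```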

            During conversation, the current input as well as the last input/reply pair get fed into a different instruction prompt that says "Create three short sentences that accurately describe the topic, content, and (if mentioned) the date and time that are present in the current conversation". The sentences, along with the input + last input/reply, are embedded and used to find the most relevant items in the chat history based on a weighted metric of 0.9*(1 - cosine_distance(input+reply embedding, history embedding)) + 0.1*(input.norm(2) - reply.norm(2))^2 (the magnitude term worked well when i was doing unsupervised learning, completely arbitrary here), which get added to the system prompt with some instructions on how to treat the included text as prior conversation history with timestamps. I found two approaches that worked well.
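
            the metric in code (assuming the input and reply were also embedded separately, as above):

            ```python
            import numpy as np
            from scipy.spatial.distance import cosine

            def relevance(query_vec: np.ndarray, entry: dict) -> float:
                """0.9 * cosine similarity + 0.1 * squared magnitude difference."""
                similarity = 1.0 - cosine(query_vec, entry["embedding"])
                magnitude = (np.linalg.norm(entry["input_vec"])
                             - np.linalg.norm(entry["reply_vec"])) ** 2
                return 0.9 * similarity + 0.1 * magnitude
            ```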

            For lightweight, predictable behaviour, take the average of the embeddings and randomly select N entries from the database, weighted by how closely they match, using softmax + temperature to control memory exploration.

            For more varied and exploratory recall, use N randomly weighted averages of the embeddings and find the closest match for each. Bit slower because you have way more database hits, but it tends to be much better at tying together relevant information from otherwise unrelated conversations. I prefer the first method for doing research or literature review; the second method is great when you're rubber-ducking an idea with the LLM.
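
            both methods, sketched (the dirichlet draw for the random weights is my choice; any normalised random weighting works):

            ```python
            def softmax(x, temperature=1.0):
                z = np.asarray(x) / temperature
                z = z - z.max()                     # numerical stability
                p = np.exp(z)
                return p / p.sum()

            def recall_simple(query_vecs, db, n=4, temperature=0.5):
                """Method 1: average the query embeddings, sample n entries
                with probability given by softmax of relevance."""
                query = np.mean(query_vecs, axis=0)
                scores = [relevance(query, e) for e in db]
                idx = np.random.choice(len(db), size=n, replace=False,
                                       p=softmax(scores, temperature))
                return [db[i] for i in idx]

            def recall_exploratory(query_vecs, db, n=4):
                """Method 2: n randomly weighted averages, closest match for each.
                More db hits, but ties together otherwise unrelated conversations."""
                hits = []
                for _ in range(n):
                    w = np.random.dirichlet(np.ones(len(query_vecs)))
                    query = np.average(query_vecs, axis=0, weights=w)
                    hits.append(max(db, key=lambda e: relevance(query, e)))
                return hits
            ```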

            An N of 3~5 works pretty well without taking up too much of the context window. Including the most relevant summary as well as the input/reply pairs gives the best general behaviour, striking a good balance between broad recall and detail recall. The stratified summary approach also lets me prune input/reply entries if they don't get accessed much (whenever the db goes above a size limit, a script prunes a few dozen entries based on retrieval counts), while leaving the summary behind to retain some of the information.
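
            the pruning pass is just:

            ```python
            PRUNE_BATCH = 30   # "a few dozen" entries per pass

            def prune(db: list[dict], size_limit: int) -> list[dict]:
                """Drop the least-retrieved input/reply entries once the db outgrows
                its limit; the parent summaries stay behind and keep the gist."""
                if len(db) <= size_limit:
                    return db
                db.sort(key=lambda e: e["retrieval_count"])
                return db[PRUNE_BATCH:]
            ```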

            It works better if you don't use the full context window the model is capable of. Transformer models just aren't that great at needle-in-a-haystack problems, and for the qwen models i've found a context window of 8~10k is the sweet spot for paying attention both to the recalled conversations and to the current conversation.

            A ¿fun? side effect of this method is that using different system prompts for LLM behaviour will affect how the LLM summarises and recalls information, and it's actually quite hard to keep the LLM from developing a "personality", for lack of a better word. My first implementation included the phrase "a vulcan-like attention to formal logic", and within a few days of testing each conversation summary started with "Ship's log", and the model developed a habit of calling me captain, probably a side effect of summaries ending up in the system prompt. It was pretty funny, but not exactly what i was aiming for.

            apologies if this was a bit rambling, just played a huge set last night and i’m on a molly comedown.

            • chicken@lemmy.dbzer0.com · 24 minutes ago

              Nah, that's some great info, though it sounds like a pretty big project to set it all up that way. Would definitely want to hear about it if you ever decide to put your work in a git repo somewhere.