• 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: September 5th, 2023

  • Full disclosure - my background is in operations (think IT), not AI research, so some of this might be wrong.

    What’s marketed as AI is something called a large language model. This distinction is important because AI implies intelligence - whereas an LLM is something else. At a high level, LLMs use something called “tokens” to break natural language apart into elements a machine can work with, and then recombine those tokens to “create” something new. When an LLM is generating output, it does not know what it is saying - it only knows which token statistically comes after the token(s) it has generated already.

    So, to answer your question: an AI can hallucinate because it does not know the answer - it’s using advanced math to know that the period goes at the end of the sentence, not in the middle.
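    To make the “which token statistically comes next” idea concrete, here’s a toy sketch. This is not how a real LLM works - real models use learned weights over subword tokens, not a hand-written lookup table - but the made-up bigram counts below show the basic mechanic: generation is just repeatedly sampling the next token from a probability distribution, with no notion of whether the result is true.

    ```python
    import random

    # Hypothetical bigram table (illustration only): for each token, the
    # tokens that have followed it, with counts used as sampling weights.
    bigram_counts = {
        "the": {"cat": 3, "dog": 1},
        "cat": {"sat": 2, "ran": 1},
        "dog": {"ran": 2},
        "sat": {".": 4},
        "ran": {".": 3},
    }

    def next_token(token):
        """Sample the next token, weighted by how often it followed `token`."""
        followers = bigram_counts[token]
        choices = list(followers)
        weights = [followers[t] for t in choices]
        return random.choices(choices, weights=weights)[0]

    def generate(start, max_len=10):
        """Chain next-token picks until a period or the length cap."""
        out = [start]
        while out[-1] != "." and len(out) < max_len:
            out.append(next_token(out[-1]))
        return " ".join(out)

    print(generate("the"))
    ```

    The model above will reliably put the period at the end - not because it knows what a sentence is, but because “.” only ever appears as a statistically likely follower of certain tokens.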

  • I’ll be honest, unless you have been using Linux for…a long time, or your job requires you to manage servers, you’re probably not in that last category.

    If you enrolled in the Windows Insider/test doohickey, then you might want to look into the rolling release distros. If not, something with a standard release cadence will be better.

    Myself? All of the servers I manage have no desktop environment (core infrastructure does not need graphics). But if I’m on a workstation? LMDE - because I care about the graphics getting out of my way so I can do my job.