• Swedneck@discuss.tchncs.de · 1 year ago

    it’s kinda hilarious to me, because one of the FIRST things AI researchers did was get models to identify things and output answers together with a confidence for each potential ID, and now we’ve somehow regressed from that point
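
    (for contrast, the classic setup literally emits a softmax confidence per label. Tiny sketch with made-up labels and logits:)

    ```python
    # Classic classifier output: one confidence score per candidate label.
    import math

    labels = ["cat", "dog", "toaster"]   # hypothetical classes
    logits = [3.1, 0.4, -2.0]            # hypothetical raw model scores

    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    for label, e in zip(labels, exps):
        print(f"{label}: {e / total:.1%}")  # softmax confidences, sum to 100%
    ```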

    • did we really regress from that?

      I mean, giving a confidence for recognizing a certain object in a picture is relatively straightforward.

      But LLMs put words together by how likely they are to belong together given your input (terribly oversimplified). The confidence behind that has no direct relation to how likely the statements made are to be true. I remember an example where someone made ChatGPT say that 2+2 equals 5 because his wife said so. So ChatGPT was confident that something is right when the wife says it, simply because it considers those words likely to belong together.
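
      (to make that concrete, here’s a rough, untested sketch using GPT-2 via the Hugging Face transformers library. I make no claim about which digit GPT-2 actually favors for this prompt; the point is that the “confidence” is a probability over next tokens given the context, not a probability that the statement is true:)

      ```python
      # Sketch: an LLM's "confidence" is a distribution over next tokens
      # given the context, not over whether the statement is correct.
      import torch
      from transformers import AutoTokenizer, AutoModelForCausalLM

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      prompt = "My wife says 2+2=5, and my wife is always right. So 2+2="
      ids = tok(prompt, return_tensors="pt").input_ids
      with torch.no_grad():
          next_logits = model(ids).logits[0, -1]  # logits for the next token
      probs = next_logits.softmax(dim=-1)

      for digit in ["4", "5"]:  # compare the two candidate continuations
          tid = tok.encode(digit)[0]
          print(digit, f"{probs[tid].item():.1%}")
      ```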

        • 𝕽𝖔𝖔𝖙𝖎𝖊𝖘𝖙@lemmy.world · 1 year ago

          Gödel numbers are typically associated with formal mathematical statements, and there isn’t a formal proof for 2+2=5 in standard arithmetic. However, if you’re referring to a non-standard or humorous context, please provide more details.

          • metaStatic@kbin.social · 1 year ago

            Of course I don’t know enough about the actual proof for it to be anything but a joke, but there are infinitely many numbers, so there should be infinitely many proofs.

            There are also meme proofs out there that I assume could be given a Gödel number easily enough.
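
            (the numbering itself really is mechanical: any symbol string gets a unique integer, true statement or not. A toy sketch with an arbitrary made-up symbol table, nothing like Gödel’s actual encoding:)

            ```python
            # Toy Gödel numbering: map each symbol to a code, then encode the
            # string as a product of prime powers. Decoding = factoring.
            PRIMES = [2, 3, 5, 7, 11, 13, 17]                 # enough for short strings
            CODES = {"2": 1, "+": 2, "=": 3, "4": 4, "5": 5}  # arbitrary symbol codes

            def godel_number(formula: str) -> int:
                n = 1
                for i, ch in enumerate(formula):
                    n *= PRIMES[i] ** CODES[ch]
                return n

            print(godel_number("2+2=5"))  # 2^1 * 3^2 * 5^1 * 7^3 * 11^5
            ```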