• utopiah@lemmy.world

    (pasting a Mastodon post I wrote a few days ago about StackOverflow, but IMHO it applies to Wikipedia too)

    "AI, as in the current LLM hype, is not just pointless but rather harmful epistemologically speaking.

    It’s a big word, so let me unpack the idea with one example:

    • StackOverflow, or SO for short.

    So SO is cratering in popularity. Maybe it’s related to the LLM craze, maybe not, but in practice fewer and fewer people are using SO.

    SO is basically a software developer social network that goes like this:

    • hey I have this problem, I tried this and it didn’t work, what can I do?
    • well (sometimes condescendingly) it works like this, this is what worked for me, and here is why

    then people discuss via comments, answers, votes, etc., until, hopefully, the most appropriate (which does not mean “correct”) answer rises to the top.

    The next person with the same, or similar enough, problem gets to try right away what might work.
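
    To make that loop concrete, here is a toy sketch (everything in it is made up; real SO ranking also weighs acceptance, edits, etc.):

    ```python
    # Toy model of the SO loop: answers compete, votes decide what surfaces first.
    from dataclasses import dataclass, field

    @dataclass
    class Answer:
        text: str
        votes: int = 0

    @dataclass
    class Question:
        title: str
        answers: list[Answer] = field(default_factory=list)

        def top_answer(self):
            # "Most appropriate" here just means most upvoted, not verified correct.
            return max(self.answers, default=None, key=lambda a: a.votes)

    q = Question("Why does my build fail?")
    q.answers.append(Answer("Clear the cache.", votes=3))
    q.answers.append(Answer("Pin the dependency version; here is why...", votes=17))
    print(q.top_answer().text)  # the next person tries this one first
    ```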

    SO is very efficient in that sense but sometimes the tone itself can be negative, even toxic.

    Sometimes the person asking did not bother to search much, sometimes they clearly have no grasp of the problem, so replies can be terse, if not worse.

    Yet the content itself is often correct in the sense that it does solve the problem.

    So SO in a way is the pinnacle of “technically right” yet being an ass about it.

    Meanwhile, what if you could get roughly the same mapping between a problem and its solution, but in a nice, even sycophantic, manner?

    Of course the switch will happen.

    That’s nice, right?.. right?!

    It is. For a bit.

    It’s actually REALLY nice.

    Until the “thing” you “discuss” with has maybe one KPI: keeping you engaged (as its owner gets paid per interaction), regardless of how usable (let’s not even say true or correct) its answer is.

    That’s a deep problem because that thing does not learn.

    It has no learning capability. It’s not just “a bit slow” or “dumb” but rather it does not learn, at all.

    It gets updated with a new dataset, fine-tuned, etc., but there is no action that leads to the invalidation of a hypothesis, the generation of a novel one, and then a safe environment set up to test it within (that’s basically what learning is).
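
    To spell out that loop, a minimal sketch (run_in_sandbox is a made-up stand-in for whatever safe test environment you set up, nothing more):

    ```python
    # Learning as a hypothesis-test loop: guess, test safely, invalidate, retry.
    import random

    def run_in_sandbox(hypothesis: str) -> bool:
        """Made-up stand-in: try the idea in a safe, isolated environment."""
        return random.random() < 0.3  # pretend roughly 30% of guesses pan out

    def learn(hypotheses: list[str]):
        for h in hypotheses:
            if run_in_sandbox(h):  # test the hypothesis
                return h           # keep what survived the test
            # invalidated -> generate/try the next hypothesis
        return None

    # A static model has no such loop: same weights in, same answers out,
    # until someone retrains it on a new dataset.
    print(learn(["clear the cache", "pin the version", "bump the runtime"]))
    ```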

    So… you sit there until the LLM gets updated, but… updated with what? Now that fewer and fewer people bother updating your source (namely SO), how is your “thing” going to learn, sorry, to get updated, without new contributions?

    Now, if we step back and look not at the individual level but at the collective level, we can see how short-termist the whole endeavor is.

    Yes, it might help some, even a lot of, people to “vile code”, sorry I mean “vibe code”, their way out of a problem, but if:

    • they, the individual,
    • it, the model,
    • we, society,

    do not contribute back to the dataset to upgrade from…

    well I guess we are going faster right now, for some, but overall we will inexorably slow down.

    So yes, epistemologically, we are slowing down, if not worse.

    Anyway, I’m back on SO, trying to actually understand a problem. Trying to actually learn from my “bad” situation and, rather than randomly trying the statistically most likely solution, genuinely understand WHY I got there in the first place.

    I’ll share my answer back on SO, hoping to help others.

    Don’t just “use” a tool; think, genuinely. It’s not just fun, it’s also liberating.

    Literally.

    Don’t give away your autonomy for a quick fix, you’ll get stuck."

    originally on https://mastodon.pirateparty.be/@utopiah/115315866570543792

    • amzd@lemmy.world

      Most importantly, the pipeline from finding a question on SO that you also have to answering that question after doing some more research is now completely derailed: if you ask an AI a question and it doesn’t have a good answer, you have no way to contribute your eventual solution back to the problem.

    • ThirdConsul@lemmy.ml

      I honestly think that LLMs will result in no progress ever being made in computer science.

      Most past inventions and improvements were made out of necessity, because of how sucky computers are and how unpleasant it is to work with them (we call the results “abstraction layers”). And it was mostly done on companies’ dime.

      Now companies will prefer to produce slop (even more), because they hope to automate slop production.

      • I3lackshirts94@lemmy.world

        As an expert in my engineering field, I would agree. LLMs have been a great tool for my job, making me better at technical writing or getting me over the hump of coding something every now and then. That’s where I see the future for ChatGPT/AI LLMs: providing a tool that can help people broaden their skills.

        There is no future in it for the expertise and depth of understanding that would be required to make progress in any field, unless it is specifically trained and guided. I do not trust it with anything highly advanced or technical, as I feel I start to teach it.

    • NotMyOldRedditName@lemmy.world

      Maybe SO should run everyone’s answers through an LLM and revoke any points a person gets for a condescending answer, even if it’s accepted.

      Give a warning and suggestions to better meet community guidelines.

      It can be very toxic there.
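
      A rough sketch of what that could look like (llm_tone_check is hypothetical, not a real StackOverflow or model API):

      ```python
      # Hypothetical moderation pass; llm_tone_check stands in for whatever
      # model would actually rate the tone of an answer.
      def llm_tone_check(answer_text: str) -> bool:
          """Made-up stand-in: return True if the answer reads as condescending."""
          lowered = answer_text.lower()
          return "obviously" in lowered or "just google" in lowered

      def moderate(answer_text: str, points: int) -> int:
          if llm_tone_check(answer_text):
              # Warn with a concrete suggestion instead of only punishing.
              print("Warning: rephrase to meet community guidelines;"
                    " state the fix without commenting on the asker.")
              return 0  # revoke the points, even if the answer was accepted
          return points

      print(moderate("Obviously you should just google it.", 15))  # -> 0
      ```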

      Edit: I love the downvotes here. OP: AI is going to destroy the sources of truth and knowledge, in part because people stopped going to those sources because people were toxic at the sources. People: but I’ll downvote suggestions that could maybe reduce that toxicity, while having no actual impact on the answers given.