Clarification:

Given that AI progress will never stop, and that AI will eventually be able to pass fiction off as truth, it will become impossible to trust any article (or at least not all of them), and we won’t even be able to tell exactly what’s true and what’s fabricated.

So, what if people from different countries and regions exchanged contacts here and talked about what’s really happening in their countries, what laws are being passed, and so on, and also shared their well-thought-out theories and ideas?

If my idea works, why not wake up as many people as possible to the fact that, in the future, only methods like this will be able to separate reality from falsehood?

I’m also interested in your ideas, as I’m not much of an expert.

  • Alsjemenou@lemy.nl · 1 day ago

    Like it has always been done. This question is such a weird one to me. The problem isn’t that AI is making shit up; people make shit up all the time. They lie, cheat, make mistakes, are dumb, etc. The problem is that we don’t know if we can still trust what we trusted before. But the solution has always been simply trusting certain people to tell you the truth: scientists, journalists, teachers, publishers, etc. This has never not been the case. We can’t trust AI not to dream up answers. That’s just how it is, and that’s not a problem that needs solving. It’s a fundamental part of current LLM technology.

    Maybe in the future we’ll find something that makes LLMs more trustworthy, but as of yet, that’s simply not the case. So I don’t see a problem here. If you want to know the truth about something, you’re going to have to look at sources and do some digging until you find something you trust; then that’s what’s real to you.

    And unless you want some deep philosophical discussion on the nature of Truth and how to arrive at Reality, this is how it works and how it has always worked.