• 3 Posts
  • 112 Comments
Joined 2 years ago
Cake day: June 16th, 2023


  • This is… Well, not entirely convincing.

    So, say the computational cost triples. Intelligent ways to mitigate that would include purpose-built hardware to optimize these processes. That’s a big lift, but the reward would be calculable, and the ROI would be significant enough that there’s no way they won’t pursue it. I think it’s a realistically conquerable problem.

    And so what if it doesn’t know? Existing solutions will scour the Internet on command, and that lookup, given a sufficiently high level of uncertainty, could be automated (rough sketch at the end of this comment).

    Combine that Internet-access capability with a certainty calculation, assume hardware optimization arrives, and these problems, while truly significant, seem solvable.

    That said, the solution will most likely make our world uninhabitable, so that’s neat.

    My concern on top of this is that they will not exhaust funding even if private investment goes dry. The states (the US, China) won’t stop funding until they reach total dominance.

    We’re so screwed, guys.
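
    By “automated,” I mean something like the loop below: a minimal Python sketch of uncertainty-gated lookup. `model_answer`, `web_search`, and the confidence threshold are made-up placeholders of my own, not any real API.

```python
from typing import List, Optional, Tuple

# Hypothetical cutoff: below this, the model falls back to a web search.
CONFIDENCE_THRESHOLD = 0.8


def model_answer(question: str, context: Optional[List[str]] = None) -> Tuple[str, float]:
    """Placeholder for an LLM call returning (answer, self-reported confidence in [0, 1])."""
    raise NotImplementedError


def web_search(query: str, max_results: int = 5) -> List[str]:
    """Placeholder for a web search returning text snippets."""
    raise NotImplementedError


def answer(question: str) -> str:
    draft, confidence = model_answer(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        # The model claims it knows; return the direct answer.
        return draft
    # Low confidence: pull fresh sources and answer again with them in context.
    sources = web_search(question)
    revised, _ = model_answer(question, context=sources)
    return revised
```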

  • You’re not wrong, but I don’t think you’re 100% correct either. The human mind synthesizes reasoning with a neural network, using neurons to make connections and build a profoundly complex statistical model. LLMs do essentially the same thing, and they do it poorly in comparison. They don’t have the natural optimizations we have, so they kinda suck at it for now, but dismissing their current capabilities entirely is probably a mistake.

    I’m not an apologist, to be clear. There is a ton of ethical and moral baggage tied up with how they were made and how they’re used, and it needs to be addressed, and I think we’re only a few clever optimizations away from a threat.