- cross-posted to:
- technology@beehaw.org
cross-posted from: https://infosec.pub/post/36262288
Malicious payloads stored on Ethereum and BNB blockchains are immune to takedowns.
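For context on why on-chain payloads are so hard to remove: contract data is replicated by every node in the network and can be read through any public JSON-RPC endpoint, so there is no single server to seize or send a takedown notice to. A minimal, read-only sketch of that property, assuming a hypothetical RPC URL, contract address, and function selector (none of them from the article):

```python
# Minimal sketch of why on-chain data resists takedown: the same read-only
# eth_call works against *any* Ethereum node, so blocking one RPC provider
# accomplishes nothing. RPC_URL, CONTRACT, and the selector are placeholders.
import json
import urllib.request

RPC_URL = "https://example-eth-node.invalid"  # hypothetical public node
CONTRACT = "0x0000000000000000000000000000000000000000"  # hypothetical address

def eth_call(to: str, data: str) -> str:
    """Standard read-only contract call via the eth_call JSON-RPC method."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [{"to": to, "data": data}, "latest"],
    }).encode()
    req = urllib.request.Request(
        RPC_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# Whatever bytes the contract returns are served by every node on the network;
# switching to a different endpoint is a one-line change.
blob_hex = eth_call(CONTRACT, "0x2d883a73")  # hypothetical function selector
```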
Web 3.0 has always been a joke. AI has more actual uses.
They are both useful and both jokes.
Depends what exactly we are talking about.
Utility is born of necessity, and it’s true that every joke needs a punchline.
AI has no use. It only subtracts value and creates liabilities.
AI != chatbots
Just saying.
I think there's a point where you have to realize the topic of discussion is LLMs like ChatGPT, and that point was around the time we compared it to Web 3.0, something people hate and associate with tech bros and evil corporations.
The meaning of words changes based on context.
There is a point when one can just admit they are wrong, or twist words to convince themselves they were right.
wow thanks for that /s
I’m no AI fan by any means, but it’s really good at pointing you in a direction, or rather, introducing you to topics you didn’t know how to start researching.
I often find myself asking: “Hey AI, I want to do this very specific thing but I don’t really know what it’s called, can you help me?” And sure enough I get a starting point, so I can close that down and search on my own.
Otherwise, trying to learn anything in depth there is just a footgun.
^(edit: typo)
Unfortunately, an LLM lies about 1 in 5 to 1 in 10 times (80% to 90% accuracy), with a hard limit proven in OpenAI and DeepMind research papers, which state that even with infinite power and resources it would never approach human-level language accuracy. On top of that, the model is trained on human inputs, which are themselves flawed, so you multiply in an average person’s rate of being wrong.
In other words, you’re better off browsing forums and asking people, or finding books on the subject, because the AI is full of shit and you’re going to be one of those idiot sloppers everybody makes fun of: you won’t know jack shit and you’ll be confidently incorrect.
They just explained how to use AI in a way where “truth” isn’t relevant.
And I explained why that makes them a moron.
How would I search for something when I don’t know what it’s called? As I explained, the AI is just responsible for telling me “hey, this thing X exists”, and after that I go look it up on my own.
Why am I a moron? Isn’t it the same as asking another person and then doing the heavy lifting yourself?
^(edit: typo)