- cross-posted to:
- technology@beehaw.org
cross-posted from: https://infosec.pub/post/36262288
Malicious payloads stored on Ethereum and BNB blockchains are immune to takedowns.
Unfortunately, an LLM lies about 1 in 5 to 1 in 10 times: 80% to 90% accuracy, with a hard limit proven in OpenAI and DeepMind research papers, which state that even with infinite power and resources it would never approach human language accuracy. Add on top of that the fact that the model is trained on human inputs, which are themselves flawed, so you’re multiplying in an average person’s rate of being wrong.
In other words, you’re better off browsing forums and asking people, or finding books on the subject, because the AI is full of shit. You’re going to be one of those idiot sloppers everybody makes fun of: you won’t know jack shit and you’ll be confidently incorrect.
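To put rough numbers on the “multiply the error rates” point above, here is a minimal sketch, assuming the model’s errors and its sources’ errors are independent. The 0.85 and 0.92 figures are made up for illustration; they aren’t from the comment or any cited paper.

```python
# Back-of-the-envelope compounding of error rates (illustrative numbers,
# independence assumed -- neither figure comes from the thread or a paper).
model_accuracy = 0.85    # the "80% to 90% accuracy" claim, taken mid-range
source_accuracy = 0.92   # hypothetical accuracy of the human-written training material

# If an answer is only right when the model is faithful AND its sources were right:
combined_accuracy = model_accuracy * source_accuracy
print(f"combined accuracy: {combined_accuracy:.0%}")  # prints "combined accuracy: 78%"
```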
They just explained how to use AI in a way where “truth” isn’t relevant.
And I explained why that makes them a moron.
How would I search for something when I don’t know what it’s called? As I explained, the AI is only responsible for telling me “hey, this thing X exists”, and after that I go look for it on my own.
Why am I a moron? Isn’t it the same as asking another person and then doing the heavy lifting yourself?
^(edit: typo)
That was your previous example. You had a very specific thing in mind, meaning you knew what to search for from reputable sources. There are tons of ways to discover new, previously unknown things, all of which are better than being a filthy stupid slopper.
“Hey AI, can you please think for me? Please? I need it, idk what to do.”
Jesus, I don’t know who hurt you or why you’re so salty.
I’ll stop the conversation here; funnily enough, the way you keep repeating the same thing in different ways is just like AI.
My best wishes to you.
Removed by mod