We are prediction machines, but nothing like ChatGPT. Current AI has no ability to learn, adapt, or even consider the future.
Our regular jackdaw visitor will tear things up and throw them around when it gets frustrated (such as when it wants a treat without having to put in any effort).
It could simply be bold and really frustrated.
Well I upvoted the post so that people will see the comments!
You managed to get your money back?! How?
I think that’s an American thing. Besides, that money is long gone since I made the purchase several years ago.
I asked for a refund when they kept delaying shipment of my Librem 5. I was simply denied and that was it. They told me I could still choose to receive the phone, but I don’t want it since it’s a bad, practically useless product now.
I reported them in my country for it.
I reply to people on lemmy on a case-by-case basis. I decide how to eat food on a case-by-case basis. But if you give me a deck of cards and tell me to shuffle them, I generally do not decide how to shuffle on a case-by-case basis; it doesn’t matter whose cards they are.
That’s not what case-by-case means. Wiktionary:
Separate and distinct from others of the same kind; treated individually.
Case-by-case implies that each treatment is different and is not generalisable; but the fact that they use a patient’s own tissue does not make each individual treatment different. If you want to extend the logic, you might call vaccination a case-by-case treatment as well, since they use different needles for each person.
It was done on a case-by-case basis. Each person has their own therapy tailored to them. This does not appear to be a mass-producible solution.
I’m not sure what you’re expecting for something to be considered a cure. What they are describing is a treatment procedure which uses the patient’s own tissue. How does that make it case-by-case?
It can at least get one unstuck, past an indecision paralysis, or give an outline of an idea. It can also be useful for searching through data.
If this works, it’s noteworthy. I don’t know if similar results have been achieved before because I don’t follow developments that closely, but I expect that biological computing is going to catch a lot more attention in the near-to-mid-term future. Given its efficiency, and the increasingly tight constraints environmental pressures impose on us, I foresee it eventually eclipsing silicon-based computing.
FinalSpark says its Neuroplatform is capable of learning and processing information
They sneak that in there as if it’s just a cool little fact, but this should be the real headline. I can’t believe they just left it at that. Deep learning cannot be the future of AI, because it doesn’t facilitate continuous learning. Active inference is a term that will probably be thrown about a lot more in the coming months and years, and as evidenced by all kinds of living things around us, wetware architectures are highly suitable for instantiating agents that do active inference.
I don’t know about google because I don’t use it unless I really can’t find what I’m looking for, but here’s a quick ddg search with a very unambiguous and specific question, and from sampling only the top 9 results I see 2 that are at all relevant (2nd and 5th):
To answer my question, I first need to mentally filter out 7/9 of the results visible on my screen, then open both of the relevant ones in new tabs and read through lengthy discussions to find out whether anyone has shared a proper solution.
Here is the same search using perplexity’s default model (not pro, which is a lot better at breaking down queries and including relevant references):
and I don’t have to verify all the details because even if some of it is wrong, it is immediately more useful information to me.
I want to re-emphasise though that using LLMs for this can be incredibly frustrating too, because they will often insist assertively on falsehoods and generally act really dumb, so I’m not saying there aren’t pros and cons. Sometimes a simple keyword-based search and manual curation of the results is preferable to the nonsense produced by a stupid language model.
Edit: I didn’t answer your question about ‘malicious’, but I can give some examples of what I consider malicious, and you may agree that it happens frequently enough:
etc.
Maybe I can share some insight into why one might want to.
I hate searching the internet. It’s a massive mental drain for me to try to figure out how to put my problem into the words that others with the same problem will have used before me - mental processing power wasted on purely linguistic overhead instead of on understanding and learning about the problem.
I hate the (dis-/mis-)informational assault I open myself to by skimming through the results, because the majority of them will be so laughably irrelevant, if not actively malicious, that I become a slightly worse person every time I expose myself.
And I hate visiting websites. Not only because of all the reasons modern websites suck, but because even if they are a delight in UX, they are distracting me from what I really want, which is (most of the time) information, not to experience someone’s idiosyncratic, artistic ideas for how to organise and present data, or how to keep me ‘engaged’.
So yes, I prefer a stupid language model that will lie about facts half the time and bastardise half my prompts if it means I can glean a bit of what the internet has to say about something, because I can more easily spot plausible bullshit and discard it or quickly check its veracity than I can magic my vague problem into a suitable query only to sift through more ignorance, hostility, and implausible bullshit conjured by internet randos instead.
And yes, LLMs really do suck even in their domain of speciality (language - because language serves a purpose, and they do not understand it), and they are all kinds of harmful, dangerous, and misused. Given how genuinely ignorant people are of what an LLM really is and what it is really doing, I think it’s irresponsible to embed one the way google has.
I think it’s probably best to… uhh… sort of gatekeep this tech so that it’s mostly utilised by people who understand the risks. But capitalism is incompatible with niches and bespoke products, so every piece of tech has to be made with absolutely everyone as a target audience.
We’re all living in amerikka
koka kola
santa klaus
I don’t remember encountering the particular bug they’re describing. I was hoping it was about the behaviour of drag-and-dropping something into the browser, such as with those “drop a file here to upload” areas. I am often simply unable to make that work: instead of the thing being dropped into the webpage’s element, the browser opens the file directly, which is not really something I ever want to do.
Same problem: 1070, NixOS, Plasma.
While it’s possible that this is the case, we don’t actually know that because the people with the right skills aren’t spending a lot of time and resources on experimenting with new ideas and concepts unless there’s profit to be made from it.
The chances of coming up with an idea for a new kind of OS that will bring a great return on investment in terms of profit and market share are very low, so entrepreneurs spend their time thinking about more lucrative ventures.
If we lived in a post-scarcity Communist society where everyone is free to do what they feel is important and fulfilling to them, we’d be more likely to see new and novel ways of interfacing with computers (and technology in general).
But we don’t.
Edit: Also, operating systems are a lot of work.
I finally got around to restarting my system after adding hardware.nvidia.modesetting.enable = true; to my NixOS config, and it works perfectly! Thank you for the suggestion. I likely wouldn’t have figured that out on my own any time soon.
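For anyone else who finds this thread: here’s a rough sketch of where that option lives, assuming a standard configuration.nix (the surrounding lines are illustrative of my setup, not exact):

# /etc/nixos/configuration.nix (sketch - adapt to your own setup)
{ config, pkgs, ... }:

{
  # Use the proprietary NVIDIA driver for the graphical session.
  services.xserver.videoDrivers = [ "nvidia" ];

  hardware.nvidia = {
    # Enables kernel modesetting (nvidia-drm.modeset=1), which Wayland
    # compositors such as Plasma need with the proprietary driver.
    modesetting.enable = true;
  };
}

Then apply it with sudo nixos-rebuild switch and reboot.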
Thanks for the suggestion. sudo cat /sys/module/nvidia_drm/parameters/modeset indeed prints N, so I’ll try adding that to my system config.
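For reference, this is roughly what the check looks like and what the output means (Y means kernel modesetting is enabled for nvidia-drm, N means it isn’t):

# Check whether the nvidia-drm kernel module has modesetting enabled
sudo cat /sys/module/nvidia_drm/parameters/modeset
# Prints Y when nvidia-drm.modeset=1 is in effect, N otherwise

# After enabling it in the NixOS config, rebuild and reboot
sudo nixos-rebuild switch
sudo reboot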
Once. They do not have the ability to learn or adapt on their own. They are created by humans through “deep learning”, but that is fundamentally different from continuously learning based on one’s own actions and experiences.