I’m gay
Disgusting typical techbro behavior, I hate it.
The president really made an executive order about preventing woke AI? Humanity is cooked fam
That’s going to take some research and a rewrite to get it looking like the resumes it was trained to match. You need to be adding synonyms and dependencies, because these AIs lack any model of how we actually do IT; they only see correlations between words.
Very simple solution: ask AI to rewrite your resume for specific job applications or fields.
Having seen how so many average people are using AI, I’m sadly not surprised that people are just going with whatever is up there, even when it is wrong.
The difference is that Camacho is a himbo, our current president is more of a bridge troll
It really isn’t hidden lol. But you will get countless LLM defenders online who claim you can eliminate the bias with prompting or other hacks, which don’t address the underlying issue and do nothing but patch a broken system. To fix LLM bias you need to correct it systematically, and very few folks have bothered to design methods for doing that. In the case of Grok, it’s explicitly designed to reference Musk’s bigoted musings on a subject before examining other information.
While you are correct, and the author deserves to be called out on their behavior, the context of the entire article is how they are struggling with being bombarded with things taking up their attention and time. This response seriously lacks any compassion for the author’s struggle and more or less ignores the entire point of the article. Beehaw isn’t the place for one-liner gotchas. Please try to engage with the content if you’re going to comment.
Eh, frankly I just see us moving to stricter reputation-based systems - someone has to vouch for you.
So how is an AI prompt poking for Holocaust denial different than a Google search looking for Holocaust denial?
Because one is something you have to actively search for. The other is shoved in your face by a figure that many feel has some authority.
Why are you defending anything about this situation? This is not a thread to discuss how LLMs work in detail, this is a thread about accountability, consequences, hate, and society.
Definitely something I’ve observed even here. Luckily we get few applications and there is a report button, but I share the author’s frustration and their jaded view that services such as ours are only tenable for a limited time. Eventually it will be trivially easy to flood this place with slop.
314m what a joke! Still, good to see them lose this court case
You believe that a police officer, acting in a public role, should be given privacy while performing public actions? Say more
Even if an officer’s name and badge number were not public (which would be weird, because both of these are a part of a police officer’s uniform), what is the concern about a tool which provides these?
I would love to hear what has you concerned about a tool which provides a piece of information which is, by law (California Penal Code Section 830.10), supposed to be accessible to all individuals interacting with the officer - their name and/or badge number.
In what world is that even a plausible outcome of this news? It feels like a non sequitur in its pure absurdity. If they had a list of 1,000 things they could do with this database, that would not even be on the list.
I understand you are talking about something which either interests you or is a cause you care about, but we’re talking about monumental governmental surveillance by a president many scholars are calling a fascist. This is neither the time nor the place to discuss such matters, and trying to have that conversation could easily be read as dismissing the plentiful and obvious concerns around the privacy and safety of the American public.
Already not a fan of Palantir, this is pretty bad news
wokepedia lmao what’s wrong with the world
I’m glad to see a lot of different people trying different models. I don’t think microblogging really has the capability of being nontoxic, but who knows? Maybe they’ll succeed where everyone else has failed. I certainly know we’re trying to have nontoxic social media around here, and we have plenty of issues at a much smaller scale.
I understand why you might be upset based on how they made a rather sweeping statement about the comments without addressing any content. When they said “a bunch of sanctimonious people with too much appreciation for their own thoughts and a lack of any semblance of basic behaviour” it might strike many as an attack on the user base, but I’m choosing to interpret it through the lens of simply being upset at people who are not nice. I could be wrong, and perhaps @sabreW4K3@lazysoci.al can elaborate on exactly who and what they were talking about.
Regardless, let’s try our best to treat them in good faith. Don’t let your own biases shape how you interpret people or their language. Please try to ask clarifying questions first before jumping to the assumption that they are a right wing troll.
This is just one of the many far-reaching effects of the disinformation age we are headed into. It would not surprise me if, in the future (assuming humanity survives our climate crisis), this period of time is compared to the Middle Ages as a period of great loss of human knowledge.
For what it’s worth, a lot of what the article brings up isn’t particularly new. Fake studies are nothing new, but their scope will definitely increase. While it is manpower-intensive, this is largely solved by peer review. In fact, perhaps ironically, AI could be used to do a first-pass review, summarizing what looks AI-generated versus human-written, and send that along to a human (a rough sketch of what I mean is below).
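To be clear, this is just my own illustration of the idea, not anything the article proposes; `score_ai_likelihood` is a hypothetical stand-in for whatever detector model a journal would actually plug in (here it’s a stub so the sketch runs):

```python
# Hypothetical first-pass triage for submissions: flag likely-AI text
# so it reaches a human reviewer first, rather than replacing review.

def score_ai_likelihood(text: str) -> float:
    """Return a 0..1 score; a real system would call a trained detector."""
    return 0.0  # stub: assume human-written unless a real model says otherwise

def triage(submissions: list[str], threshold: float = 0.8) -> dict[str, list[str]]:
    """Split submissions into 'flagged' (prioritize for human review) and 'normal'."""
    buckets: dict[str, list[str]] = {"flagged": [], "normal": []}
    for text in submissions:
        bucket = "flagged" if score_ai_likelihood(text) >= threshold else "normal"
        buckets[bucket].append(text)
    return buckets

if __name__ == "__main__":
    print(triage(["Abstract: we report novel findings...", "As a large language model..."]))
```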
Corporation-funded studies designed to get around regulation or to promote a product, on the other hand, are something we’ve been dealing with for quite some time and an issue we haven’t really solved as a society. Anyone who works in science knows how to spot these from a mile away, because they’re nearly always published in tiny journals (low citation scores) which either don’t do peer review or have shady guidelines.

The richer companies have the money to simply run 40 or 50 studies concurrently and toss the data from every one that doesn’t have the outcome they’re looking for (in essence repeatedly rolling a d20 until they get a 20), which lets them get their findings into a bigger journal because each individual study is done above board. Some larger journals have created guidelines to try and fight this, but ultimately you need meta-analyses and other researchers to disprove what’s going on, and that’s fairly costly.
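To put a rough number on that d20 analogy (my own back-of-the-envelope, not a figure from the article): at the usual p < 0.05 significance threshold, the chance that at least one of 40 independent studies of a nonexistent effect comes up “significant” by luck alone is about 87%.

```python
# Probability that at least one of n independent null studies
# clears the usual p < alpha significance bar purely by chance.
alpha, n = 0.05, 40
p_at_least_one = 1 - (1 - alpha) ** n
print(f"{p_at_least_one:.0%}")  # -> 87%
```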
Also, as an aside: this belongs in the science community more than tech, in my opinion.