Gaywallet (they/it)

I’m gay

  • 54 Posts
  • 186 Comments
Joined 4 years ago
Cake day: January 28th, 2022

  • This is just one of the many far-reaching effects of the disinformation age we are headed into. It would not surprise me if, in the future (assuming humanity survives our climate crisis), this period is contrasted with the Middle Ages as a time of great loss of human knowledge.

    For what it’s worth, a lot of what the article brings up isn’t particularly new. Fake studies are nothing new, though the scope of them will definitely increase. While it is manpower intensive, this is easily solved by peer review. In fact, perhaps ironically, AI could be used to do a first-pass review, flagging what seems AI-created versus human-created, and then pass that along to a human reviewer.

    Corporation-funded studies designed to get around regulation or to promote a product, on the other hand, are something we’ve been dealing with for quite some time, and an issue we haven’t really solved as a society. Anyone who works in science knows how to spot these from a mile away, because they’re nearly always published in tiny journals (low citation score) which either don’t do peer review or have shady guidelines. The richer companies have the money to simply run 40 or 50 studies concurrently and toss the data from every one that doesn’t have the outcome they’re looking for (in essence repeatedly rolling a d20 until they get a 20), which allows them to get their findings into a bigger journal because it’s all nominally done above board. Some larger journals have created guidelines to try to fight this, but ultimately you need meta-analyses and other researchers to disprove what’s going on, and that’s fairly costly.

    Also, as an aside: this belongs in the science community more than tech, in my opinion.


  • It really isn’t hidden lol. But you will get countless LLM defenders online who claim you can eliminate the bias with prompting or other hacks, which don’t address the underlying issue or do anything but patch a broken system. To fix LLM bias you need systematic correction, and very few folks have bothered to design methods for it. In the case of Grok, it’s actually explicitly designed to reference Musk’s bigoted musings on subjects first, before examining other information.

  • In what world is that even a plausible outcome of this news? It feels like a non sequitur by its sheer absurdity. If they had a list of 1,000 things they could do with this database, that would not even be on the list.

    I understand you are talking about something which either interests you or is a cause you care about, but we’re talking about monumental government surveillance by a president many scholars are calling a fascist. This is neither the time nor the place to discuss such matters, and trying to have that conversation could easily be read as dismissing the plentiful and obvious concerns around the privacy and safety of the American public.


  • I understand why you might be upset based on how they made a rather sweeping statement about the comments without addressing any content. When they said “a bunch of sanctimonious people with too much appreciation for their own thoughts and a lack of any semblance of basic behaviour” it might strike many as an attack on the user base, but I’m choosing to interpret it through the lens of simply being upset at people who are not nice. I could be wrong, and perhaps @sabreW4K3@lazysoci.al can elaborate on exactly who and what they were talking about.

    Regardless, let’s try our best to treat them in good faith. Don’t let your own biases shape how you interpret people or their language, and please try to ask clarifying questions before jumping to the assumption that they are a right-wing troll.