IIRC, Wikipedia is CC-BY-SA licensed, generally it’s okay to take, remix, and publish its content, no matter whether you’re using it for good or evil. You just have to Share Alike the results.
But asking a known biased bullshit generator to fact check things is pretty cringe in general.
I wonder if it's a one-time fork, or if Musk wants to continue to derive benefit from Wikipedia authors and editors. Is it possible it's actually a real-time AI filtering each request? If so, that would burn a massive amount of AI tokens.
It would also present some great methods to tease out exactly how his filters are working. Assuming it's real-time, a Wikipedia page could be created with test content containing specific words or phrases, then the Grok version checked to see if it alters them. A full map could be built of exactly the rules it's using.
It can't be a one-time fork. Whenever his propaganda changes, he's going to want to give different instructions to his AI and regenerate the entire encyclopedia, like he already did.
It’s a different use case from an encyclopedia that is based on facts and the truth.
It's a shame there is no license that forbids AI use. Well, there kind of are, but none are common and they probably wouldn't hold up in court. Still, it would be nice to attach one to a work to communicate the preference that AI not reuse it.
AI training is predominantly classified as "fair use" in the US right now, so it wouldn't even matter if you said "No AI": copyright does not apply.
That's what I meant by "wouldn't hold up in court". Thanks for filling in the specifics. I wish it weren't classified as fair use; I think it's an unfortunate way to avoid paying people for training data, and that's hiding the true cost of the system.
You must have misread, grok is doing the fact checking not Elon.
You must have misread, I’m not writing a book, my pen is
I don’t get the joke? Mine would make the most sense reading the post I’m responding to and then immediately reading mine.
Grok and Elon are the same thing
O…Kay? I mean that aligns with my joke but it’s a bit less funny if you explain it