What you are saying is you don’t want creators to be able to make a living off their work. Because that is what “professional” means.
Do you use the integrated AI in new versions of Excel or do you ask ChatGPT or some other AI to write it out for you?
Since some commenters on here seemed a little too eager to go with “fuck copyright” and outright dismiss the particular copyright claim the story was about, I thought I’d help make sure they understand that it’s not all bad. Too often have well-intentioned people been too quick to dismiss a setup, only to replace it with something worse, or without really having any idea what to replace it with. You seem to understand that copyright serves a useful function in the current market-based economy, warts and all.
It is very difficult to make money in a market economy if you cannot sell the products of your labor. And to be able to do that, you need to have some ownership over said products. Ownership means exclusion, no way around it. Then you can transfer that ownership to an employer in exchange for a salary, or trade in an open market as a freelancer. Or create a collective wherein you share ownership. There are different models, but culture, to an extent, has always been monetized one way or another because creators have always needed to make a living, so they can continue to practice their craft while sustaining themselves and their families.
Copyright abolitionism sounds cool until you’re a professional creator with mouths to feed in this economy.
Of course there are smart ways creators can make money while also waiving their rights under copyright, but this does not work for everyone and many really just need to be able to sell the product of their labor to make a living.
I’m not saying it’s a perfect system, not by a long shot. But there’s no easy solution either.
I understand the sentiment, and you are right that copyright is an obstacle to some forms of creativity, especially anything that involves direct reuse of somebody else’s work without their consent. It has also enabled a marketplace for content that has, like many other markets over time, led to the concentration of market power in a small number of business concerns that effectively dominate their fields with extensive content libraries and armies of lawyers and lobbyists to promote their interests.
However, one should still not forget that if you’re just an independent creator who depends on their creativity to make a living and at some point manages to create something of great value, it is more likely than not that other small or big fish will try to take that and sell it without giving you a penny. And your only recourse will be copyright law. As in this case here. Saying “fuck copyright” without critically engaging with what is actually at stake in a specific case can lead to a problematic stance where you may find yourself defending grifters against honest creators trying to make a living off their work.
Well, if you want an argument from ‘first principles’: the photographer put actual work into producing this picture, not to mention their knowledge, likely expensive equipment, and hard-earned skill to take a truly great shot, whereas you did nothing for it. Unless you are a professional model, but then you were probably compensated for your work as part of a deal.
Now the uses you describe are very different. Some are more casual and non-commercial in nature. Courts will consider such factors in a copyright infringement case.
Now does the above mean you have absolutely no say in what happens to a picture of you taken by someone else? Not exactly: in some jurisdictions you can also prevent third parties from using your likeness for purposes that might be damaging to your dignity or reputation, though I do not know the details. I am not a lawyer, and it has been a long time since I studied these subjects. But basically my point is that the fact that it’s you in the picture may matter to an extent, depending on the laws protecting personhood in your country, but not in the way you assumed, where every photo of you is yours for the taking.
It is different only in that, in some jurisdictions at least, you can ask for the picture to be taken down or destroyed, though not if you are a public person appearing in public, as Trump is in this case. But even that does not give you the right to use the picture for your own gain without compensating the photographer. Because then you clearly not only have no objection to the picture being taken, but you value that picture and want to use it publicly, commercially even, and again, you owe a debt to the person who took it, who in fact depends on people paying for their pictures for their livelihood.
Actually no, when you go to a professional photographer to have your picture taken, you pay for it. Because they put in the work and need to be compensated for it. By that logic people would never have to pay photographers for portraits, weddings, none of that. Just because you’re in a picture doesn’t mean you don’t owe a debt to the person who took it.
Sure, until you become a creative professional and you see someone with a lot more money than you making even more money off your work, and then you might instead say “fuck that guy”!
Granted, our tendency towards anthropomorphism is near-ubiquitous. But it would be disingenuous to claim that it does not play out in very specific and very important ways in how we speak and think about LLMs, given that they are capable of producing very convincing imitations of human behavior. As such, they also produce a very convincing impression of agency. As if they actually do decide things. Very much unlike dice.
The AI did not “decide” anything. It has no will. And no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust in the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.
Welp, regardless of the very real issues in these countries, this is exactly the kind of rhetoric that precedes an invasion, as it did when Putin started publicly questioning Ukraine’s status as a country. This helps cement my assessment that Israel is going to go for a larger land grab with the pretext of building a buffer zone for the protection of its citizens.
Why is that?
Nice but paywalled for me
It was a more optimistic time, perhaps a more naive time depending on your perspective. A time when most people felt that crowds were wise and the truth would surface spontaneously. When the internet would help us spread knowledge and democracy and none of the bad things. When conspiracy theories, disinformation, outright hatred, and bigotry were considered fringe phenomena that could be kept at bay. When people would point to 4chan as the worst the internet had to offer, if they even knew about it. When politicians and their voters could argue passionately without necessarily feeling that the other side were “extremists” or “fascists” who would literally “destroy our country” if they won an election.
The world is cracking at the seams lately, and this leads more people to want to put the brakes on the internet. Liberals especially, witnessing with horror the surge of the far right and attributing it in part to the internet’s ability to amplify anything: any voice, any shitty little take, no matter how extreme, how misinformed, or how bigoted. Most likely misinformed and bigoted with someone like Musk at the helm, the thinking goes. In short, liberals have shifted from the exuberant naïveté of the past to protection mode, trying to stem the tide of right-wing populism and perhaps ultimately fascism. And so they come off as overbearing censors to anyone who doesn’t understand why they do what they do, or who is still optimistic that a lack of censorship will only lead to good things.
Freedom only works with a social contract in place: some consensus, some ground truths about the world that we can all agree on, or that a solid, relatively stable majority at least can agree on. When that starts to break down, freedom to say and do whatever you want online may in fact bring the downfall sooner by stoking the fires of division. Of course, the likes of Musk probably do think they are fighting the good fight and championing free speech, but he seems increasingly to be shifting to the right politically, fighting for his presumed right to shape the world in his image and grow his business empire unchecked rather than for some ideal of freedom and democracy. The likes of him, businessmen with nearly unchecked power and ultimately more concern for their business and personal aspirations than for democracy, are probably going to become a bigger threat to our freedoms than the government of Australia. Maybe. Probably.
I personally do think that liberals have often gone overboard in their speech-policing zeal, but on the other hand I understand why they do what they do. Policing the internet seems like a much easier alternative than actually addressing all the major, sometimes seemingly existential socioeconomic challenges liberal democracies face today. The latter would deprive right-wing populists and extremists of much of their influence, but is of course way, way harder than policing speech.
Well it is one thing to automate a repetitive task in your job, and quite another to eliminate entire professions. The latter has serious ramifications and shouldn’t be taken lightly. What you call “menial bullshit” is the entire livelihood and profession of quite a few people, taxi driving being just one example. And the means to make some extra cash for others. Also, a stepping stone for immigrants who may not have the skills or means to get better jobs but are thus able to make a living legally. And sometimes the refuge of white collar workers down on their luck. What are all these people going to do when taxi driving is relegated to robots? Will there be (less menial) alternatives? Will these offer a livable wage? Or will such people end up long-term unemployed? Will the state have enough cash to support them and help them upskill or whatever is needed to survive and prosper?
A technological utopia is a promise from the 1950s. Hasn’t been realized yet. Isn’t on the horizon anytime soon. Careful that in dreaming up utopias we don’t build dystopias.
Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were in the spirit specifically of protecting the average person and public-interest non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes in the process severely limiting human potential.
AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and collectively.
AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant every implementation of a statistical model personhood). Also, AI is not vital to human development, and as such one could argue it does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.
Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to - and often to replace in their jobs - the very humans whose content they have ingested.
See, the tables are now turned and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work. More specifically uses that may cause damage, to the copyright owner or society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There’s a mechanism for individual copyright owners to grant rights to specific uses: it’s called licensing and should be mandatory in my view for the development of proprietary LLMs at least.
TL;DR: AI is not human, it is a product, one that may augment some tasks productively, but is also often aimed at replacing humans in their jobs - this makes all the difference in how we should balance rights and protections in law.
Be careful with that logic, these are jobs forever lost to robots. They will eventually come for your job or the job of someone you know. Increasingly the question won’t be whether robots can do X better than humans, but whether they should.
Interesting. And shady. Though not about recording conversations.
Unfortunately, unless you are a tiny niche community that isn’t ever targeted by spam or idiots (and how common is that really), moderators are a necessary evil. You probably don’t hate moderators. You probably hate bad/aggressive/biased/etc moderators. Or maybe sometimes you are the problem, I don’t know. It is not a problem with an easy solution. Usually large forums with no moderation quickly become unbearable to most people. And then moderators become in turn unbearable to some people.
Maybe a trusted AI can do a better job at this: give it the community rules and ask it to enforce them objectively, transparently, and dispassionately, unless a certain number of participants complain, in which case it can reverse its decision and learn from that.
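To make the idea concrete, here is a minimal sketch of that appeal-and-reversal loop. Everything in it is hypothetical: the rule check is a toy stub standing in for an actual LLM call, and `APPEAL_THRESHOLD`, `Moderator`, and `violates_rules` are names invented for illustration.

```python
# Hypothetical sketch of AI moderation with a participant-appeal override.
# A real system would replace violates_rules() with a call to a model
# prompted with the community rules.

APPEAL_THRESHOLD = 3  # appeals needed to overturn a removal (arbitrary choice)

def violates_rules(post: str) -> bool:
    """Toy stub standing in for an LLM classifier given the community rules."""
    return "BUY NOW!!!" in post  # crude spam heuristic, for the sketch only

class Moderator:
    def __init__(self) -> None:
        # Maps each removed post to the set of users who appealed the removal.
        self.removed: dict[str, set[str]] = {}

    def review(self, post: str) -> str:
        """Initial automated decision: remove the post or keep it."""
        if violates_rules(post):
            self.removed[post] = set()
            return "removed"
        return "kept"

    def appeal(self, post: str, user: str) -> str:
        """Record an appeal; enough distinct appeals reverse the decision."""
        if post not in self.removed:
            return "not removed"
        self.removed[post].add(user)
        if len(self.removed[post]) >= APPEAL_THRESHOLD:
            # Reverse the decision (a real system would also feed this
            # back as a training/feedback signal).
            del self.removed[post]
            return "reinstated"
        return "appeal recorded"

mod = Moderator()
print(mod.review("BUY NOW!!! Cheap watches"))  # removed
for user in ["alice", "bob", "carol"]:
    status = mod.appeal("BUY NOW!!! Cheap watches", user)
print(status)  # reinstated after the third appeal
```

The transparency part would come from logging each decision together with the rule the model cited, so participants can see why a post was removed before deciding whether to appeal.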