AI is a tool. It’s not a person, it’s not a be-all-end-all of anything. Just like a person can use Excel and come up with the wrong numbers, people can use AI and come up with the wrong answer.
Just like with every tool, there are people who can’t use them properly, there are people who are good enough to get modest results, and there are people who are experts at their craft who can do amazing things with them. AI is no different.
If you want a calculator, use a calculator - not AI. Use the right tool for the job and you’ll get the best result.
Studies can be made to say anything, and I know the ones you are talking about - they’re bogus.
Except that anyone who can use it properly can also just do the job without it, and the amount of damage it is doing because it’s freely available to everyone is insane.
You’re completely ignoring all my arguments. This sorta makes sense since your original reply was very “just ignore the bad stuff and it’s good!” but you’re going to have to address those things. I mean, you did say “they’re bogus” and then not elaborate at all, but I’m assuming that if you have the energy to keep writing comments then you’d also have the energy to do the far more efficient thing and show me why those studies are bogus, right?
No I’m not, I addressed them. LLMs not being able to do maths/spelling is a known shortcoming; anyone using them for that is literally using them wrong. And the studies you’re talking about were ridiculous - I know the ones you mean. Of course people who don’t learn something won’t know how to do it, for example - but the fact that they can do it with AI is a positive. Obviously getting AI to write an essay means the person will feel less “proud” of their work, as one of the studies said - but that’s not a “bad” thing. Just like people not needing to learn how to hunt and gather anymore isn’t a bad thing - the world as it is, and as it always will be from here on out, means we don’t need to know that unless we want to.
Again - AI is a tool, and idiots being able to use it to great effect doesn’t mean the tool is bad. If anything, that shows how good the tool is.
Those studies aren’t about them feeling less proud, they’re about the degradation of critical thinking skills.
I have repeatedly said that isn’t worth anything largely because it doesn’t do anything I can’t do with relative ease. Why do you think it’s so great? What do you honestly use it for?
As one example, I built an MCP server that lets LLMs access a reporting database, and made a Copilot Agent integrated into Teams, so now the entire business can ask a chatbot questions about business data in natural language, right in Teams. It can run reports for them on demand, pulling in new columns/tables. It can even flag when something might be wrong, since it also reads from our logs.
These people don’t know databases. They don’t know how to read debug/error logs.
I also use GitHub Copilot.
But sure, it can’t be of any help to anyone ever lol
I’ll take your word for it rather than just say “no”, but I still have to wonder why it needs “AI”, and whether people are going to build up a reliance on it to the point where they can’t find that info on their own. I mean, hell, like you say they already can’t handle the databases - so why are they even fucking around in there anyway, and why aren’t they learning how to use them if they’re so important for their jobs?