Shit code review is not code review. If you just rubber-stamp everything or outsource it to someone who will, you aren’t doing code review.
Aside from that:
LLM-generated code is more likely to have subtle errors that a human would be very unlikely to make in otherwise mundane code.
Citation requested
My current least favorite thing is LLM-generated unit tests that don’t actually test what they claim to.
If I had a nickel for every single time I had to explain to someone that their unit test doesn’t actually do anything, or that they literally just copied the output and asserted against it (and that they are dealing with floating point, so that is actually really stupid)… I’d probably go buy some Five Guys for lunch.
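To make that complaint concrete, here’s a minimal, made-up sketch (the function and test names are invented, not from any real codebase) of the two patterns: a test that can never fail because it recomputes its own expected value, and a test that pins an exact float literal copied from one run instead of checking the intended answer with a tolerance.

```python
import math

def add(a, b):
    return a + b

# “Doesn’t actually test anything”: the expected value is recomputed with the
# exact logic under test, so this assertion cannot fail no matter what add() does.
def test_add_tautology():
    a, b = 0.1, 0.2
    assert add(a, b) == a + b

# “Copied the output and checked against it”: the literal was pasted from one
# run’s print(), so the test only pins that run’s rounding, and exact equality
# on floats makes it fragile on top of that.
def test_add_copied_output():
    assert add(0.1, 0.2) == 0.30000000000000004

# What the test presumably meant: the intended answer, compared with a tolerance.
def test_add_close():
    assert math.isclose(add(0.1, 0.2), 0.3, rel_tol=1e-9)
```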
It’s like saying that the problem is that you are using robots to assemble Cybertrucks rather than people. The problem isn’t who is super-gluing sharp jagged metal together. The problem is that your product is fundamentally shite and should never have reached production in the first place. And you need to REALLY work through your design flows and so forth.
I keep seeing it over and over again. Anyone who actually has to deal with coworkers using this bullshit, and who isn’t also in the cult, is going to recognize it.
If I had a nickel for every single… yada yada yada.
Sure, there have always been better and worse developers. LLMs are making developers who used to be better, worse.
Bad developers just do whatever. It doesn’t matter if they wrote the code themselves or if a tool wrote it for them. They aren’t going to be more or less detail-oriented whether it is an LLM, a Doxygen plugin, or their own fingers that made the code.
Which is the problem when people make claims like that. It is nonsense, and anyone who has ACTUALLY worked with early-career staff can tell you… those kids aren’t writing much better code than ChatGPT, and there is a reason so many of them have embraced it.
But it also fundamentally changes the conversation. It stops being “We should heavily limit the use of generative AI in coding because it prevents people from developing the skills they need to evaluate code” and becomes “We need generative AI to be better”.
It was the exact same thing with “AI can’t draw hands”. Everyone and their mother insisted on that. Most people never thought about why basically all cartoon characters have four-fingered hands and so forth. So, when the “Studio Ghibli filter” was made? It took off like hotcakes because “Now AI can do hands!” and there was no thought toward the actual implications of generative AI.
Nothing outside of the first paragraph here is terribly meaningful, and the first paragraph is just trying to talk past what I said before. I’ll reiterate, very clearly.
I have observed several of my coworkers who used to be really good at their jobs get worse at them (and make me spend more time ensuring code quality) since they started using LLM tools. That’s it. That’s all I care about. Maybe they’ll get better. Maybe they won’t. But right now I’d strongly prefer people not use them, because people using them has made my experience worse.
I know it’s not related, but I’m curious about this part.
I know it has an aluminum-based frame, which should limit its use for hauling heavy loads, but what else?