I think the key to good LLM usage is a light touch. Let the LLM know what you want, maybe refine it if you see where the result went wrong. But if you find yourself deep in conversation trying to explain to the LLM why it’s not getting your idea, you’re going to wind up with a bad product. Just abandon it and do the thing yourself, or find someone who knows what you want.
They get confused easily, and despite what’s being pitched, they don’t really learn very well. If they get something wrong the first time, they aren’t going to figure it out after another hour or two of back-and-forth.
In my experience, they’re better at poking holes in code than writing it, whether that’s greenfield or brownfield.
I’ve tried to get it to make whole sections of changes for me, and it feels very productive, but when I actually time myself I find I probably spend more time correcting the LLM’s work than I would have spent just writing it myself.
But if you ask it to judge a refactor, you might actually get one or two good points. You just have to be careful to double-check its assertions if you’re unfamiliar with anything, because it will lead you into some real blunders if you follow it blindly.
At work we’ve got CodeRabbit set up on our GitHub, and it has found bugs that I wrote. Sometimes the thing drives me insane with pointless comments, but just today it flagged a spot that would have turned into a big bug in prod in about three months.
But if you find yourself deep in conversation trying to explain to the LLM why it’s not getting your idea, you’re going to wind up with a bad product.
Yes. Kind of. It takes a little experience with LLMs (a couple of days) to learn that failing to understand your corrections means immediate delete and try another LLM. The only OpenAI LLM I tried was their 120B open-weight release. It insisted it was correct in its stupidity. That’s worse than LLMs that forget the corrections from three prompts ago, though I learned that’s also grounds for deletion rather than holding out any hope of usefulness.