It’s possible it reduces the probability that things like wrongly answered Stack Overflow questions get drawn on, so it might actually work a bit.
Kinda like how with image generation you get vastly better results by adding a negative prompt such as “low quality, jpeg artifacts, extra fingers, bad hands”, etc. — the datasets scraped from boorus actually do include a bunch of those tags, so using them steers the generation toward outputs that don’t have features matching them.
The better way, rather than a vague “make no mistakes”, is to feed in a template of stylistic preferences: “only this var type, only structures like this, we avoid these structures or variable types”. As the context window is repeatedly compressed during work, “make no mistakes” probably gets contorted into “mistakes! make!”, then “MISTAKES!!!”, like “NO, money down!”
Bonus points if the style is stored in the repo as a template, so when the change is done you can simply say, “ok, now read that style doc again and fix what you re-f’d up”. Sometimes it’ll even re-read the style doc of its own volition.
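For the sake of illustration, a repo-stored style doc like that might look something like this (file name and every rule here are invented examples, not from any real project):

```
# STYLE.md — project code style (hypothetical example)

- Declare locals with explicit types; no `var`.
- Use early returns; no conditionals nested deeper than two levels.
- Prefer immutable data structures; call out any mutable shared state.
- No abbreviations in identifiers except well-known ones (id, url, db).
```

Concrete, checkable rules like these survive context compression far better than a blanket “make no mistakes”, and give the model something specific to re-check against after each change.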
Using an LLM for dev is like directing an intern on 5 espressos to complete a coding task, but dumber.
“Make no mistakes” gives big “do not hallucinate” energy.
“Generate an image with no dog in it.”
But these remarks do seem to increase the quality of LLM outputs.
The most relevant words in that sentence.
I guess that’s fitting for nearly everything AI related 😅