But will it work, huh? HUH?
I can also type a bunch of random sentences. That doesn't make them any more understandable.
Some models are getting so good that they can patch user-reported software defects following test-driven development, with minimal or no changes required in review. Specifically, Claude Sonnet and Gemini.
So the claims are at least legit in some cases.
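To make the "following test-driven development" part concrete, here is a minimal sketch of that loop: first a failing test that reproduces the user report, then the patch that makes it pass. The function, the off-by-one defect, and the test are all hypothetical examples for illustration, not taken from any real project or model output.

```python
# Hypothetical bug report: "the last row disappears when results are paged."

def paginate(items, page_size):
    """Split `items` into pages of at most `page_size` elements."""
    # The buggy version iterated with range(0, len(items) - 1, page_size),
    # which could skip the final element. The fix is to iterate over the
    # full length of the list.
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

def test_paginate_keeps_last_item():
    # Step 1 of the TDD loop: write a test that fails against the buggy
    # implementation; step 2 is the one-line fix in paginate() above.
    assert paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

if __name__ == "__main__":
    test_paginate_keeps_last_item()
    print("ok")
```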
Oh good. They can show us how it's done by patching open-source projects, for example. Right? That way we will see that they are not full of shit.
Where are the patches? They have trained on millions of open-source projects after all. It should be easy. Show us.
That's an interesting point, and it leads to a reasonable argument: if an AI is trained on a given open-source codebase, developers should have free access to use that AI to improve said codebase. I wonder whether future licenses might include such clauses.
Will it execute... probably.
Will it execute what you want it to... probably not.