cross-posted from: https://lemmy.zip/post/49954591

“No Duh,” say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite, so developers have to go in and find where the problems are - resulting in a net slowdown of development rather than a productivity gain.

Then there’s the issue that nobody has agreed on a way of tracking productivity gains in the first place - a glaring omission given the billions of dollars being invested in AI.

According to Bain & Company, companies will need to fully commit themselves to realize the gains they’ve been promised.

“Fully commit” to see the light? That… sounds more like a kind of religion than like critical or even rational thinking.

  • dumples@midwest.social · 8 points · 1 day ago

    I always wondered how they got those original productivity claims. I assume they count every time a programmer accepts an AI suggestion. Seems like the way to get the highest marketable number for a sales team. I know that when I use those suggestions, occasionally they will be 100% correct and I won’t have to make any changes. More often than not it starts out correct, and then as it fills in it adds things I don’t need, or is wrong, or doesn’t fit how I like to write my code. Then I have to delete it and recreate it.

    The most annoying part is when I think I’m tabbing for autocomplete and it just adds more code that I don’t need.
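
    A purely hypothetical sketch of how that kind of acceptance counting might be instrumented - every accepted suggestion counts as a win, whether or not the inserted code survives (none of this is from the article or any real product):

    ```python
    # Hypothetical telemetry for an AI autocomplete plugin (illustration only).
    from dataclasses import dataclass

    @dataclass
    class SuggestionStats:
        accepted: int = 0            # the number that ends up in the sales deck
        lines_inserted: int = 0
        lines_later_deleted: int = 0

        def record_acceptance(self, lines: int) -> None:
            self.accepted += 1
            self.lines_inserted += lines

        def record_deletion(self, lines: int) -> None:
            # Tracked, but never subtracted from the headline number.
            self.lines_later_deleted += lines

        @property
        def headline_metric(self) -> int:
            return self.accepted     # "suggestions accepted per day"

        @property
        def surviving_lines(self) -> int:
            return self.lines_inserted - self.lines_later_deleted

    stats = SuggestionStats()
    stats.record_acceptance(lines=12)  # tab-completed a whole function
    stats.record_deletion(lines=10)    # ...then rewrote most of it by hand
    print(stats.headline_metric, stats.surviving_lines)  # 1 accepted, 2 lines kept
    ```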

    • HaraldvonBlauzahn@feddit.org (OP) · 6 points · 15 hours ago

      “I always wondered how they got those original productivity claims.”

      Probably by counting produced lines of code, regardless of their correctness or maintainability.

      And that’s probably combined with what John Ousterhout calls “debugging a system into existence”: just assuming the newly generated code works until somebody inevitably shows up with a bug report, and then doing the absolute minimum to make that specific bug report go away - preferably by adding even more code.
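
      A hypothetical caricature of that pattern (an invented example, not from Ousterhout or the article): the generated code is assumed to work, and every bug report just gets its own special case bolted on.

      ```python
      # Invented example of "debugging a system into existence":
      # instead of rethinking the parsing logic, each bug report adds one more branch.
      def parse_price(text: str) -> float:
          if text == "":               # bug report #1: crashed on empty input
              return 0.0
          if text.startswith("$"):     # bug report #2: crashed on "$19.99"
              text = text[1:]
          if "," in text:              # bug report #3: crashed on "1,299.00"
              text = text.replace(",", "")
          if text.endswith("USD"):     # bug report #4: crashed on "19.99 USD"
              text = text[:-3].strip()
          # ...the next bug report will add the next branch...
          return float(text)

      print(parse_price("$1,299.00 USD"))  # 1299.0
      ```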

      • droans@midwest.social · 2 points · 9 hours ago

        It seems like a good way to actually determine productivity would be to make it competitive.

        Have marathon and long-term coding competitions between 100% human coding, AI-assisted coding, and 100% AI. Rate them on total time worked, mistakes, coverage, maintainability, extensibility, etc., and test the programmers on their knowledge of their own code.
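
        A rough sketch of what such a rating could look like - the criteria names, weights, and numbers here are made up purely for illustration:

        ```python
        # Hypothetical scoring for a human vs. AI-assisted vs. full-AI coding contest.
        from dataclasses import dataclass

        @dataclass
        class Entry:
            hours_worked: float
            defects: int
            test_coverage: float         # 0.0 - 1.0
            maintainability: float       # reviewer score, 0 - 10
            own_code_quiz: float         # author quizzed on their own code, 0 - 10

        def score(e: Entry) -> float:
            # Time and defects count against you; everything else counts for you.
            return (-1.0 * e.hours_worked
                    - 5.0 * e.defects
                    + 20.0 * e.test_coverage
                    + 2.0 * e.maintainability
                    + 2.0 * e.own_code_quiz)

        human = Entry(hours_worked=40, defects=3, test_coverage=0.85,
                      maintainability=8, own_code_quiz=9)
        full_ai = Entry(hours_worked=10, defects=12, test_coverage=0.40,
                        maintainability=4, own_code_quiz=2)
        print(score(human), score(full_ai))  # -4.0 vs. -50.0
        ```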

      • dumples@midwest.social · 1 point · 8 hours ago

        That’s what I thought. Each line of generated code counts, even if it gets deleted afterwards. Or have someone try to get the number as high as possible in a single trial.