• WaitThisIsntReddit@lemmy.world · 22 minutes ago

        A couple of agent iterations and it will compile. It definitely won’t do what you wanted, though, and if it does, it will be in the dumbest way possible.

        • TORFdot0@lemmy.world · 18 minutes ago

          Yeah, you can definitely bully AI into giving you something that will run if you yell at it long enough. I don’t have that kind of patience.

          Edit: typically I see it just silently dump errors to /dev/null if you complain about it not working lol

  • thejml@sh.itjust.works · 2 hours ago

    Copilot kept finishing my code for me in near real time… it completely disrupted my train of thought and my productivity dropped tremendously. I finally disabled it.

    I LIKE writing code. Stop trying to take away the stuff I WANT to do, and instead take away the stuff I HATE doing.

  • garretble@lemmy.world · 2 hours ago

    I had a bit of a personal-growth breakthrough with my code today.

    I learned a bit more about Entity Framework, which my company is using for a project, and was able to create a database table, query it, and add/delete/update records - normal CRUD stuff.
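
    (For context, that kind of CRUD round trip looks roughly like the sketch below. Entity Framework is a .NET-only ORM, so this is just an analogous illustration in Python using SQLAlchemy; the table and column names are made up for illustration, not anything from the actual project.)

    ```python
    # Analogous CRUD sketch with SQLAlchemy (a Python ORM), standing in for
    # Entity Framework. "Item"/"items" are invented names for illustration.
    from sqlalchemy import create_engine, select
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session


    class Base(DeclarativeBase):
        pass


    class Item(Base):
        __tablename__ = "items"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]


    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)          # create the table

    with Session(engine) as session:
        session.add(Item(name="first"))       # create
        session.commit()

        item = session.scalars(
            select(Item).where(Item.name == "first")
        ).one()                               # read

        item.name = "renamed"                 # update
        session.commit()

        session.delete(item)                  # delete
        session.commit()
    ```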

    I normally work mostly on front end code, so it was rewarding to learn a new skill and see the data all the way from the database to the UI and back - all my code. I felt great after doing a code review this afternoon to make sure I wasn’t missing anything, and we talked about some refactoring to make it better.

    AI will never give you that.

    • Joe@discuss.tchncs.de · 2 hours ago

      No, but it can help a capable developer to have more of those moments. One can use LLMs and coding agents to (a) succinctly explain the relationships in a complicated codebase and (b) quickly figure out why one’s code doesn’t work as expected (from simple bugs to calling out one’s own fundamental misunderstandings), giving one more time to focus on what matters to oneself.

      • Echo Dot@feddit.uk · 1 hour ago

        “giving one more time to focus on what matters to oneself.”

        Is that being an insufferable prick online? Because I assure you no one wants you to spend more time on that; you spend enough time on it as it is.

        • Joe@discuss.tchncs.de · 1 hour ago

          Hm? Oh, I obviously misread the room. It seems I interrupted a circle jerk? My apologies.

      • TORFdot0@lemmy.world · 32 minutes ago

        AI can help you be more agile in getting a PoC out, but vibe coding always ends up eating itself: either you aren’t capable enough to fix it (because you are a vibe coder), or you spend more time on the back nine trying to clean up the code so you don’t have so many hacks and so much redundancy, because the AI was too literal, hallucinated fake libraries that return null, or its context window expired and it wrote five different versions of the same function.

  • MadMadBunny@lemmy.ca · 3 hours ago

    But, will it work, huh? HUH?

    I can also type a bunch of random strings of words. It doesn’t make them any more understandable.

    • ryannathans@aussie.zone · 3 hours ago

      Some models are getting so good that they can patch user-reported software defects following test-driven development, with minimal or no changes required in review. Specifically, Claude Sonnet and Gemini.

      So the claims are at least legit in some cases

      • 6nk06@sh.itjust.works · 2 hours ago

        Oh good. They can show us how it’s done by patching open-source projects, for example. Right? That way we will see that they are not full of shit.

        Where are the patches? They have trained on millions of open-source projects after all. It should be easy. Show us.

        • JustinTheGM@ttrpg.network · 2 minutes ago

          That’s an interesting point, and it leads to a reasonable argument: if an AI is trained on a given open-source codebase, developers should have free access to use that AI to improve said codebase. I wonder whether future license models might include such clauses.