• wiegell@feddit.dk · 1 day ago

    Mitchell Hashimoto writes a lot of Zig with AI (and this interview is almost a year old), see: https://www.youtube.com/watch?v=YQnz7L6x068&t=490s How long has it been since you tried the tools? I think there has been some pretty astounding progress during the last couple of months. Until recently I did not use it daily, but now I just can't ignore the efficiency boost it gives me. There are definitely security concerns, and at this point you should not trust code that you do not read/understand, but tbh I'm starting to believe that AI might (at least in the short term) free up resources to patch stuff and implement security features that otherwise were not prioritised due to the focus on feature development. What it does to the IT sector in the long run - who knows…

    • onlinepersona@programming.dev · 7 hours ago

      In that video he says it's good for autocomplete. But speaking from experience testing it on Rust, Python, JS, HTML and CSS, it performed the worst on Rust. It wrote tests well, but sucked at features and refactoring. Whether the problem is between the chair and the screen, I don't know.

      Whether AI will be able to write secure code, I dunno, I haven't tried. You could put security into the rules, have it add security-related tests, or add an adversarial agent that tries to find exploitable flaws in the code. That could probably do more than a developer who has no time allocated for testing, much less security.
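
      To make that adversarial-agent idea a bit more concrete, here is a very rough sketch of what I mean. Everything in it is made up for illustration; in particular call_llm() is just a placeholder for whatever model API or local runtime you actually have:

          # Rough sketch of an adversarial review loop, not tied to any real product.

          def call_llm(prompt: str) -> str:
              """Hypothetical helper: send a prompt to some LLM and return its reply."""
              raise NotImplementedError("wire this up to your provider or local model")

          def adversarial_review(diff: str, max_rounds: int = 3) -> list[str]:
              """Let a 'red team' prompt hunt for exploitable flaws in a proposed change."""
              findings = []
              for _ in range(max_rounds):
                  report = call_llm(
                      "You are reviewing this code change as an attacker. "
                      "List concrete, exploitable flaws (injection, authz, unsafe deserialisation). "
                      "Reply DONE if you find nothing new.\n\n" + diff
                  )
                  if report.strip() == "DONE":
                      break
                  findings.append(report)
              return findings

      The findings could then be turned into regression tests or fed back to the coding agent - the point is just that the checking does not have to depend on a human having spare time.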

      What it does to the IT sector in the long run - who knows…

      Agreed. Things are moving so quickly, it's impossible to predict. There are lots of people on LinkedIn screaming about the obsolescence of humans or making other bold claims, but to me they are like drunk fortune tellers: tell enough fortunes and one is bound to be right.

      • wiegell@feddit.dk · 3 hours ago

        My naive hope is that local models, or maybe clusters distributed across a workplace, catch up and the cloud-based bubble bursts. I am under the impression that, at the moment, a big part of whether a tool works well or not comes down to how well all the software around the actual LLM is constructed. E.g. for discovery, being able to quickly ingest a URL and access a web index is a big strength of the cloud-based providers right now. And for coding it has a lot to do with quickly searching for and finding the relevant parts of the codebase and evaluating whether the LLM has all the information it needs to correctly perform the task.
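
        As a toy illustration of that last point, the retrieval step could be as crude as keyword scoring over the repo - real tools use embeddings, ASTs and proper indexes, and the function below is purely hypothetical:

            # Naive sketch: rank source files by overlap with the task description.
            from pathlib import Path

            def relevant_snippets(repo: str, task: str, top_n: int = 5) -> list[tuple[Path, int]]:
                keywords = {w.lower() for w in task.split() if len(w) > 3}
                scored = []
                for path in Path(repo).rglob("*.py"):
                    text = path.read_text(errors="ignore").lower()
                    score = sum(text.count(k) for k in keywords)
                    if score:
                        scored.append((path, score))
                scored.sort(key=lambda pair: pair[1], reverse=True)
                return scored[:top_n]

        The selected files (or chunks of them) then get pasted into the prompt, ideally with a check that the result actually fits in the model's context window - and that glue layer is where the cloud providers currently have a head start.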