  • First, how much that is true is debatable.

    It’s actually settled case law. AI does not hold copyright any more than spell-check in a word processor does. The person using the AI tool to create the work holds the copyright.

    Second, that doesn’t matter as far as the output. No one can legally own that.

    Idealistic notions aside, this is no different from Pixar owning the RenderMan output that is Toy Story 1 through 4.


  • Nobody is asking it to (except freaks trying to get news coverage).

    It’s like compiler output - no, I didn’t write that assembly code, gcc did, but it did so based on my instructions. My instructions are copyrighted by me, and gcc’s interpretation of them is a derivative work covered by my rights in the source code.

    When a painter paints a canvas, they don’t record the “source code,” but the final work is still theirs - not the brush maker’s, the canvas maker’s, or the paint maker’s (though some pigment makers get a little squirrely about that…)


  • Yeah, context management is one big key. The “compacting conversation” hack is a good one: you can continue conversations indefinitely, but each compaction throws away some context that you thought was valuable.

    The best explanation I have heard for the current limitations is that there is a “context sweet spot” for Opus 4.5 somewhere short of 200,000 tokens. As your context window fills past 100,000 tokens, at some point you reach “optimal understanding” of whatever is in there; as you continue toward 200,000 tokens, the hallucinations start to increase. As a hack, they “compact the conversation” and throw out the less useful tokens, getting you back to the “essential core” of what you were discussing, so you can keep feeding it new prompts and getting new responses at a lower hallucination rate. The trade-off is that the lower hallucination rate comes with lower comprehension of what you said before the compacting event(s).

    Some describe an aspect of this as the “lost in the middle” phenomenon: the compacting event tends to hang on to the very beginning and very end of the context window more aggressively than the middle, so “middle of the window” content is what mostly gets dropped.
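    To make the “lost in the middle” idea concrete, here is a toy sketch of a compaction pass - my own illustration, not Anthropic’s actual algorithm - that keeps the head and tail of a conversation aggressively and thins out the middle:

    ```python
    # Toy compaction pass (illustrative only - NOT Anthropic's actual algorithm):
    # keep the head and tail of the conversation, thin out the middle.

    def compact(messages, keep_head=10, keep_tail=20, middle_stride=4):
        """Drop most of the middle of a long conversation.

        messages: list of message strings, oldest first.
        Keeps the first keep_head and last keep_tail messages intact and
        only every middle_stride-th message in between - which is exactly
        why "middle of the window" content goes missing after compaction.
        """
        if len(messages) <= keep_head + keep_tail:
            return messages
        head = messages[:keep_head]
        tail = messages[-keep_tail:]
        middle = messages[keep_head:-keep_tail][::middle_stride]
        return head + middle + tail
    ```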


  • Depends on how demanding you are about your application’s deployment and finish.

    Do you want that running on an embedded system with specific display hardware?

    Do you want that output styled a certain way?

    AI/LLM tools are getting pretty good at taking those few lines of Bash, pipes, and other tools’ concepts, translating them into a Rust, C++, Python, or what-have-you app, and running them in very specific environments. I have been shocked at how quickly and how well Claude Sonnet styled an interface for me, based on a cell-phone snapshot of a screen I gave it with the prompt “style the interface like this.”
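    For a feel of what that translation looks like, here is roughly what a one-liner like grep ERROR app.log | sort | uniq -c becomes as a standalone Python tool (my own illustration, not any model’s output):

    ```python
    # Rough Python equivalent of: grep ERROR app.log | sort | uniq -c
    import sys
    from collections import Counter

    def main(path):
        # Collect matching lines, then count and print them sorted,
        # mimicking the sort | uniq -c stage of the pipeline.
        with open(path) as f:
            matches = [line.rstrip("\n") for line in f if "ERROR" in line]
        for line, count in sorted(Counter(matches).items()):
            print(f"{count:7d} {line}")

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "app.log")
    ```

    From there, “run it on this specific embedded display, styled this specific way” is mostly a matter of telling the model about the target environment.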



  • If you outsource you could at least sue them when things go wrong.

    Most outsourcing consultants I have worked with aren’t worth the legal fees to attempt to sue.

    Plus you can own the code if a person does it.

    I’m not aware of any ownership issues with code I have developed using Claude, or any other agents. It’s still mine, all the more so because I paid Claude to write it for me, at my direction.


  • the sell is that you can save time

    How do you know when salespeople (and lawyers) are lying? Their lips are moving.

    developers are being demanded to become fractional CTOs by using LLM because they are being measured by expected productivity increases that limit time for understanding.

    That’s the kind of thing that works out in the end. Like outsourcing to Asia, etc.: it does work in some cases, and it can bring sustainable improvements to the bottom line, but nowhere near as fast, as easy, or as cheaply as the people selling it say.


  • I tried using Gemini 3 for OpenSCAD, and it couldn’t slice a solid properly to save its life; I gave up on it after about six attempts to put a 3:12-slope shed roof on four walls. The same job in Opus 4.5 got me a very nicely styled 600-square-foot floor plan with radiused 3D-concrete-printed walls, windows, doors, a shed roof with a 1’ overhang, and a Python script that translates the .scad into a good-looking .svg 2D floor plan.

    I’m sure Gemini 3 is good for other things, but Opus 4.5 makes it look infantile in 3D modeling.


  • I’ll put it this way: automated translation has been getting pretty good over the past 20 years. Sure, human translators still look down their noses at “automated translations” but, in the real world, an automated translation gets the job done well enough most of the time.

    LLMs are also pretty good at translating code, say from C++ to Rust. Not million-line code bases, but they handle the little concepts pretty well.

    On a completely different tack, I’ve been pretty happy with LLM-generated parsers. Like: I’ve got 1,000 log files here, and I want to know how many times these lines appear. You’ve got grep for that. But write me a utility that finds all occurrences of these lines, reads the timestamps, and then searches for any occurrences of these other lines within +/- 1 minute of the first ones… grep can’t really do that, but a 5-minute vibe-coded parser can.
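    A minimal sketch of that kind of parser, assuming a hypothetical log format where each line starts with an ISO timestamp:

    ```python
    # Sketch of a time-window log correlator. The log format here is an
    # assumption (lines like "2025-02-05T10:31:02 message..."); adjust the
    # regex to whatever your logs actually look like.
    import re
    from datetime import datetime, timedelta
    from pathlib import Path

    TS = re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\s+(.*)$")

    def scan(log_dir, needle_a, needle_b, window=timedelta(minutes=1)):
        """Find needle_b lines within +/- window of needle_a lines."""
        events_a, events_b = [], []
        for path in Path(log_dir).glob("*.log"):
            for line in path.read_text().splitlines():
                m = TS.match(line)
                if not m:
                    continue
                ts = datetime.fromisoformat(m.group(1))
                if needle_a in m.group(2):
                    events_a.append((ts, line))
                if needle_b in m.group(2):
                    events_b.append((ts, path.name, line))
        # Brute-force pairing is fine at this scale; sort and binary-search
        # the event lists if they get large.
        return [
            (line_a, name_b, line_b)
            for ts_a, line_a in events_a
            for ts_b, name_b, line_b in events_b
            if abs(ts_a - ts_b) <= window
        ]
    ```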



  • I don’t have time to argue with FOSS creators to get my stuff in their projects

    So much this. Over the years I have found various issues in FOSS and “done the right thing”: submitted patches formatted just so into each project’s own peculiar tracking system, according to its own peculiar style and traditions, only to have the patches rejected for all kinds of arbitrary reasons. To which I say: “Fine, I don’t really want our commercial competitors to have this anyway; I was just trying to be a good citizen in the community. I’ve done my part - you go on publishing buggy junk, that’s fine.”


  • There have been some articles published positing that AI coding tools spell the end of FOSS because everybody is just going to do stuff independently and won’t need to share with each other anymore to get things done.

    I think those articles are short-sighted and miss the real phenomenon: the FOSS community needs each other now more than ever to tame LLMs into writing stories more interesting than “See Spot run.” - and the equivalent in software projects.



  • making something quick that kind of works is nice… but why even do so in the first place if it’s already out there, maybe maintained but at least tested?

    In a sense, this is what LLMs are doing for you: regurgitating stuff that’s already out there. But they are “bright” enough to remix the various bits into custom solutions. So there might already be an NWS API access example, and a Waveshare display example, and so on, but there’s no specific example that codes up a local weather display for the time period and parameters you want to see (like temperature and precipitation every 15 minutes for the next 12 hours at a specific location) on the particular display you have. Oh, and would you rather build that in C++ instead of Python? Yeah, LLMs are actually pretty good at remixing little stuff like that into things you’re not going to find ready-made examples of to your spec.
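    As an illustration of the remix, here is a minimal sketch that pulls an hourly forecast from the NWS API and reduces it to the next 12 hours for a small display. The endpoint layout and field names reflect my reading of api.weather.gov (which serves hourly, not 15-minute, periods); verify them against the live responses:

    ```python
    # Minimal NWS forecast fetch for a local weather display.
    # Endpoint layout and field names are my best understanding of
    # api.weather.gov - check against the actual API responses.
    import requests

    def hourly_forecast(lat, lon, hours=12):
        headers = {"User-Agent": "weather-display-demo"}  # NWS requests a UA
        point = requests.get(
            f"https://api.weather.gov/points/{lat},{lon}", headers=headers
        ).json()
        hourly_url = point["properties"]["forecastHourly"]
        periods = requests.get(hourly_url, headers=headers).json()[
            "properties"]["periods"]
        return [
            (p["startTime"],
             p["temperature"],
             p["probabilityOfPrecipitation"]["value"] or 0)
            for p in periods[:hours]
        ]

    for when, temp, precip in hourly_forecast(38.89, -77.03):
        print(f"{when}  {temp}F  {precip}% precip")
    ```

    Swapping the print loop for a Waveshare draw call - or rewriting the whole thing in C++ - is exactly the kind of remix these models handle well.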



  • As an experiment I asked Claude to manage my git commits: it wrote the messages, kept a log, archived excess documentation, and worked really well for about two weeks. Then, as the project got larger, the commit process took longer and longer to execute. I finally pulled the plug when the automated commit process - which had performed flawlessly for dozens of commits and archives - irretrievably lost a batch of work: it messed up the archive step and deleted the work without archiving it first, and it didn’t commit it either.

    AI/LLM workflows are non-deterministic, which means they make mistakes. If you want something reliable, scalable, and repeatable, have the AI write you code that does the job deterministically as a tool, not as a workflow. Of course, deterministic tools can’t do things like summarize the content of a commit.
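    A minimal sketch of the deterministic tool, with hypothetical file names and layout - the key design point is that it always archives before it deletes or commits anything:

    ```python
    # Deterministic archive-then-commit tool (file names are hypothetical).
    # Unlike the LLM workflow, this does the same thing every single time.
    import subprocess
    from datetime import datetime, timezone
    from pathlib import Path

    ARCHIVE = Path("docs/archive")

    def archive_then_commit(message, extra_docs=()):
        """Archive docs first, commit second - never the other way around."""
        ARCHIVE.mkdir(parents=True, exist_ok=True)
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        for doc in map(Path, extra_docs):
            # Move into the archive - never delete outright.
            doc.rename(ARCHIVE / f"{stamp}-{doc.name}")
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", message], check=True)

    # Example: archive_then_commit("checkpoint: archive stale notes", ["NOTES.md"])
    ```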




  • This stuff ALWAYS ends up destroying the world on TV.

    TV is also full of infinite free energy sources. In the real world, warp drive may be possible; you just need to annihilate the mass of Jupiter with an equivalent mass of antimatter to get the energy necessary to create a warp bubble that moves a small ship from the orbit of Pluto to a point a few light-years away. On TV, they do it every week.


  • your team of AIs keeps running circles

    Depending on your team of human developers (and managers), they will do the same thing. Granted, most LLMs have a rather extreme sycophancy problem, but so do plenty of humans.

    We haven’t gotten yet to AIs who will tell you that what you ask is impossible.

    If it’s a problem like under- or over-constrained geometry or equations, they (the better ones) will tell you. For difficult programming tasks I have definitely had the AIs bark up all the wrong trees trying to fix something until I gave them specific direction on where to look for a fix (very much like my experiences with some human developers over the years).

    I had a specific task I was developing with one model; it was a hard problem, but I was making progress and could see the solution was near. Then I switched to a different model, which came back and told me “this is impossible, you’re doing it wrong, you must give up this approach” - right up until I showed it the results I had achieved to date with the other model. Then that same model that had told me it was impossible helped me finish the job completely and correctly. A lot like people.