• 0 Posts
  • 323 Comments
Joined 2 years ago
Cake day: June 21st, 2023





  • Is this your first time here?

    Your account is brand new and you’ve already made three posts related to JPlus in this community in one day. Please tell me you’re joking with this one.

    This post is a GitHub link to the project. Cool, I love seeing new projects, especially when the goal is to make it harder to write buggy code.

    The other post is an article that immediately links to the GitHub. The GitHub contains a link at the top to, from what I can tell, the exact same article. Both the article and the GitHub README explain what JPlus is and how to use it.

    Why is this two posts when they contain the same information and link to each other directly at the top?









  • The conclusion of this experiment is objectively wrong when generalized. At work, to my disappointment, we have been trying for years to make this work, and it has been failure after failure (and I wish we’d just stop, but eventually we moved on to more useful stuff like building tools adjacent to the problem, which is honestly the only reason I stuck around).

    There are a couple of reasons why this approach cannot succeed:

    1. The outputs of LLMs are nondeterministic. Most problems require determinism. For example, REST API standards require idempotency for some kinds of requests, and an LLM without a fixed seed and a temperature of 0 will return different responses at least some of the time (see the first sketch after this list).
    2. Most real-world problems are not simple input-output machines. When calling, say, an API to post a message to Lemmy, that endpoint does a lot of work. It needs to store the message in the database, federate the message, and verify that the message is safe. It also needs to validate the user’s credentials before all of this, and it needs to record telemetry for observability purposes. LLMs are not able to do all this. They might, if you’re really lucky, be able to generate code that does this, but a single LLM call can’t do it by itself (see the second sketch after this list).
    3. Some real-world problems operate on unbounded input sizes. Context windows are bounded and, as currently designed, cannot handle unbounded inputs. See signal processing for an example of this, and for an example of a problem an LLM cannot solve because it cannot even receive the input.
    4. LLM outputs cannot be deterministically improved. You can tweak prompts and so on, but the output will not monotonically improve when you do; improving one result often means sacrificing another.
    5. The kinds of models you want to run are not in your control. Using Claude? OK, Anthropic updated the model and now all your outputs changed and you need to update your prompts again. This has fucked us over many times.
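
    To make point 1 concrete, here’s a minimal sketch (toy logits, numpy, no particular model or API) of why temperature-0 greedy decoding is repeatable while sampled decoding is not:

    ```python
    import numpy as np

    def sample_token(logits, temperature, rng):
        """Pick the next-token id from raw logits."""
        if temperature == 0.0:
            # Greedy decoding: always the single highest-scoring token -> repeatable.
            return int(np.argmax(logits))
        # Softmax with temperature, then sample -> repeated calls can differ.
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    rng = np.random.default_rng()   # no fixed seed, like most hosted APIs
    logits = [2.0, 1.9, 0.5]        # toy scores for three candidate tokens

    print([sample_token(logits, 0.0, rng) for _ in range(5)])  # always [0, 0, 0, 0, 0]
    print([sample_token(logits, 1.0, rng) for _ in range(5)])  # varies run to run
    ```

    And for point 2, an equally toy sketch of what a “post a message” endpoint has to do. Every function name here is a made-up stand-in, not Lemmy’s actual code; the point is that the endpoint is a pipeline of side effects, not a pure text-in/text-out function a single LLM call could replace:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Request:
        token: str
        body: str

    # Hypothetical stand-ins for the real work the server does.
    def authenticate(token):           return "user-1" if token else None
    def check_content_safety(body):    return "spam" not in body
    def store_message(user, body):     return 42      # pretend database insert
    def federate(message_id):          pass           # pretend ActivityPub push
    def record_telemetry(event, user): pass           # pretend metrics emit

    def post_message(req: Request):
        user = authenticate(req.token)                # credentials checked first
        if user is None or not check_content_safety(req.body):
            return {"error": "rejected"}
        message_id = store_message(user, req.body)    # durable write to the database
        federate(message_id)                          # side effects on other servers
        record_telemetry("message_posted", user)      # observability
        return {"id": message_id}

    print(post_message(Request(token="abc", body="hello fediverse")))
    ```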

    The list goes on. My suggestion? Just don’t. You’ll spend less time implementing the thing yourself than trying to get an LLM to do it. You’ll save on operating expenses. You’ll be less of an asshole.





  • What they’re doing should be outright illegal in most countries; it’s equivalent to changing a contract unilaterally after both parties signed it.

    Update to [COMPANY NAME]'s Policies

    Yes, this should be illegal, but it’s already common practice. I’m just hoping that enough of this will eventually get people to stop buying these products, and that we’ll start seeing some real legislation against it in some countries.

    Additionally, I’d strongly advise against buying any sort of “smart” device, unless you’re pretty sure the benefits of connecting your toaster to the internet outweigh all the risks.

    This should be obvious at this point. “Smart” just means “internet-connected”, and we already know what happens to every device that connects to a remote server during regular operation: telemetry (and not the nice debugging kind but the “what do you use” kind), and advertisements.

    Including corporations and crackers

    The “crackers” part of this confuses me. Samsung is a Korean company. The chairman’s name is Lee Jae-yong (이재용). Samsung NA’s CEO is Yoonie Joung. Maybe I’m misreading this?