That sounds right.
Though really it's just so much power on the rear wheels that does it. No suspension will save you from the torque breaking the wheels loose when you gun it out of a turn.
IIRC Mustangs were specifically notorious for this due to their solid rear axle suspensions, though even that is probably down to driver error.
The “inexperienced driver” thing is true though, especially with the higher powered ones.


That serves the purpose too. It's harder to pin Plex as an "illegal distribution service" when you have to pay for access. Neither the streamer nor the "distributor" can be very anonymous, which makes large-scale sharing impractical.
On the other hand, the more money they squeeze out, the more they risk appearing as if they “make money from piracy,” which is exactly how you get the MPAA’s attention.
What they should really have is bypass mufflers, where that loudness only comes at high RPM.
Sadly, most do not…
Huh? Racing Corvettes have good records against Ferraris and Porsches, going decades back:

Here's a street Corvette keeping pace with a tuned Ariel Atom, a horrifically fast track car, around the Nürburgring: https://www.youtube.com/watch?v=oYZs7Ta2SSk
The "Corvette meme" is that they'll kill you in a crash, not that they're slow around corners. They are not slow around corners.


You may (half) joke, but MPAA attention on Jellyfin would suck.


Playing devil’s advocate, I understand one point of pressure: Plex doesn’t want to be perceived as a “piracy app.”
See: Kodi. https://kodi.expert/kodi-news/mpaa-warns-increasing-kodi-abuse-poses-greater-video-piracy-risk/
To be blunt, that’s a huge chunk of their userbase. And they run the risk of being legally pounded to dust once that image takes hold.
So how do they avoid that? Add a bunch of other stuff, for plausible deniability. And it seems to have worked, as the anti-piracy gods haven’t singled them out like they have past software projects.
To be clear, I’m not excusing Plex. But I can sympathize.


For all the criticism of AI, this is the one that’s massively overstated.
On my PC, the task energy of a casual diffusion attempt (say, a dozen-plus images in a few batches) on a Flux-tier model is 300 W × 240 seconds.
That's 72 kilojoules.
…That’s less than microwaving leftovers, or a few folks browsing this Lemmy thread on laptops.
And cloud models like Nano Banana are more efficient than that, batching the heck out of generations on wider, more modern hardware, and more modern architectures, than my 3090 from 2020.
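For anyone who wants to sanity-check it, here's the back-of-envelope math. The 300 W and 240 s figures are my own ballpark numbers from above, and the 1000 W microwave is an assumed typical rating:

```python
# Rough task-energy math for a local diffusion run, using the
# ballpark numbers from the comment above (~300 W draw for ~240 s).
power_w = 300          # approximate GPU draw under load (assumed)
duration_s = 240       # total generation time for the batches (assumed)

energy_j = power_w * duration_s      # joules = watts * seconds
energy_kwh = energy_j / 3.6e6        # 1 kWh = 3.6 MJ

print(f"{energy_j / 1000:.0f} kJ")   # 72 kJ
print(f"{energy_kwh:.3f} kWh")       # 0.020 kWh

# For scale: a ~1000 W microwave running 3 minutes uses 180 kJ,
# more than double the whole batch of images.
microwave_j = 1000 * 180
print(f"{microwave_j / 1000:.0f} kJ")  # 180 kJ
```

At typical residential rates, 0.02 kWh is a fraction of a cent of electricity.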
Look. There are a million reasons corporate AI is crap.
But its power consumption is a meme perpetuated by tech bros who want to convince the world that scaling infinitely is the only way to advance it. That's a lie told to attract money, and it's not the way research is headed.
Yes, they are building too many data centers, and yes, some in awful places, but that's part of the con. They don't really need them, and making a few images is not burning away anyone's water.
+1
For stuff like editing massive files or huge folders, the least stuttery, fastest IDE for me is… VS Code. JetBrains (last I tried it) is awful.
VS Code may not use 1 MB of RAM or idle dead asleep, but it utilizes the CPU/GPU efficiently.
Now, extensions are the caveat, like any app that supports extensions. Those can bog it down real quick.


It’ll literally be a criminal hub, with a bunch of anonymized posts joking about dodging corpos. Probably.
And owls. Still owls.
FBI? No, I am not opening up.


It’s misleading.
IBM is very much into AI, as a modest, legally trained, economical tool. See: https://huggingface.co/ibm-granite
But this is the CEO saying “We aren’t drinking the Kool-Aid.” It’s shockingly reasonable.


That’s interesting.
I dunno if that’s any better. Compiler development is hard, and expensive.
I dunno what issue they have with LLVM, but it would have to be massive to justify building around it and then switching away to re-invent it.


…The same Zig that ditched LLVM, to make their own compiler from scratch?
This is good. But also, this is sort of in character for Zig.


That’s exactly what I said, and meant: I miss my jailbroken iPhone; I wish I could jailbreak mine now. But I can’t. And my 6 would be basically unusable without App Store support now.
That’s not why I’m replying though… I’m curious; what’s with insults like “eat me” over interpretations of someone else’s comment?


They’re pretty bad outside of English-Chinese actually.
Voice-to-voice is all relatively new, and it suffers if it's not fully integrated (e.g., feeding a voice model plain text, so it loses the original tone, emotion, cadence, and such).
And… honestly, the only models I can think of that’d be good at this are Chinese. Or Japanese finetunes of Chinese models. Amazon certainly has some stupid policy where they aren’t allowed to use them (even with zero security risk since they’re open weights).


Honestly, even a dirt-cheap language model (with sound input) would tell you it's garbage. It could itemize the problematic parts of the sub.
But they didn't use that, because this isn't machine learning. It's Tech Bro AI.


All true, yep.
Still, the clocking advantage is there. Stuff like the N100 also optimizes for lower costs, which means higher clocks on smaller silicon. That’s even more dramatic for repurposed laptop hardware, which is much more heavily optimized for its idle state.


First thing, Lemmy is in need of content and likes recruiting. Hence you got 315 replies, heh.
Basically, if you aren’t a bigot, you don’t have to worry about what you say. You can be politically incorrect in any direction and not get a global/shadowban from the Fediverse.
Each instance has its own flavor and etiquette.
Oh yes. Vipers are death traps, albeit rarer ones.
It's also a very successful race car, apparently because the gigantic displacement is advantageous under league detuning (like air restrictors).