“Significantly”? Going by the comparison Sony felt was large enough to brag about, there’s hardly a noticeable difference
Yo whatup
I wouldn’t be shocked if it was astroturfing
Everyone from the lemmy.blahaj.zone instance that I’ve interacted with or seen has been a troll. Those guys are super weirdos; idk what their deal is. It’s baffling given what they claim to stand for
Wow, that looks like a nightmare
There are a couple of projects with native blockchain art, but as you might expect it’s low-resolution pixel art, since blockchains are prohibitively expensive to use as storage
Brave is forked from Chromium, so hypothetically they could maintain Manifest V2, but they’d need their own extension store as they currently rely on Google’s
Yep that’s why I refuse to use standard libraries. It just makes my code too complicated…
Yup. The moron even admitted it too!
Elon pretended to lean left. He wasn’t and never has been left leaning. He’s been the same old guy this entire time; it’s just becoming more and more difficult to pretend otherwise.
Counterintuitive, but more instructions are usually better. They enable you (but let’s be honest, the compiler) to be much more specific, which usually has positive performance implications for minimal if any increase in binary size. Take for example SIMD, which is hyper-specific math operations on large chunks of data. These instructions are extremely specialized, but when properly utilized they bring huge performance improvements.
I take it you haven’t had to go through an AI chatbot for support before, huh
We do know: we created them. The AI people are currently freaking out about does a single thing: predict text. You can think of LLMs as a hyper-advanced autocorrect. The main thing that’s exciting is that they produce text that looks as if a human wrote it. That’s all. They don’t have any memory or any persistence whatsoever. That’s why we have to feed it all the previous text (the context) in a “conversation” in order for it to work as convincingly as it does. It cannot and does not remember what you say
Every time I hear somebody mention that theory I remember that most people believe Elon isn’t a massive moron
ChatGPT can output an article in a much shorter time than it’d take me to write one, but people would probably like mine more
It’s almost always a risk to other people. I can’t think of a vaccine that is for a non-communicable disease.
Tetanus? At least I didn’t think that one was contagious
It’s the same impact: the same amount of microplastic, it just takes longer to get there. If I give you the choice of 100 beans today or 1 bean a day for 100 days, it’s still 100 beans either way. The total is identical; only the timeline changes.
Semantic whitespace is awful because whitespace (something you can’t actually see) has meaning in how the program runs. Braces {
}
for scopes give you the ability to tell at a glance where a scope ends. Whitespace doesn’t allow for that. Especially, especially when you can accidentally exit a scope (an accidental dedent in Python) and it’s not actually an error (you just fall back to Python’s global scope). Yeah, formatters and linters make this less of an issue, but it still sucks… Languages with legible symbols for scoping are significantly easier to reason about; see the end
symbols in Lua.
It erases the type of what you’re pointing at. All you have is a memory location, in contrast to int*, which is a memory location of an int.
Yes it’s real