Haha, but my only intrusive thought is that the wizard cat is a cute chonker
We knew well enough that a pandemic was a realistic risk, and we had SARS-1 and MERS as warning shots, but we were still not well prepared at the outset of COVID.
Well, here we go again! The internet was supposed to be freedom, but now it’s just a playground for Big Brother and corporate greed. Y’all see this Y2K “bug”? IT’S NOT A BUG, it’s a SCAM to reset everything and put us under CONTROL! Bill Gates, Al Gore, ALL of ‘em are in on it. Soon, you won’t be able to load a page without the Feds watching you.
Wake UP, people! You think AOL and Micro$oft care about “innovation”? LOL. It’s all about taking over! AOL’s already bought up Netscape! What’s next, paying to send an email? Think your ICQ is private? LOL, it’s government spyware!
And don’t get me started on the mods here. I see you silencing REAL posts about how the UN is behind all this. Keep banning me, I’ll be back! They wanna control speech before the big RESET. Mark my words, the internet will be a corporate mall by 2005, and y’all will pay just to read THIS.
P.S. If this post vanishes, you know why. I SEE YOU, MODS!
22/F/Catfish Island
You need to get your head checked (by a jumbo jet)
I heard there was a secret cord.
You plug it in to meet the Lord.
But you don’t really care for safety, do ya?
It goes like this, you plug it in,
And in a flash, the lights go dim,
The power’s gone,
and now it’s running through ya.
When do we Germans beat them 7:1 again? Or is Germany just not as good at murder? /s
There are tons of statistical methods for reaching reasonable conclusions without an RCT. Some things cannot be studied with an RCT at all, because the experiment is just impossible to run, so sometimes you need methods for causal identification with observational data. Here you don't even need causal identification methods: you just need to do good descriptive statistics on a large group of people to find interesting patterns. Whether this aging pattern in the mid-40s is causal or coincidental is not important at first; the pattern itself is interesting.
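To sketch what I mean by descriptive statistics here: bin people by age bracket and compare group means, no causal machinery needed. Everything below is synthetic toy data; the "biomarker" and the level shift at 45 are made up just to illustrate the approach.

```python
import statistics

# Synthetic cross-section of (age, biomarker) pairs. The jump at
# age 45 is artificial -- it stands in for whatever mid-40s
# pattern a real dataset might show.
data = [(age, 1.0 + (0.5 if age >= 45 else 0.0)) for age in range(30, 60)]

# Purely descriptive step: bin by 5-year age bracket and compare means.
brackets = {}
for age, value in data:
    brackets.setdefault(age // 5 * 5, []).append(value)

means = {start: statistics.fmean(vals) for start, vals in sorted(brackets.items())}
for start, m in means.items():
    print(f"ages {start}-{start + 4}: mean biomarker {m:.2f}")
```

A visible step between adjacent brackets is already an interesting pattern, whether or not anything causal is behind it.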
I was on holiday in the Cinque Terre in Italy with my wife a few years ago. On a rainy day we decided to take a train to Genoa and visit some museums. At the maritime museum I randomly met an Italian coworker/coauthor from my research institute in Germany, who was visiting his family in his hometown with his wife.
For a user without much technical experience, a ready-made GUI like Jan.ai, with automatic model downloads and the ability to run models via the ggml library on consumer-grade hardware (Mac M-series chips or cheap GPUs from either Nvidia or AMD), is probably a good start.
For slightly more technically proficient users, Ollama is probably a great way to start hosting your own OpenAI-like API for local models. I mostly run gemma2 or smaller Llama 3.1-class models with it.
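For the curious: Ollama exposes an OpenAI-compatible endpoint, so you can talk to a local model with a plain HTTP request. This is just a sketch assuming a default local install (`ollama serve` on port 11434); the model name and prompt are placeholders.

```python
import json
import urllib.request

# Default local Ollama endpoint (OpenAI-compatible chat completions).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for an OpenAI-style chat completion call."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body).encode("utf-8")

def ask(model: str, prompt: str) -> str:
    """Send the request to a locally running Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_chat_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]

# Usage (needs a running server and a pulled model):
# print(ask("gemma2", "Say hello in one word."))
```

Because the endpoint mimics OpenAI's API, most OpenAI client libraries also work against it by just pointing the base URL at localhost.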
Better than to be the pole vaulter whose medal ambitions were foiled by his long wang.
The market will segment away from the current tech anyway. CATL's sodium-ion cells, with comparatively low densities but also extremely low prices per kWh, will likely win the low-end market and the market for stationary storage, simply because of the much lower resource costs. The high end will go to things like this Samsung battery (or other comparable pilot products). The current technology will likely end up in a weird middle spot.
I have a ten-year-old MacBook Pro with an i7 and 16 GB of RAM. Just because this thing was a total beast when it was new does not mean it isn't old now. It works great with Ubuntu, but it's still not a good idea to run it as a server: my Raspberry Pi consumes a lot less energy for basic web-hosting tasks. I only use the old MBP to run memory-intensive Docker containers like openrouteservice, and I guess just using some hosting service for that would not be much more expensive.
Depends on what you do with it. Synthetic data seems to be really powerful if it's human-controlled and well built. Stuff like TinyStories (simple LLM-generated stories that only use the vocabulary of a three-year-old) can be used to make tiny language models produce sensible English output. My favourite newer example is the base data for AlphaProof (LLM-generated translations of proofs from math papers into the proof-validation system Lean), used to teach an LLM the basic structure of mathematical proofs. Validation in Lean itself can then be used to keep only high-quality (i.e. correct) proofs. Since AlphaProof is basically a reinforcement-learning routine that uses an LLM to generate good ideas for proof steps, shrinking the search space of possible steps, applying it yields new correct proofs that can be fed back to further improve its internal training data.
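The filtering trick works because a Lean statement is machine-checkable: a candidate proof either type-checks or is rejected, so correctness comes for free. A trivial illustration (this toy theorem is my own example, not from AlphaProof):

```lean
-- If this proof term type-checks, the statement is verified;
-- a wrong or incomplete proof would simply be rejected by Lean.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Generated proofs that fail this check are discarded, so only verified ones ever enter the training data.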
Seafile. It’s already on my phone when I want it there.
Edward Teller is just the kind of scientist you need to build civil engineering projects out of doomsday devices.
So this school was built on an ancient Pleistocene burial ground. I know that trope well enough to know what happened next