I think it was Mandrake Linux for me.
It no longer exists though. …I guess I’m old.
The packager should always “explicitly require” the dependencies of a Nix package… it’s not like it’s a choice: if there are missing dependencies, that’s a bug.
If a package doesn’t declare its dependencies properly, then it might not run properly on NixOS, since there are no “system libraries” in that OS other than the ones that were installed from Nix packages.
And one of its advantages over AppImages is that instead of bundling everything together, causing redundancies and inefficient use of resources, with Nix you actually have shared libraries (not the system ones, but Nix dependencies). If you have multiple AppImages that bundle the same libraries, you can end up with the exact same version of a library installed multiple times (or loaded in memory, when running). AppImages don’t scale: you would be wasting a lot of resources if you made heavy use of them, whereas with Nix you can run an entire OS built from Nix packages.
Huh? As far as I know, it has its own libraries and dependency system. What do you mean?
The nice thing about Nix/Guix is that each version of a library only needs to be installed once, and it won’t really be “bundled” with the app itself. So it would be a lot easier to hunt down the packages that depend on a bad library.
Flatpak still depends on runtimes though; I have a few different runtimes I had to install just because of one or two flatpaks that required them (for example, I have both the gnome and kde flatpak runtimes despite not running either of those desktop environments)… and they can depend on specific versions of runtimes too! I remember one time flatpak recommended I uninstall a flatpak program I had because it depended on a deprecated runtime that was no longer supported.
Also, some flatpaks can depend on another flatpak, like how for Godot they are preparing a “parent” flatpak (I don’t remember the terminology) that Godot games can depend on, in order to reduce redundancies when you have multiple Godot games installed.
Because of those things, you are still likely to require a flatpak remote configured and an internet connection when you install a flatpak. It’s not really a fully self contained thing.
AppImages are more self-contained… but even those might make assumptions about which libraries the system has, which makes them not as universal as they might seem. That, or the file needs to be really big, unnecessarily so. Usually it’s some combination or compromise between both problems, at the discretion of the dev doing the packaging.
The advantage with Nix is that it’s more efficient with the user’s disk space (because it makes sure you don’t get the exact same version of a library installed twice), while making it impossible to have a dependency conflict regardless of how old or new the thing you wanna install is (which is something the package manager of your typical distro can’t do).
But C syntax clearly hints at `int *p` being the expected format. Otherwise you would only need to do `int* p, q` to declare two pointers… however, doing that only declares `p` as a pointer. You are actually required to type `*` in front of each variable name intended to hold a pointer in the declaration: `int *p, *q;`
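In case it’s not obvious, here’s a minimal compilable sketch of that pitfall (the variable names are just for illustration):

```c
#include <stdio.h>

int main(void) {
    int x = 42;

    int* p, q;   /* declares p as "pointer to int", but q as a plain int */
    int *r, *s;  /* the * has to be repeated to get two pointers */

    p = &x;
    q = x;       /* q is just an int, it cannot hold an address */
    r = &x;
    s = &x;

    printf("%d %d %d %d\n", *p, q, *r, *s);  /* prints: 42 42 42 42 */
    return 0;
}
```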
Yes… how is “reducing exclamation marks” a good thing when you do it by adding a `'` (not to be confused with `` ` ``, `´`, `‘` or `’` …which are all different characters)?

Does this rely on the assumption that everyone uses a US QWERTY keyboard where `!` happens to be slightly more inconvenient than typing `'`?
I don’t think “the development” is what is claimed to be at stake here.
OP is not talking about the software, they’re talking about the content. And the content model from Mastodon is not interchangeable with the one from Lemmy, Pixelfed, etc.; they serve different purposes and have different models. In fact, that’s the main interoperability barrier between them.
It’s like saying that if most people use Gmail for email, you’ll switch from email to audio calls to avoid communicating with Google’s service. As if real-time audio were the same thing as sending a message (or as if Google were unable to add compatibility with that call service too, if they wanted).
One thing you could argue is, instead of switching services, switching to an instance that does defederate if you don’t want Threads content. But that same argument can be made just as well toward those wanting Threads federation…
But I don’t think the point is what the individual wants (if that were the case, just use the option to block Threads content for your user, without defederating); the point is what’s best for the fediverse. I think people are afraid that something similar to what happened with “Google Talk” and their embrace of XMPP will repeat.
Personally, I think there’s no reason to jump the gun this early… this whole post is based on a lot of weak assumptions. I don’t believe that Threads content would overwhelm the feeds, and if that were to happen then the software could be tweaked so the contribution of each instance to the feeds can be weighted and made more customizable, for example.
Ideally, it would be a P2P protocol where the main seeder is either the content creator directly, or a service paid by the content creator (who is funded by their audience and/or sponsors).
I believe there are many podcasts that work somewhat like that (minus the P2P part; they just use RSS). Some hosting services have features to insert ads into the audio podcast being hosted… so content creators can still choose to do that if they want, but the advantage is that there isn’t a monopoly held by a single hosting provider, and you can access the podcasts from many different podcast apps without needing to rely on a specific website and company that decides how you can watch it.
be nice
What niceness level exactly?
The most nice I can be in my system is -20… but being too nice to one process leaves others with less time and resources in their life.
I mean, it would technically be possible to build a computer out of organic, living biological tissue. It wouldn’t be very practical, but it’s technically possible.
I just don’t think it would be very reasonable to consider that the one thing making it intelligent is being made of proteins and living cells instead of silicates and diodes. I’d argue that such a claim would, in itself, be a strong claim too.
Note that “real world truth” is something you can never accurately map with just your senses.
No model of the “real world” is accurate, and not everyone maps the “real world truth” they personally experience through their senses in the same way… or even necessarily in a way that’s truly “correct”, since the senses are often deceiving.
A person who is blind experiences the “real world truth” by mapping it to a different set of models than someone who has additional visual information to mix into that model.
However, that doesn’t mean that the blind person can “never understand” the “real world truth” …it just means that the extent to which they experience that truth is different, since they need to rely on other senses to form their model.
Of course, the more different the senses and experiences between two intelligent beings, the harder it will be for them to communicate with each other in a way that lets them truly empathize. At the end of the day, when we say we “understand” someone, what we mean is that we have found enough evidence to hold the belief that some aspects of our models are similar enough. It doesn’t really mean that what we modeled is truly accurate, nor that if we didn’t understand them then our model (or theirs) is somehow invalid. Sometimes people are both technically referring to the same “real world truth”; they simply don’t understand each other and focus on different aspects/perceptions of it.
Someone (or something) not understanding an idea you hold doesn’t mean that they (or you) aren’t intelligent. It just means you both perceive/model reality in different ways.
Step 1. Analyze the possible consequence/event that you find undesirable.
Step 2. Determine whether there’s something you can do to prevent it: if there is, go to step 3; if there isn’t, go to step 4.
Step 3. Do it, do that thing that you believe can prevent it. And after you’ve done it, go back to step 2 and reevaluate if there’s something else.
Step 4. Since there’s nothing else you can do to prevent it, accept the fact that this consequence might happen and adapt to it… you already did all you could given the circumstances and your current state/ability, and you can’t do anything more about it, so why worry? Just accept it, and try to make it less “undesirable”.
Step 5. Wait. Entertain yourself some other way… you did your part.
Step 6. Either the event doesn’t happen, or it happens but you already prepared to accept the consequences.
Step 7. Analyze what (not) happened and how it happened (or didn’t). Try to understand it better so in the future you can better predict / adapt under similar circumstances, and go back to step 1.
sea, sir, its, if, all, ball, car, sent
The AI can only judge by having a neural network trained on what’s a human and what’s an AI (and btw, for that training you need humans)… which means you can break that test by making an AI that also accesses that same neural network and uses it to self-test its responses before outputting them, providing only exactly the kind of output the other AI would give a “human” verdict on.
So I don’t think that would work very well; it’ll just be a cat-and-mouse race between the AIs.
It could still be Bayesian reasoning, just a much more complex kind, underpinned by a lot of preconceptions (which could have also been acquired in a Bayesian way).
Even if the result is random, a heavily pre-trained Bayesian network that has seen many puzzles or tests before that do follow non-random patterns might expect a non-random pattern… so those people might have learned not to expect true randomness, since most things aren’t random.
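To make that concrete, here’s a toy Bayes-rule update in C; the prior and likelihood numbers are made up purely for illustration:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical prior: after seeing lots of patterned puzzles, the agent
     * strongly expects any new test to follow a non-random pattern. */
    double prior_pattern = 0.9;

    /* Made-up likelihoods of the observed answers under each hypothesis. */
    double lik_given_pattern = 0.2;
    double lik_given_random  = 0.5;

    /* Bayes' rule: P(pattern | obs) = P(obs | pattern) * P(pattern) / P(obs) */
    double evidence = lik_given_pattern * prior_pattern
                    + lik_given_random * (1.0 - prior_pattern);
    double posterior_pattern = lik_given_pattern * prior_pattern / evidence;

    /* Even though the evidence favors randomness, the strong prior keeps the
     * "there must be a pattern" belief high (~0.78 here). */
    printf("P(pattern | observation) = %.2f\n", posterior_pattern);
    return 0;
}
```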
Yes… the Chinese room experiment misses the point, because the Turing test was never really about figuring out whether or not an algorithm has “consciousness” (what is that, even?)… but about determining whether an algorithm can exhibit intelligent behavior that’s equivalent to/indistinguishable from a human’s.
The Chinese room is useless because the only thing it proves is that people don’t know what consciousness is, or what they’re even trying to test.
A test that didn’t require a human could theoretically be run by the machine itself preemptively and solved easily.
I can’t imagine how you would test this in a way that wouldn’t require a human.
And please, get all countries to actually start properly accepting ISO 8601 format for dates as a mandatory universal standard…
Obligatory reference: https://xkcd.com/1179/
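And in case anyone wonders what that format looks like in code, here’s a minimal sketch (assuming a standard C environment with `<time.h>`):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    char buf[32];
    time_t now = time(NULL);
    struct tm *local = localtime(&now);

    /* %Y-%m-%d is the ISO 8601 calendar date: year-month-day, zero-padded,
     * e.g. "2024-07-01" -- unambiguous, and it sorts correctly as plain text. */
    if (local != NULL && strftime(buf, sizeof buf, "%Y-%m-%d", local) > 0) {
        printf("%s\n", buf);
    }
    return 0;
}
```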