

I dunno if I’d consider this in “dad joke” territory.
Per the sidebar:
Clean jokes only please. If you cannot tell this joke to a 5-year-old, you probably shouldn’t post it here. Please post edgier jokes to !unclejokes@lemmy.world
If you want a view to count you need to use an official client
I would think that parsing the website would count the same as any browser-based page load, since parsing the website requires first fetching the page (probably using something like wget or curl under the hood). I dunno if non-logged-in page loads are generally counted toward the overall view count on a given video, though.
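As a rough sketch (the URL is a placeholder, not a real video page): the fetch a parser performs is just an HTTP GET, same as a browser's initial page request. Whether that request increments a view counter is decided entirely server-side.

```shell
# A scraper's page load is just an HTTP GET, like a browser's initial request.
# Whether the server counts it as a "view" is decided server-side, so an
# anonymous fetch like this may or may not bump the counter.
curl -s -o /dev/null -w "status: %{http_code}\n" "https://example.com/watch?v=placeholder"
```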
I’ll share my input, although it’s primarily speculation and a smidge of deductive reasoning.
Given these three particular pieces of information:
My first instinct is that the issue may be upstream (non-local) network congestion, since it appears that connections are slowing to a crawl rather than dropping packets. Ping requests don’t seem to suffer, but they’re a lot smaller than loading content via CMMS, Reddit, etc. You mentioned it could happen twice or more in a 10 hour shift, or sometimes not at all; network congestion being highly variable could explain this.
Are you in a remote area? If so, there may not be much nearby infrastructure (routers) to handle the big spikes in traffic when everyone in the immediate area clocks in to work at 9am, or gets back from lunch around 1pm, etc. If that’s the case, the local routers would get overwhelmed regularly by congestion and packet delivery times would suffer. This could also happen in more densely populated areas, depending on what the local infrastructure looks like.
Though I’m not entirely sure how to explain the speed tests not suffering if congestion is the issue, unless the particular routes to the geographically close test servers aren’t congested (because large numbers of people are hitting real services, not the speed test servers, during those congestion windows).
The fact that some live services like Google & Facebook load while others like Reddit and Lemmy do not could be explained by the difference in those services’ respective high-availability (HA) solutions. Facebook and Google don’t typically drop below 99.95%-ish uptime because they scale their server infrastructure very aggressively to meet demand. But even huge services like Reddit have considerably more downtime than Facebook or Google (Reddit seems to have major outages several times a year, while Google and Facebook do not). Some upstream services having more servers to handle more requests more quickly could account for the inconsistent ability to load websites during this congestion.
I’m not sure the best way to test this hypothesis, though. Given how much troubleshooting and information gathering you’ve already done, this is a tricky one.
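One rough way to gather evidence (the hostname is just an example; point it at whichever site slows down for you): run a quick check like this during a slowdown and again during normal hours, and compare. If the full fetch time balloons while the single-ping latency stays flat, that’s at least consistent with congestion hitting larger transfers but not tiny ICMP packets.

```shell
# Hypothetical spot check: log a timestamp, one ping, and the total time of a
# full HTTPS fetch side by side. Re-run during a slowdown and compare.
HOST="reddit.com"   # example target; substitute a site that slows down for you
echo "=== $(date) ==="
ping -c 1 "$HOST" 2>&1 | tail -n 1
curl -s -o /dev/null -m 15 -w "full fetch: %{time_total}s\n" "https://$HOST/" || echo "fetch failed"
```

Stick it in a cron job with output redirected to a log file and you’d have timestamps to correlate against the 9am/1pm pattern.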
A wojak image can’t really refute anything; it’s just depicting the original poster as being a seething dumbass (an ad hominem response). If the poster accompanies said wojak image with a counterargument, that could refute something, but most wojak responses don’t bother with actually making any kind of salient point. Which is why it’s such a popular format: it’s low effort.
Edit: Realizing the error in my argument, I have included the following addendum:
I really like the Interactive Relationship Graph on your site. Reminds me of when I used to work with graph databases and could visualize all the information in the database as a handy graph of nodes and relationships.
I guess I’d probably pick “money” because I am generally risk-averse. I might already have incredible luck, but I don’t gamble, so how would I know? I’ve been incredibly lucky in non-gambling endeavours (my health, my family, my career, etc), but if asked to pick between being stochastically lucky and being guaranteed a certain level of comfort (and thus being able to provide for my family) for the rest of my life, I’m gonna pick the latter.
I took a cursory glance through the source code (for the Firefox version, at least), and I’m not seeing any calls to the gitflic.ru URL outside of the update functions (there appear to be two different places where these might be triggered) and one function for importing custom sites:
// Import custom sites from local/online
function import_url_options(e, online) {
  let url = '/custom/sites_custom.json';
  if (online)
    url = 'https://gitflic.ru/project/magnolia1234/bpc_updates/blob/raw?file=sites_custom.json' + '&rel=' + randomInt(100000);
  try {
    fetch(url)
      .then(response => {
        if (response.ok) {
          response.text().then(result => {
            import_json(result);
          })
        }
      });
  } catch (err) {
    console.log(err);
  }
}
I noticed that in manifest.json there is an optional permissions array:
"optional_permissions": [ "*://*/*" ],
Which seems to grant the extension access to all URLs, so maybe that’s why the HTTP request is able to fire on any given website rather than just the ones explicitly defined in the regular permissions array. Though this is speculation on my part; I’ve only ever written one or two complex Firefox extensions. I’m not sure if the “optional permissions” array can be declined upon installation (or configured in the extension settings after installation); perhaps access to the wildcard URL can be revoked so that this update call isn’t occurring constantly.
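As a sketch of what revoking it might look like (assuming a Firefox WebExtension context; I haven’t tested this against this particular extension): `browser.permissions.remove()` is the standard WebExtensions call for dropping an optional permission at runtime. I’ve parameterized the permissions API here just to keep the function self-contained.

```javascript
// Sketch: revoke the optional all-URLs host permission so the extension can
// no longer fire requests on arbitrary sites. The permissions API object is
// passed in as a parameter; inside the extension you'd pass browser.permissions.
async function revokeAllUrls(permissionsApi) {
  // "*://*/*" matches the optional_permissions entry in manifest.json
  const removed = await permissionsApi.remove({ origins: ["*://*/*"] });
  return removed; // true if the permission was actually revoked
}
```

In the extension itself this would be `revokeAllUrls(browser.permissions)`; whether the extension keeps working usefully without that permission is another question.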
All looks okay to me, but this was a very quick audit.
I’ve had great results with various refurbished Dell Latitudes from eBay over the years. I have a stack of about 5 or 6 of 'em and they’ve all run many mainstream Linux distros with fantastic out-of-the-box support. I pass 'em out to members of the household whenever a laptop is needed and they’ll usually get the job done.
I’d just type in “Dell Latitude” on eBay and filter by price and such. I suspect any model with an i5 and 8GB RAM oughta be fine for light programming work. I’ve found sellers with high ratings (like 97% or higher) and thousands of sales are pretty reliable (and tend to have return policies in case you get a lemon). Just test all the hardware (webcam, microphone, headphone jack, USB ports, ethernet, etc) as soon as you get it.
I’ve saved a lot of money over the years buying secondhand, and these machines have been running without a hiccup for years of casual use.
I haven’t had to deal with this specific kind of use case before (accessing the local Jellyfin service while the laptop is connected to a VPN), but after some cursory research, one of these approaches may work for you:
Easy Option (only available on some VPN software):
There may be an option in your VPN client that lets you access local network addresses like your Jellyfin server. Check your settings and see if there are any options like “allow local network traffic” and then try opening up your Jellyfin server in a browser (e.g.: http://192.168.1.100:8096/)
Less Easy Option:
If your VPN client doesn’t have an option for allowing local traffic, you can open up Terminal on your MacBook and run a command like this:
sudo route add -net 192.168.1.0/24 192.168.1.1
Where 192.168.1.0/24 is the local network you want to connect to (where the Jellyfin server is located), and 192.168.1.1 is your local gateway (probably your wifi router’s address). Change both of these depending on how your network’s local IPs are formatted.
This should update your routing table so local network addresses bypass the VPN. One caveat: routes added with route add don’t survive a reboot on macOS, so you’ll need to re-run the command after a restart (or script it to run at login).
Hope this helps.
That’s super neat. I still have my physical copy of this game from back in the day. I’ve booted it up in a Windows 98 VM before, but that’s way more hassle than just opening it in Firefox.
Not quite. You understood that the blue character was correct. You just missed that the joke was that yellow was wrong.
I think the point of the joke is that the yellow character has fundamentally misunderstood the statement provided by the blue character. They erroneously interpret, “All squares have four sides” as “Squares are the only shape with four sides” because they are not good at parsing rigid, scientific statements (since most people are not particularly scientifically literate).
I think OP’s comics are meant to reflect the frustrations of conversing with people who simply don’t understand what science/research/studies/etc actually say.
Oh boy, I’ve been collecting these for years. Finally my time to shine!
DM or reply if you want more stupid pictures.
I was being sarcastic, lol. It’s a play on the “you have to have a very high IQ to understand Rick and Morty” gag.
Episodes of Rick and Morty really hit close to home in a way that normies couldn’t possibly fathom. It’s a blessing and a curse.
LLMs are pretty good at reverse dictionary lookup. If I’m struggling to remember a particular word, I can describe the term very loosely and usually get exactly what I’m looking for. Which makes sense, given how they work under the hood.
I’ve also occasionally used them for study assistance, like creating mnemonics. I always hated the old mnemonic I learned in school for the OSI model because it had absolutely nothing to do with computers or communication; it was some arbitrary mnemonic about pizza. Was able to make an entirely new mnemonic actually related to the subject matter which makes it way easier to remember: “Precise Data Navigation Takes Some Planning Ahead”. Pretty handy.
My parents and all their friends used to use PTT with their Nextel phones. It was a super handy feature. I wonder why it fell out of style. Seemed more convenient and less tedious than a phone call for short communications.