• 8 Posts
  • 190 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • My Z-Wave switches and smart plugs are reasonably accurate for common household use, but I wouldn’t say they are ‘very accurate’. I haven’t done any measurements, but if, for example, I plug in an appliance with 200 W on its label, I get roughly that number from the system. Obviously I have no way to tell whether the smart plug shows a wrong value or the label on the device is incorrect. And with things like LED bulbs the current varies with temperature anyway, plus I don’t know whether these things take the actual line voltage into account, which varies a bit as well.

    For my use case they’re accurate enough, but if you need ‘electronics-lab accurate’ results, I doubt any of the smart plugs can provide that.


  • Overall, how long do you think you could cope without your HA platform before it becomes an issue?

    It will never become an issue. As I mentioned, all the smart things I have can still be controlled manually. Sure, things like timing energy consumption to cheaper hours and turning on the outside lights when it gets dark either stop working or need to be controlled manually, but that would be more of an annoyance than an issue.

    And when planning expansions I’m pretty strict that things stay that way. Everything has to work without HA, internet connectivity or anything else besides, obviously, electricity. Automations are just icing on the cake; they can save a few bucks here and there and offer quality-of-life functionality, but I’d never rely on them alone. Manual override always has to be an option.


  • I don’t really have a recovery strategy in place, but what I do have is that all the smart stuff in my home can be controlled manually too. Light switches work just like dumb ones, thermostats have manual buttons and so on. So even if the server goes down I can still control everything manually. Obviously automations won’t work, but the house isn’t crippled if that single Raspberry Pi decides to go belly up.


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml: Ubuntu spotted in the latest Mark Rober video
    Nothing is perfect but “fundamentally broken” is bullshit.

    Compared to how things worked when Ubuntu came to life, it really is fundamentally broken. I’m not the oldest beard around, but I have personally updated both Debian and Ubuntu from an obsolete release to a current one with very few hiccups along the way. Apt/dpkg is just so good that you could literally bring a decade-old installation up to date almost without any effort. The updates ran whenever I chose them to and didn’t break production servers when unattended upgrades were enabled. This is very much not the case with Ubuntu today.
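
    Roughly the kind of sequence I mean, sketched for a Debian-style jump between releases (the codenames here are just examples, and the release notes for any given version may recommend extra steps):

    ```
    # Point apt at the next release (example: bullseye -> bookworm)
    sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list

    # Then the classic two-stage upgrade
    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get dist-upgrade
    ```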

    Hatred for a piece of tech simply because other people said it’s bad, therefore it must be.

    I realize that this isn’t directed at my comment specifically, but there’s plenty of evidence even in this thread that the problems go way deeper than a few individuals ranting over the net that snap is bad. As I already said, it’s objectively worse than the alternatives we’ve had since the ’90s. And the way Canonical bundles snap with apt breaks a very long tradition where you could rely on the fact that, when running a stable distribution, ‘apt-get dist-upgrade’ would pretty much never break your system. And even if it did, you could always fix it manually and get the thing back up to speed. This isn’t just an old guy ranting about how things were better in the past, as you can still get that very reliable experience today, just not with snapd.

    Auto updating is not inherently bad.

    I’m not complaining about auto updates. They are very useful and nice to have, even for advanced users. The problem is that even when the snap notification says the software updates now, it often really doesn’t. Restarting the software, and in some cases even running a manual update, still brings up a notification that the very same software I updated a second ago needs to restart again to update. Rinse and repeat, while losing your current session over and over again.

    Also, there’s absolutely no indication whether anything is actually being done. The notification just nags that I need to stop what I’m doing RIGHT NOW and let the system do whatever it wants instead of the tools I’ve chosen to work for me. I neither want nor need forced interruptions to my workflow, but when I do have a spare minute to stop working, I expect the update process to actually trigger at that very second and not after some random delay, and I also want a progress bar or something to indicate when things are complete and I can resume doing whatever I had in mind.

    it just can’t be a problem to postpone snap updates with a simple command.

    But it is. The “<your software> is updating now” message just interrupts pretty much everything I’ve been doing, and at that point there’s no way to stop it. And after the update process has finally finished I pretty much need to reboot to regain control of my system. This is a problem which applies to everybody, regardless of their technical skills.

    My computer is a tool, and when I need to actively fight that tool to keep it from interrupting whatever I’m doing, it rubs me the wrong way. No matter if I’m just browsing the web, writing code for the next best thing ever or watching YouTube, I expect the system to be stable for as long as I want it to be. Then there’s a separate time slot when the system can update and maybe break itself in the process, but I control when that time slot exists.

    There’s not a single case I’ve encountered where snap actually solved a problem I had, and there are plenty of times when it was either annoying or straight up caused more problems. Systemd at least has some advantages over SysVinit; snap doesn’t even have that.

    As mentioned, I’m not the oldest Linux guy around, but I’ve been running Linux for 20+ years, ~15 of those have kept butter on my bread, and snapcraft is easily the most annoying thing I’ve encountered over that period.


  • You act as if Snap was bad in any way. Proprietary backend does not equal bad.

    I don’t give a rat’s ass whether the things I use are proprietary or not. FOSS is obviously nice to have, but if something else does the job better I’m all for it, and I have paid for several pieces of software. But Ubuntu and Snap (which are running on the thing I’m writing this with) are just objectively bad. Software updates are even more aggressive than on Windows today, and even when I try to work with the “<this software> updates in X days, restart now to update” notifications, it just doesn’t do what it says it would/should. And once the package is finally updated, the nagging notification returns in a day or two.

    Additionally, snap and/or Ubuntu has bricked at least two of my installations in the last few years, Canonical’s solutions have broken apt/dpkg in a very fundamental way, and it has most definitely caused way more issues with my Linux setup over the years than anything else, systemd included.

    Trying to twist that into an elitist FOSS point of view (of which there are plenty, obviously) is misleading and just straight up false. Snapcraft and its implementation are broken on so many levels and have pushed me away from Ubuntu (and derivatives). Way back when Ubuntu started to gain traction it was a really welcome distribution, and I was a happy user for at least a decade, but as things are now it’s either Debian (mostly for servers) or Mint (on desktops) for me. Whenever I have the choice I won’t even consider Ubuntu as an option, either commercially at work or for my personal things.


  • I did quickly check the files in update.zip and it looks like they’re tarballs embedded in a shell script, plus image files containing pretty much the whole operating system of the thing.

    You can extract those even without a VM, do whatever you want with the files and package them back up. So you can override version checks and inject init.d scripts, binaries and pretty much anything else into the device, including changing passwords in /etc/shadow and so on.
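
    As a rough sketch of what I mean (names like installer.sh and the __ARCHIVE__ marker are just assumptions here; check the actual script with less first to see how it’s really laid out):

    ```
    unzip update.zip -d update && cd update
    mkdir -p rootfs

    # Find the line where the embedded payload starts
    start=$(grep -an '__ARCHIVE__' installer.sh | head -n1 | cut -d: -f1)

    # Everything after the marker is the tarball; unpack it
    # (the payload might be plain tar instead of gzip -- adjust the flags)
    tail -n +"$((start + 1))" installer.sh | tar -xzvf - -C rootfs

    # ...edit rootfs/ as needed (init.d scripts, /etc/shadow, version files)...

    # Repack: keep the original script header, append a fresh tarball
    head -n "$start" installer.sh > installer-new.sh
    tar -czvf - -C rootfs . >> installer-new.sh
    ```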

    I don’t know how the thing actually operates, but unless it’s absolutely necessary I’d leave the bootloader (appears to be U-Boot) and the kernel untouched, as messing those up might leave you with a bricked device. At that point the easy options are gone and you’ll need to gain access via other means, like interfacing directly with the storage on the device (which most likely means opening the thing up and wiring something like an Arduino or a serial cable to it).

    But beyond that, once you override the version checks, it should be possible to upload the same version number over and over again until you have what you need. After that you just need suitable binaries for the hardware/kernel, likely some libraries from the same package, and an init script, and you should be good to go.

    The other way you can approach this is to look at the web server configuration in the image and see if there are any vulnerabilities (like Apache running as root with an insecure script on top of it that lets you inject system files over HTTP), which might be the safest route, at least for a start.

    I’m not really experienced with things like this, but I know a thing or two about Linux, so do your homework before attempting anything, good luck, and have fun tinkering!


  • The statement is correct: rsync by itself doesn’t use ssh if you run it as a daemon, and if you invoke rsync over ssh it doesn’t use the daemon but instead starts rsync with the UID of the ssh user.

    But you can run rsyncd bound only to localhost and connect to it over an ssh tunnel. That way you get the benefits of the rsync daemon and still have an encrypted connection via ssh.
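
    Something along these lines (the module name, paths and forwarded port are just examples):

    ```
    # /etc/rsyncd.conf on the server -- listen on loopback only
    address = 127.0.0.1
    port = 873

    [backup]
        path = /srv/backup
        read only = no

    # On the client: open a tunnel, then talk to the daemon through it
    ssh -N -L 8730:127.0.0.1:873 user@server &
    rsync -av ./data/ rsync://127.0.0.1:8730/backup/
    ```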


  • All of those are still standing on Firefox’s shoulders, and the actual rendering engine of a browser isn’t really a trivial thing to build. Sure, they’re not going away, and Firefox will likely be around for quite a while too, but the world wide web as we currently know it is changing, and Google and Microsoft are a couple of the bigger players pushing the change.

    If you’re old enough you’ll remember the banners ‘Best viewed with <this browser> at <that resolution>’, and that’s not too far off from the future we’ll get if the big players have their way. Things like the Google suite, whatever Meta is offering, and pretty much “the internet” as your Joe Average understands it want to implement technology where it’s not possible to block ads or modify the content you’re shown in any other way. It’s not too far off before your online banking and other services with very real consequences start to put boundaries in place requiring a certain level of ‘security’ from your browser, and you can bet that anything that allows content modification, like an ad blocker, won’t qualify under the new standards.

    In many places it’s already illegal to modify or tamper with DRM-protected content in any way (does anyone remember libdvdcss?), and the plan is to bring similar (more or less) restrictions to the whole world wide web. That would mean we’d have things like the fediverse, which allow browsers like Firefox, and ‘the rest’, like banking, flight/ticket/hotel/whatever booking sites and big news outlets, which only allow the ‘secure’ version of the browser. And that of course has very little to do with actual security; they just want control over your device and over what content is fed to you, whether you like it or not.


  • I have no idea about cozy.io, but just to offer another option, I’ve been running Seafile for years and it’s a pretty solid piece of software. And while it does have other features than just file storage/sharing, it’s mostly about files and nothing else. The Android client isn’t the best one around, but it gets the job done (background tasks, at least on mine, tend to freeze now and then); on desktop it just works.


  • I have absolutely zero insight into how the foundation and its financing work, but in general it tends to be easier to green-light a one-time expense than a recurring monthly payment. So it might be just that: a year’s salary at first to get the gears running again, and some time to fit the ‘infinite’ running cost into plans/forecasts/everything.


  • It depends. I’ve run small websites and other services on an old laptop at home. It can be done. But you need to realize the risks that come with it. If the thing I’m running for fun goes down, someone might be slightly annoyed that it isn’t accessible all the time, but it doesn’t harm anyone’s business. If someone’s livelihood depends on the thing, the stakes are a lot higher and you need to take suitable precautions.

    You could of course offload the whole hardware side to Amazon/Hetzner/Microsoft/whoever and run your services on leased hardware, which simplifies things a lot, but you still run into the problem that you need to meet more or less arbitrary specs for an email server so that Microsoft or Google even accept what you’re sending, you need monitoring and staff available to keep things running all the time, you have to plan for backups and other disaster recovery, and so on. So it’s “a bit” more than just ‘apt install dovecot postfix apache2’ on a Debian box.
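
    Just to give an idea of what those specs look like in practice, the bare minimum is usually publishing SPF, DKIM and DMARC records in DNS, something like this (example.com, the selector name and the policy values are placeholders):

    ```
    ; SPF: which hosts are allowed to send mail for the domain
    example.com.                   IN TXT "v=spf1 mx -all"

    ; DKIM: public key that receivers use to verify signatures
    mail._domainkey.example.com.   IN TXT "v=DKIM1; k=rsa; p=<public key here>"

    ; DMARC: what receivers should do when SPF/DKIM checks fail
    _dmarc.example.com.            IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
    ```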


  • Others have already mentioned the challenges on the software/management side, but you also need to take into account hardware failures, power outages, network outages, acceptable downtime and so on. So even if you could technically shoehorn all of that onto a Raspberry Pi and run it on a windowsill (and I suppose it would run pretty well), you risk losing all of the data if someone spills coffee on the thing.

    So, if you really insist on doing this on your own hardware with your own maintenance (and want to do it properly), you’d be looking at (at least):

    • 2 servers for redundancy, preferably with a 3rd one lying around for a quick swap
    • A pretty decent UPS setup, again with multiple units for redundancy
    • Routers, network hardware, internet uplinks and everything else at least duplicated and configured correctly to keep things running
    • A separate backup solution in at least two different physical locations, so a few more servers with their network, power and other needs taken care of
    • Monitoring, an alerting system in case of failures, and someone on call 24/7

    And likely a ton of other stuff I can’t think of right now. So: 10k for hardware, two physical locations, and maintenance personnel available all the time. Or you can buy website hosting (a VPS even, if you like) for a few bucks a month and an email service for about 10/month (give or take) and have the services running, backed up and taken care of for far longer than your own hardware’s lifetime, for a lot less than that hardware alone.



  • I’m currently more of a generic sysadmin than a Linux admin, as I do both. But the ‘other stuff’ at work revolves around Teams, Office, Outlook and things like that, so I’m running Win11 with WSL and it’s good enough for what I need from a workstation. There’s technically a policy in place that only Windows workstations are supported, but I suppose I could run Linux (and I have a separate laptop for Linux-only stuff). In the current environment it’s just not worth the hassle, specifically since I need to maintain Windows servers too.

    So, I have my terminals, Firefox and whatever else I need, and I also have the mandated office suite and malware protection/IDR/IDS by the book, and in my mindset I’m using company tools for company jobs. If they take longer or could be more efficient, it’s not my problem. I’ll just browse my (personal) phone while the throbber spins on the screen, and I get paid to do that.

    If I switched to Linux I’d need to personally take care of keeping my system up to spec, and I wouldn’t have any kind of helpdesk available should I ever need one. So it’s just simpler to stick with what the company provides, and if it’s slow, it’s not my headache. I’ve accepted that mindset.


  • A package file, no matter if it’s rpm, deb or something else, contains a few things: the files for the software itself (executables, libraries, documentation, default configuration), dependencies on other packages (as in: to install software A you also need to install library B) and the package’s installation scripts. There’s also some metadata, info for uninstallation and things like that, but that’s mostly irrelevant for the end user.

    And then you need a suitable package manager: dpkg for deb packages, rpm (the program) for rpm packages and so on. That’s why you mostly can’t install Debian packages on Fedora or the other way around. Derivative distributions like Kubuntu and Lubuntu use Ubuntu’s packages but have a different default package selection and default configuration. Technically it would be possible to build a Kubuntu package which depends on some library version that isn’t in Lubuntu, making the packages incompatible, but I’m almost certain that for those two specifically that’s not the case.
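
    If you want to see those pieces for yourself, you can peek inside a package file without installing it, roughly like this (the file names are just examples):

    ```
    # Debian/Ubuntu: metadata (name, version, dependencies) and file list
    dpkg-deb --info some-package.deb
    dpkg-deb --contents some-package.deb

    # Fedora/openSUSE etc.: roughly the same with rpm
    rpm -qip some-package.rpm            # metadata
    rpm -qlp some-package.rpm            # file list
    rpm -qp --scripts some-package.rpm   # installation scripts
    ```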

    And then there are things like Linux Mint, which was originally based on Ubuntu, but at some point they had builds based on both Debian and Ubuntu and thus different package selections. So there are a ton of nuances here, but for the most part you can ignore them; just follow the documentation for your specific distribution and you’re good to go.


  • A phobia, by definition, is an uncontrollable, irrational and lasting fear of something. In the current geopolitical situation I’d say this is not uncontrollable and very much not irrational. ‘Fear’, as a fellow Finn, might be a bit strong a word, but it’s definitely a concern.

    When I first read that, I thought the response was a bit harsh, as Russian (and Soviet) individuals have traditionally been a big part of the open source community and their achievements in computing are pretty significant. But when you dig a bit deeper, a majority of the Soviet-era things were actually built by Ukrainians in Kyiv (obviously Ukraine as a country wasn’t a thing back then).

    Also, based on my very limited view of the matter, Russians are not banned from contributing; this is more a statement that anyone working for the government in Russia can’t be part of the kernel development team. There are of course legal reasons for that, very much including the trade sanctions against Russia, but also the moral side of it, which Linus seems to be taking a stand on.

    Personally I’ve seen individuals in Russia do quite amazing feats with both hardware and software, but as none of us exists in a void free of external influence, I think that, while harsh, the “sanctions” (for lack of a better word) aren’t overshooting anything; they’re instead leveling the playing field. Any Joe Anonymous could write code which compromises the kernel as a whole, but should that Joe live in Russia, behind him might be a government-backed team which, with its resources, can hide its tracks on quite a different level than any individual could ever dream of.

    So, while that decision might slow down some development and it might exclude some very capable developers, the fear that one of them might corrupt the whole project isn’t unreasonable, and with the ongoing sanctions in place (and the legal requirements that follow) the core dev team might not even have a choice in this.

    In the current global environment we’re living in, I’d rather have slightly too careful management than one which doesn’t take things seriously enough. We already have Canonical and others breaking stuff way too often; we don’t need a malicious government expanding on that with nefarious purposes, which could compromise a shit ton of stuff at a very fundamental level if left unattended.


  • NAS stands for ‘Network Attached Storage’ and there’s dedicated hardware for that task from multiple brands. It’s a somewhat specific thing, and from what I understand you have a multi-purpose server running on your network. For discussion it’s better to use the established terminology to avoid confusion about what’s what. Your generic server can of course act like a NAS, but a 100€ Synology NAS can’t (for the most part) act as a generic server.

    Similarly, there’s dedicated hardware for routers, and it’s not the same as a generic server which can run whatever. Dedicated routers do some things way better/faster than a generic server, and there’s pretty much always a trade-off between the two. You can of course add hardware to your server to make it as good as or even better than any consumer-grade router and run pfSense in a virtual machine on top of it, but that’s going to be at least more expensive than dedicated hardware.

    So, your server is running Pi-hole in a container on the same network address/hardware as the rest of your server, and I suppose you’ve already gathered from the other messages that the firewall treats traffic coming from outside the server differently than traffic originating from the server itself.

    For this specific case I’d say it’s simpler to configure the server to use the DNS server at localhost:1053 than to work out firewall forwarding rules for it, if that’s possible. If not, and you absolutely insist that your Pi-hole runs on an unprivileged port and that your server also has to use Pi-hole as its DNS server, then you need to dig out a firewall configuration for outgoing traffic which redirects the destination port. Or you could set up a DNS proxy on the server which uses Pi-hole as its upstream and serves addresses to localhost only, or one of the multiple other ways to achieve what you’re after, but each of those has some kind of trade-off and there are too many to go through in a single post.
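
    If you do go the firewall route, the redirect for the server’s own (locally generated) DNS traffic would look roughly like this with iptables (assuming the server’s resolv.conf points at 127.0.0.1 and Pi-hole listens on port 1053; adjust to whatever firewall frontend you actually use):

    ```
    # Rewrite the server's own queries to 127.0.0.1:53 so they land on port 1053.
    # Matching only -d 127.0.0.1 avoids looping Pi-hole's upstream queries back into itself.
    iptables -t nat -A OUTPUT -p udp -d 127.0.0.1 --dport 53 -j REDIRECT --to-ports 1053
    iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 53 -j REDIRECT --to-ports 1053
    ```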