• Optional@lemmy.world · ↑57 ↓7 · 3 months ago

      To this day, key players in security—among them Microsoft and the US National Security Agency—regard Secure Boot as an important, if not essential, foundation of trust in securing devices in some of the most critical environments, including in industrial control and enterprise networks.

      You dare question a monopoly corporation and the spymasters of this country??

      (/s)

      • sugar_in_your_tea@sh.itjust.works · ↑17 · 3 months ago

        industrial control and enterprise networks

        That’s doing a lot of work here.

        Yes, it’s important in certain situations, but for consumer devices, it’s just another thing that can go wrong when using alternative operating systems. Regular users don’t have the physical risk these other systems do, and making it more difficult for users to install more secure operating systems goes against the bigger threat.

        Linux is compatible with Secure Boot (source: I exclusively run Linux, and use Secure Boot on my systems), but some distros or manufacturers screw it up. For example, Google Pixel devices warn you about alternative ROMs on boot, and this makes GrapheneOS look like sketchy software, when it’s really just AOSP with security patches on top (i.e. more secure than what ships with the device). The boot is still secure, it’s just that the signature doesn’t match what the phone is looking for.

        It’s just FUD on consumer devices, but it’s totally valid in other contexts. If I was running a data center or enterprise, you bet I’d make sure everything was protected with secure boot. But if I run into any problems on personal devices, I’m turning it off. Context matters.

      • capital@lemmy.world · ↑11 ↓21 · 3 months ago

        Yes, surely randoms on Lemmy know better than Microsoft and the NSA in regards to security.

        • Optional@lemmy.world · ↑23 ↓5 · 3 months ago

          Oh anyone who doesn’t trust Microsoft with their life is a complete idiot. And the NSA only illegally spied on everyone until Bush the II made it legal! So of course we should unquestioningly follow their configuration guides. I mean - haha - we don’t wanna get disappeared! Haha ha. Not. Not that that’s ever happened. That we know of. For sure. Probably.

          • capital@lemmy.world · ↑12 ↓11 · 3 months ago

            in regards to security

            in regards to security

            in regards to security

            in regards to security

            Just wanted to make sure you saw it this time because you went off on a tangent there.

            • azuth@sh.itjust.works · ↑16 · 3 months ago

              It doesn’t matter if they know about security (which they do). A burglar could know about locks and home security systems; would you take his advice?

              Their positions on the security of others are dismissed on grounds of trust, not of competence.

              • mriguy@lemmy.world · ↑8 · edited · 3 months ago

                The NSA has two jobs.

                The first is to break into any computer or communications stream that they feel the need to for “national security needs”. A lot of leeway for bad behavior there, and yes, they’ve done, and almost certainly continue to do, bad things. Note that in theory that is only allowed for foreign targets, but they always seem to find ways around that.

                The second, and less well known, job is to ensure that nobody but them can do that to US computers and communications streams. So if they say something will make your computer more secure, it’s probably true, with the important addition of “except from them”.

                I won’t pretend I like any of this, but most people are much more likely to be targeted by scammers, bitcoin miners, and ransomware than they are by the NSA itself, so in that sense, following the NSA’s recommendation here is probably better than not.

                • azuth@sh.itjust.works · ↑4 · 3 months ago

                  Exploits don’t care whether you are actually the NSA or not. The NSA certainly knows that, yet they keep exploits secret, at least from the public.

                  They have argued for key escrow, for God’s sake.

                  They are primarily an intelligence agency. If you are not likely to be targeted by the NSA, you are also unlikely to be targeted by any of their adversaries. They don’t give a shit if you get scammed; they are not the FBI, which also keeps exploits secret and is anti-encryption.

                  Additionally, using their “best” exploits on simpler targets still risks those exploits being discovered and fixed. Therefore it’s beneficial to them for everybody’s security to be compromised. It also provides deniability.

                    • sugar_in_your_tea@sh.itjust.works · ↑1 · 3 months ago

                      Right. Their advice for the general public is a mix of “best practice” and risk. If an exploit is not actively exploited in the wild, they’ll probably sit on it for intelligence purposes and instead recommend best practices (which are good) that don’t impact their ability to use the exploit.

                    So trust them when they say do X, but don’t take silence to mean you’re good.

                • azuth@sh.itjust.works · ↑4 · 3 months ago

                  Do you have any evidence those two people are still committing burglaries? The NSA is not an ex-intelligence agency.

                • sugar_in_your_tea@sh.itjust.works · ↑2 · 3 months ago

                  I get my advice from LockPickingLawyer on YouTube. He’ll demonstrate the weaknesses of various locks, and say which to avoid and which are probably okay (“okay” is a really strong recommendation from him). He’ll still break into really secure locks in <2 min, but he’ll describe the skills necessary to break in and let you decide what your threat level is.

                  Basically, as long as it’s bump and bypass resistant, you’re good. Burglars aren’t going to pick locks, they’ll either break a window or move on if the lock stops them. A good lock doesn’t keep out a burglar, it just slows them down enough that they’ll give up.

                  So yes, get advice from people who have the skills to break the protection they’re recommending, they’ll be able to separate things into threat categories. If you want OPSec advice, visit black hat hacking forums and whatnot, you’ll get way better advice than sticking with the normie channels.

              • Emerald@lemmy.world · ↑4 · edited · 3 months ago

                A burglar could know about locks and home security systems, would you take his advice?

                If they were an expert burglar, I might

                Source: I’m an expert burglar and all of the others on my burglar crew are very helpful when people ask about home security stuff.

        • Cosmicomical@lemmy.world · ↑6 ↓1 · 3 months ago

          Security is the last thing NSA and Micro$oft care about. NSA wants to be sure they can do all they need to with your devices, and M$ just wants to discourage you from switching to linux.

      • ilinamorato@lemmy.world · ↑15 ↓2 · edited · 3 months ago

        Ok, so I am not an expert, and I am not the OP. But my understanding is that Secure Boot checks against a relatively small list of trustworthy signing certificates to make sure that the OS and hardware are what they claim to be on boot. One of those certificates belongs to Microsoft, which uses it to sign Shim, a small first-stage bootloader (maintained by the Linux community) that can be updated regularly as new stuff comes out. And technically you can whitelist other certificates, too, but I have no idea how you might do that.

        The problem is, there’s no real way to get around the reality that you’re trusting Microsoft to not be compromised, to not go evil, to not misuse their ubiquity and position of trust as a way to depress competition, etc. It’s a single point of failure that presents a massive and very attractive target to attackers, since it could be used to intentionally do what CrowdStrike did accidentally last week.

        And it’s not necessarily proven that it can do what it claims to do, either. In fact, it might be a quixotic and ultimately impossible task to try and prevent boot attacks from UEFI.

        But OP might have other reasons in mind, I dunno.

        • cmnybo@discuss.tchncs.de · ↑22 · 3 months ago

          To use Secure Boot correctly, you need to disable or delete the keys that come preinstalled and add your own keys. Then you have to sign the kernel and any drivers yourself. It is possible to automate signing the kernel and kernel modules, though. Just make sure the private key is kept secure. If someone else gets a hold of it, they can create code that your computer will trust.
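          A tool like sbctl makes that workflow manageable. A sketch of the steps (the commands are real sbctl subcommands, but exact flags and paths vary by distro and version, and enrolling requires the firmware to be in Setup Mode first):

```shell
sbctl create-keys                  # generate your own PK/KEK/db keys
sbctl enroll-keys --microsoft      # enroll them, keeping Microsoft's keys too
sbctl sign -s /boot/vmlinuz-linux  # sign the kernel; -s saves it for re-signing on updates
sbctl verify                       # list files on the ESP and whether they're signed
```

          The --microsoft flag matters on a lot of hardware: dropping Microsoft’s keys entirely can break devices (some GPUs, for instance) whose option ROMs are signed by them.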

          • NekkoDroid@programming.dev · ↑3 · edited · 3 months ago

            The kernel modules are usually signed with a different key. That key is created at build time, its private half is discarded after the build (once the modules have been signed), and the kernel uses the public key to validate the modules, IIRC. That is how Arch Linux can somewhat support Secure Boot without the user needing to sign every kernel module or firmware file (it is also the reason why the kernel packages aren’t reproducible).
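            For reference, the kernel build options behind that scheme look roughly like this (a .config fragment; option names are from mainline Kconfig, details vary by kernel version):

```
CONFIG_MODULE_SIG=y         # kernel verifies module signatures
CONFIG_MODULE_SIG_ALL=y     # sign all modules during modules_install
CONFIG_MODULE_SIG_SHA512=y  # digest used for the signatures
# If this file doesn't exist, the build auto-generates a one-off key --
# that's the throwaway key described above:
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
```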

          • jabjoe@feddit.uk · ↑1 · 3 months ago

            You’ll want to store a copy of the private key on the encrypted machine so it can automatically sign kernel updates.

        • NekkoDroid@programming.dev · ↑5 · 3 months ago

          And technically you can whitelist other certificates, too, but I have no idea how you might do that.

          When you enter the UEFI, somewhere there will be a Secure Boot section, and there is usually a way to either disable Secure Boot or change it into “Setup Mode”. This “Setup Mode” allows enrolling new keys. I don’t know of any programs on Windows that can do it, but on Linux both sbctl and the systemd-boot bootloader can enroll your own custom keys.
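          To check whether the firmware is actually in Setup Mode before trying to enroll anything (assuming a systemd-based Linux install; output wording may differ by version):

```shell
sbctl status    # shows Setup Mode and Secure Boot state, plus installed keys
bootctl status  # 'Secure Boot: disabled (setup)' means enrollment is possible
```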

      • trollbearpig@lemmy.world · ↑11 · edited · 3 months ago

        Probably too late, but just to complement what others have said. The UEFI is responsible for loading the boot software that runs when the computer is turned on. In theory, some malware that wants to make itself persistent and avoid detection could replace/change the boot software to inject itself there.

        Secure Boot is sold as a way to prevent this. The way it works, at a high level, is that the UEFI has a set of trusted keys that it uses to verify the boot software it loads. So, on boot, the UEFI checks that the boot software it’s loading is signed by one of these keys. If the signature check fails, it will refuse to load the software, since it was clearly tampered with.
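        That sign-then-verify idea can be sketched with plain openssl as a toy stand-in (the file names here are made up, and real Secure Boot uses Authenticode signatures on EFI binaries via tools like sbsign, not detached signatures like this):

```shell
# A key pair standing in for a Secure Boot db key
openssl req -new -x509 -newkey rsa:2048 -nodes -subj "/CN=toy db key/" \
        -keyout db.key -out db.crt -days 365

# Some "boot software" and a signature over it, made at install time
echo "pretend this is a bootloader" > boot.img
openssl dgst -sha256 -sign db.key -out boot.sig boot.img

# What the firmware conceptually does on every boot: verify, then load
openssl x509 -in db.crt -pubkey -noout > db.pub
openssl dgst -sha256 -verify db.pub -signature boot.sig boot.img  # prints: Verified OK

# If malware tampers with the image, verification fails and loading is refused
echo "malware payload" >> boot.img
openssl dgst -sha256 -verify db.pub -signature boot.sig boot.img || echo "refused to load"
```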

        So far so good, so what’s the problem? The problem is: who picks the keys that the UEFI trusts? By default, the trusted keys are going to be the keys of the big tech companies. So you would get the keys from Microsoft, Apple, Google, Steam, Canonical, etc., i.e. the big companies making OSes. The worry here is that this will lock users into a set of approved OSes and will prevent any new companies from entering the field. Just imagine telling a not very technical user that to install your esoteric distro they need to disable something called secure boot hahaha.

        And then you can start imagining what would happen if companies start abusing this, like Microsoft and/or Apple paying to make sure only their OSes load by default. To be clear, I’m not saying this is happening right now. But the point is that this is a technology with a huge potential for abuse. Some people, myself included, believe that this will result in personal computers moving towards a similar model to the one used in mobile devices and video game consoles where your device, by default, is limited to run only approved software which would be terrible for software freedom.

        Do note that, at least for now, you can disable the feature or add custom keys. So a technical user can bypass these restrictions. But this is yet another barrier a user has to bypass to get to use their own computer as they want. And even if we as technical users can bypass this, this will result in us being fucked indirectly. The best example of this are the current attestation APIs in Android (and iOS, but iOS is such a closed environment that it’s just beating a dead horse hahahah). In theory, you can root and even degoogle (some) Android devices. But in practice, this will result in several apps (banks in particular, but more apps too) refusing to work because they detect a modified device/OS. So while my device can technically be opened, in practice I have no choice but to continue using Google’s bullshit. They can afford to do this because 99% of users will just run the default configuration they are provided, so they are ok with losing the remaining users.

        But at least we are stopping malware from corrupting boot, right? Well, yes, assuming correct implementations. But as you can see from the article, that’s not a given. And even if it works as advertised, we have to ask ourselves how much this protects us in practice. For your average Joe, malware that can access user space is already enough to fuck you over. The most common example is ransomware that will just encrypt your personal files without needing to mess with the OS or UEFI at all. Similarly, a keylogger can do its thing without messing with boot. Etc, etc. For an average user, all this Secure Boot stuff is just security theater; it doesn’t stop the real security problems you will encounter in practice. So, IMO, it’s just not worth it given the potential for abuse and how useless it is.

        It’s worth mentioning that the equation changes for big companies and governments. In their case, other well-funded agents are willing to invest a lot of resources to create very sophisticated malware, like Stuxnet, the malware used to attack Iran’s nuclear enrichment plants. For them, all this may be worth it to lock down their software as much as possible. But they are playing an entirely different game than the rest of us. And their concerns should not infect our day-to-day lives.

        • Crozekiel@lemmy.zip · ↑5 ↓2 · 3 months ago

          “And then you can start imagining what would happen if companies start abusing this, like Microsoft and/or Apple paying to make sure only their OSes load by default.”

          I’m convinced that this is definitely the end goal for Microsoft, especially with the windows 11 TPM requirement. We are in the early stages of their plan to mold the PC ecosystem to be more like mobile. This is the biggest reason I decided to move to Linux - it’s now or never in my opinion.

          • ruse8145@lemmy.sdf.org · ↑1 ↓2 · 3 months ago

            This is the most open time period for hardware as far as options go since like, the 90s. Microsoft isn’t taking away options.

      • Supermariofan67@programming.dev · ↑1 ↓1 · 3 months ago

        It is based on the assumption that every piece of code in the entire stack, from the UEFI firmware to the operating system userspace, is free of vulnerabilities.

        • Emerald@lemmy.world · ↑2 · 3 months ago

          That doesn’t mean it’s useless. All software is prone to vulnerabilities and exploits, but that doesn’t mean it’s not worth using at all. TrueCrypt was a good solution for its time, even if we now know it is pretty vulnerable.