• NuXCOM_90Percent@lemmy.zip · ↑54 ↓2 · 2 days ago

    In my personal life I will probably “never” intentionally use ipv6.

    But it is a DAMNED good sniff test to figure out if an IT/NT team is too dumb to live BEFORE they break your entire infrastructure. If they insist that the single most important thing is to turn it off on every machine? They’d better have a real good reason other than “it’s hard”.

    • Nightwatch Admin@feddit.nl · ↑26 ↓9 · 2 days ago

      It’s vulnerable af. And I mean really, it’s as bad as Netscalers or Fortigate shit. Like https://www.bleepingcomputer.com/news/security/hackers-abuse-ipv6-networking-feature-to-hijack-software-updates/

      Problem is, yes, it’s hard to implement, but it’s even harder to get properly secured, especially because so few people are using it, and an unsecured IPv6 network is worse than a disabled one.
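
      The attack in that article rides on SLAAC spoofing via rogue Router Advertisements, so one way to at least notice it is to watch for RAs from routers you don’t recognise. A minimal sketch, assuming scapy is installed, the script has capture privileges, and KNOWN_ROUTERS is a placeholder for your real gear:

      ```python
      # Minimal sketch: flag IPv6 Router Advertisements from unexpected sources
      # (the SLAAC-spoofing trick the linked article describes).
      # Assumes scapy is installed and the script has packet-capture privileges.
      from scapy.all import sniff
      from scapy.layers.inet6 import IPv6, ICMPv6ND_RA

      # Hypothetical allow-list: link-local addresses of routers you actually run.
      KNOWN_ROUTERS = {"fe80::1"}

      def check_ra(pkt):
          # Only Router Advertisements are interesting here.
          if pkt.haslayer(ICMPv6ND_RA):
              src = pkt[IPv6].src
              if src not in KNOWN_ROUTERS:
                  print(f"Unexpected RA from {src} - possible rogue IPv6 router")

      # The icmp6 BPF filter keeps the capture volume manageable.
      sniff(filter="icmp6", prn=check_ra, store=False)
      ```

      On managed switches the proper fix is RA Guard (RFC 6105) rather than host-side sniffing, but the point stands: the traffic is easy to spot once someone actually looks for it.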

      • jj4211@lemmy.world · ↑2 · 23 hours ago

        But you could do the same thing with a rogue DHCP server in IPv4… with similar methods available to prevent the misbehavior on networks.
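
        For comparison, the IPv4 version of the same watch: catch DHCP OFFERs from servers that aren’t yours. Again just a sketch, scapy assumed, and KNOWN_DHCP_SERVERS is a placeholder for your actual server(s):

        ```python
        # Minimal sketch of the IPv4 analogue: alert on DHCP OFFERs from
        # servers you don't recognise. Assumes scapy and capture privileges.
        from scapy.all import sniff
        from scapy.layers.dhcp import DHCP
        from scapy.layers.inet import IP

        KNOWN_DHCP_SERVERS = {"192.0.2.10"}  # hypothetical legit DHCP server

        def check_offer(pkt):
            if pkt.haslayer(DHCP) and pkt.haslayer(IP):
                # DHCP options are (name, value) tuples; message-type 2 is an OFFER.
                opts = {o[0]: o[1] for o in pkt[DHCP].options
                        if isinstance(o, tuple) and len(o) >= 2}
                if opts.get("message-type") == 2 and pkt[IP].src not in KNOWN_DHCP_SERVERS:
                    print(f"DHCP OFFER from unknown server {pkt[IP].src}")

        sniff(filter="udp and (port 67 or port 68)", prn=check_offer, store=False)
        ```

        DHCP snooping on the switch is the usual production answer; the sketch is just to show the two cases really are symmetric.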

      • NuXCOM_90Percent@lemmy.zip · ↑20 · 2 days ago

        And I would consider a detailed argument on why it is more secure to disable it to be a good reason.

        Personally? I consider an IT team that doesn’t know how to secure an IPv6-enabled network to be incompetent. But that is a different conversation.

        • Nightwatch Admin@feddit.nl · ↑11 · 2 days ago

          Yeah, I run dual stack without much trouble myself. I believe it is mainly difficult for people because eyeball diagnostics are impossible with 6.

        • TexasDrunk@lemmy.world · ↑6 · 2 days ago

          My detailed explanation at my old job was that the dev team was full of idiots who hardcoded IPv4 addresses into their fucking code. Seriously. When we migrated from the data center to the cloud they had to go patch everything. The CTO wouldn’t do shit about it and the director was just there riding things out until retirement.

          • Auli@lemmy.ca · ↑2 · 1 day ago

            It does not have fewer eyes on it, and it’s 50% of Google’s traffic.

            • jj4211@lemmy.world · ↑2 · 8 hours ago

              Think they mean local networks.

              If an IT department carefully curates IPv4 but ignores IPv6, then a rogue actor can set up a parallel IPv6 network largely without being noticed.

              IPv6 can be managed; it’s just a blind spot for a lot of these departments.
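
              A quick way to see whether that parallel network already exists is to check which hosts have quietly picked up routable IPv6 addresses. A rough sketch, assuming psutil is available; what counts as “expected” is obviously site-specific:

              ```python
              # Minimal sketch: flag interfaces that already hold a routable IPv6
              # address, which can reveal an unnoticed (possibly rogue) SLAAC setup.
              # Assumes psutil is installed; interface names are whatever the OS reports.
              import socket
              import psutil

              for ifname, addrs in psutil.net_if_addrs().items():
                  for addr in addrs:
                      if addr.family != socket.AF_INET6:
                          continue
                      ip = addr.address.split("%")[0]  # strip any zone index, e.g. %eth0
                      if ip == "::1" or ip.lower().startswith("fe80"):
                          continue  # loopback / link-local exist even without an IPv6 rollout
                      print(f"{ifname}: global IPv6 address {ip} - provisioned on purpose?")
              ```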

    • NaibofTabr@infosec.pub · ↑14 ↓2 · 2 days ago

      Realistically no organization has so many endpoints that they need IPv6 on their internal networks. There’s no reason to deal with more complicated addressing schemes except on the public Internet. Only the border devices should be using IPv6.

      Hopefully if an organization has remote endpoints which are connecting to the internal network over the Internet, they are doing that through a VPN and can still just be assigned IPv4 addresses on dedicated VLANs when they connect.

      • Pup Biru@aussie.zone · ↑2 · 9 hours ago

        you sir/ma’am have not seen the Netflix talk on using IPv6 for their full internal stack because of inefficiencies allocating IPv4 ranges, i’m guessing

      • Olap@lemmy.world · ↑12 ↓2 · 2 days ago

        If you don’t have ipv6 internally, you probably can’t access ipv6 externally. 6to4 gateways are a thing. 4to6? Not so much.

        And this is why IPv6 will ultimately take another 20 years to reach full coverage. If it had been more backwards compatible address-wise from the start, this would all have been smoother. It should have stuck with dot separators, and it should have assumed zero padding for v4-style addresses rather than a prefix.
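
        For what it’s worth, the compromise that did ship is the ::ffff: prefix for embedding v4 addresses, which is roughly the opposite of the zero-padding idea. A small illustration using Python’s ipaddress module:

        ```python
        # Illustration: how an IPv4 address embeds into IPv6 today via the
        # ::ffff:0:0/96 prefix, rather than a zero-padded v4-style form.
        import ipaddress

        v4 = ipaddress.IPv4Address("192.0.2.1")
        mapped = ipaddress.IPv6Address(f"::ffff:{v4}")

        print(mapped)              # ::ffff:c000:201  (compressed hex form)
        print(mapped.ipv4_mapped)  # 192.0.2.1 - recoverable, but only via this special prefix
        ```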

        • The_Decryptor@aussie.zone · ↑4 · 2 days ago

          If you don’t have ipv6 internally, you probably can’t access ipv6 externally. 6to4 gateways are a thing. 4to6? Not so much.

          I’m pretty sure stateful gateways do exist, but it’s a massive ball of complexity that would be entirely avoided if people just used native v6.