I use nftables to set my firewall rules, and I normally configure them by hand. Recently I happened to dump the ruleset and, much to my surprise, my config was gone, replaced by an enormous number of extremely cryptic firewall rules. After a quick examination of the rules, I found that it was Docker that had modified them. Some brief research turned up a number of open issues, just like this one, of people complaining about this behaviour. I think it’s an enormous security risk to have Docker silently do this by default.

I have heard that Podman doesn’t suffer from this issue, as it is daemonless. If that is true, I will certainly be switching from Docker to Podman.

  • moonpiedumplings@programming.dev · 8 months ago

    Yes, it is a security risk, but if you don’t have those ports forwarded on your router, someone would still have to breach your internal network first, IIRC, so you would have many, many more problems than docker.

    I think, from the devs’ point of view (not that it is right or wrong), this is intended behaviour, simply because if docker didn’t do this, they would get 1,000 issues opened per day from people saying containers don’t work because they forgot to add a firewall rule for a new container.

    My problem with this is that, when running a public-facing server, it ends up with people exposing containers that really, really shouldn’t be exposed.

    Excerpt from another comment of mine:

    It’s only docker where you have to deal with something like this:

    ---
    services:
      webtop:
        image: lscr.io/linuxserver/webtop:latest
        container_name: webtop
        security_opt:
          - seccomp:unconfined #optional
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
          - SUBFOLDER=/ #optional
          - TITLE=Webtop #optional
        volumes:
          - /path/to/data:/config
          - /var/run/docker.sock:/var/run/docker.sock #optional
        ports:
          - 3000:3000
          - 3001:3001
        restart: unless-stopped
    

    Originally from here, edited for brevity.

    Resulting in exposed services. Feel free to look at Shodan or ZoomEye, search engines for internet-connected devices, for exposed versions of this service. This service is highly dangerous to expose, as it gives people a way into your system via the docker socket.

    • Adam@doomscroll.n8e.dev · 8 months ago

      But… You literally have ports rules in there. Rules that expose ports.

      You don’t get to grumble that docker is doing something when you’re telling it to do it.

      Docker’s manipulation of nftables is pretty well defined in their documentation. If you dig deep, everything is tagged and NAT’d through to the docker internal networks.

      As for the usage of the docker socket, that is widely advised against unless you really know what you’re doing.

      • moonpiedumplings@programming.dev · edited · 8 months ago

        Docker’s manipulation of nftables is pretty well defined in their documentation

        Documentation people don’t read. People expect that, like most other services, docker binds to ports/addresses behind the firewall. Literally no other container runtime/engine does this, including, notably, podman.

        As for the usage of the docker socket, that is widely advised against unless you really know what you’re doing.

        Too bad people don’t read that advice. They just deploy the webtop docker compose without understanding what any of it does. I like (hate?) linuxserver’s webtop because it’s an example of two of the worst docker footguns in one.

        To include the rest of my comment that I linked to:

        Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?

        No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker “bypasses” the firewall.

        On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that’s better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren’t exposed to the internet, and docker throws that out the window.
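For what it’s worth, the rule you’d add by hand is usually a one-liner. A hypothetical nftables excerpt, purely for illustration (the table/chain layout and the port number are made up, not taken from any real config):

```nft
# Hypothetical /etc/nftables.conf excerpt: explicitly allow one published
# container port; everything else stays blocked by the drop policy.
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif lo accept
    tcp dport 8080 accept comment "the one container port I chose to expose"
  }
}
```

One extra line per exposed service, and nothing gets published by accident.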

        You originally stated:

        I think, from the devs’ point of view (not that it is right or wrong), this is intended behaviour, simply because if docker didn’t do this, they would get 1,000 issues opened per day from people saying containers don’t work because they forgot to add a firewall rule for a new container.

        And I’m trying to say that even if that was true, it would still be better than a footgun where people expose stuff that’s not supposed to be exposed.

        But that isn’t the case for podman. From a quick look through podman’s github issues, I don’t see it inundated with newbies asking “how to expose services?”, presumably because they assume the firewall port needs to be opened. Instead, there are bug reports in the opposite direction, like this one, where services are being exposed despite the firewall being up.

        (I don’t have anything against you, I just really hate the way docker does things.)

        • Adam@doomscroll.n8e.dev · edited · 8 months ago

          Documentation people don’t read

          Too bad people don’t read that advice

          Sure, I get it, this stuff should be accessible for all. Easy to use with sane defaults and all that. But at the end of the day anyone wanting to use this stuff is exposing potential/actual vulnerabilities to the internet (via the OS, the software stack, the configuration, … ad nauseam), and the management and ultimate responsibility for that falls on their shoulders.

          If they’re not doing the absolute minimum of R’ingTFM for something as complex as Docker then what else has been missed?

          People expect that, like most other services, docker binds to ports/addresses behind the firewall

          Unless you tell it otherwise, that’s exactly what it does. If you don’t bind ports, good luck accessing your NAT’d 172.17.0.x:3001 service from the internet. Podman has the exact same functionality.

    • null@slrpnk.net · 8 months ago

      My solution to this has been to not forward the ports on individual services at all. I put a reverse proxy in front of them, refer to them by container name in the reverse proxy settings, and make sure they’re on the same docker network.
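A minimal sketch of that layout (the image names and the network name are placeholders, not a recommendation):

```yaml
# Hypothetical compose sketch: only the proxy publishes a port; the app has
# no "ports:" section and is reached by container name on a shared network.
services:
  proxy:
    image: nginx:latest
    ports:
      - 443:443            # the only port published on the host
    networks:
      - internal
  app:
    image: lscr.io/linuxserver/webtop:latest
    networks:
      - internal           # reachable from the proxy as app:3000, not from outside
networks:
  internal: {}
```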

    • wreckedcarzz@lemmy.world · edited · 8 months ago

      So uh, I just spun up a vps a couple days ago, few docker containers, usual security best practices… I used ufw to block all and open only ssh and a couple of others, as that’s what I’ve been told is all I need to do. Should I be panicking about my containers fucking with the firewall?

      • moonpiedumplings@programming.dev · 8 months ago

        Probably not an issue, but you should check. If the opened port shows something like 127.0.0.1:portnumber, then it’s only bound to localhost, and only that local machine can access it. If no address is specified, then anyone who can reach the server can access that service.
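If you want to see what that bind address actually means, here is a quick sketch with plain sockets, nothing docker-specific (the port is picked by the kernel, so it runs anywhere):

```python
import socket

# Plain-socket sketch of what the bind address means: a socket bound to
# 127.0.0.1 (like docker's "127.0.0.1:8080:80" mapping) only answers on
# loopback, while one bound to 0.0.0.0 (plain "8080:80") answers on every
# interface of the machine.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 0))   # port 0: kernel picks a free port
everywhere = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
everywhere.bind(("0.0.0.0", 0))

print(loopback_only.getsockname()[0])  # 127.0.0.1
print(everywhere.getsockname()[0])     # 0.0.0.0

loopback_only.close()
everywhere.close()
```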

        An easy way to see running containers is docker ps, which also shows their forwarded ports.

        Alternatively, you can use the nmap tool to scan your own server for exposed ports. nmap -A serverip does the slowest but most in-depth scan.

        • wreckedcarzz@lemmy.world · 8 months ago

          Just waking up here. I’ve been running docker on my nas for a few years now and was never made aware of this. The nas ports appear safe, but the vps was not, so I put 127.0.0.1 in front of the port number (so it’s now 127.0.0.1:8080:80 or what have you), and that appears to resolve it. I have nginx running, so of course that’s how I want to have a couple of things exposed, not everything via open ports.

          My understanding was that port:port was just local, allowing redirection from container to host, and that you’d still need to do network firewall management yourself to allow stuff through; that appears to be the case on my home network, so I never had reason to question it. Thanks, I learned something today :)

          Might do the same to my nas containers, too, just to be safe. I’m using those containers as a testbed for the vps containers so I don’t want to forget…

      • Adam@doomscroll.n8e.dev · 8 months ago

        Docker will only have exposed container ports if you told it to.

        If you used -p 8080:80 (cli) or - 8080:80 (docker-compose) then docker will have dutifully NAT’d those ports through your firewall. You can either skip those mappings for ports you don’t want exposed or, as @moonpiedumplings@programming.dev says below, ensure they’re only mapped to localhost (or an otherwise non-public) IP.
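Side by side in compose syntax (the host ports here are arbitrary examples):

```yaml
# Hypothetical compose excerpt contrasting the two mappings:
ports:
  - 8080:80             # binds 0.0.0.0:8080, NAT'd through the firewall
  - 127.0.0.1:8081:80   # binds loopback only, unreachable from outside
```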

      • Droolio@feddit.uk · 8 months ago

        Actually, ufw has its own separate issue you may need to deal with. (Or bind ports to localhost/127.0.0.1 as others have stated.)
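The usual workaround is to filter in the DOCKER-USER chain, which Docker consults before its own accept rules for published ports. A heavily simplified, hypothetical excerpt in the spirit of the ufw-docker project (the subnet is a placeholder you’d adjust, and the real project ships a more complete rule set):

```
# Hypothetical /etc/ufw/after.rules excerpt, loosely based on ufw-docker:
# filter container traffic in DOCKER-USER before Docker accepts it.
*filter
:DOCKER-USER - [0:0]
-A DOCKER-USER -s 192.168.0.0/16 -j RETURN                  # allow LAN traffic
-A DOCKER-USER -p tcp -m conntrack --ctstate NEW -j DROP    # drop new external connections
-A DOCKER-USER -j RETURN
COMMIT
```

Then reload with ufw reload. Again: a sketch, not a drop-in config.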