moby: 20.10.0-beta1@Fedora 33: Failed to program NAT chain: ZONE_CONFLICT: 'docker0' already bound to a zone

After installing Docker 20 testing (Fedora 32 package) on Fedora 33, the daemon failed to start because of a bridge configuration error.

When I manually created a new bridge named docker1, it worked nicely. Also running cgroups v2! 😄

The debug log when it failed:

dockerd -D    
INFO[2020-10-30T09:43:21.242458907+01:00] Starting up                                  
DEBU[2020-10-30T09:43:21.255780567+01:00] Listener created for HTTP on unix (/var/run/docker.sock) 
INFO[2020-10-30T09:43:21.256161442+01:00] detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf 
DEBU[2020-10-30T09:43:21.257417063+01:00] Golang's threads limit set to 288090         
INFO[2020-10-30T09:43:21.258514586+01:00] parsed scheme: "unix"                         module=grpc
INFO[2020-10-30T09:43:21.258528605+01:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2020-10-30T09:43:21.258548627+01:00] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2020-10-30T09:43:21.258557801+01:00] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2020-10-30T09:43:21.258581364+01:00] metrics API listening on /var/run/docker/metrics.sock 
INFO[2020-10-30T09:43:21.259198651+01:00] parsed scheme: "unix"                         module=grpc
INFO[2020-10-30T09:43:21.259212028+01:00] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2020-10-30T09:43:21.259226278+01:00] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2020-10-30T09:43:21.259235344+01:00] ClientConn switching balancer to "pick_first"  module=grpc
DEBU[2020-10-30T09:43:21.259637228+01:00] processing event stream                       module=libcontainerd namespace=plugins.moby
DEBU[2020-10-30T09:43:21.259670740+01:00] Using default logging driver json-file       
DEBU[2020-10-30T09:43:21.259689676+01:00] [graphdriver] trying provided driver: overlay2 
DEBU[2020-10-30T09:43:21.293128305+01:00] backingFs=extfs, projectQuotaSupported=false, indexOff="index=off,"  storage-driver=overlay2
DEBU[2020-10-30T09:43:21.293297231+01:00] Initialized graph driver overlay2            
DEBU[2020-10-30T09:43:21.293610297+01:00] No quota support for local volumes in /media/containers/docker2/volumes: Filesystem does not support, or has not enabled quotas 
DEBU[2020-10-30T09:43:21.300960950+01:00] Max Concurrent Downloads: 3                  
DEBU[2020-10-30T09:43:21.300996329+01:00] Max Concurrent Uploads: 5                    
DEBU[2020-10-30T09:43:21.301008840+01:00] Max Download Attempts: 5                     
INFO[2020-10-30T09:43:21.301031196+01:00] Loading containers: start.                   
DEBU[2020-10-30T09:43:21.303857959+01:00] processing event stream                       module=libcontainerd namespace=moby
DEBU[2020-10-30T09:43:21.303955712+01:00] Option Experimental: false                   
DEBU[2020-10-30T09:43:21.303969658+01:00] Option DefaultDriver: bridge                 
DEBU[2020-10-30T09:43:21.303986702+01:00] Option DefaultNetwork: bridge                
DEBU[2020-10-30T09:43:21.304003162+01:00] Network Control Plane MTU: 1500              
DEBU[2020-10-30T09:43:21.319268432+01:00] Firewalld: creating docker zone              
DEBU[2020-10-30T09:43:21.553415597+01:00] Firewalld passthrough: ipv4, [-t filter -C FORWARD -j DOCKER-ISOLATION] 
DEBU[2020-10-30T09:43:21.560123457+01:00] Firewalld passthrough: ipv4, [-t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER] 
DEBU[2020-10-30T09:43:21.567381848+01:00] Firewalld passthrough: ipv4, [-t nat -D OUTPUT -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER] 
DEBU[2020-10-30T09:43:21.574448176+01:00] Firewalld passthrough: ipv4, [-t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER] 
DEBU[2020-10-30T09:43:21.581039502+01:00] Firewalld passthrough: ipv4, [-t nat -D PREROUTING] 
DEBU[2020-10-30T09:43:21.588309111+01:00] Firewalld passthrough: ipv4, [-t nat -D OUTPUT] 
DEBU[2020-10-30T09:43:21.594810533+01:00] Firewalld passthrough: ipv4, [-t nat -F DOCKER] 
DEBU[2020-10-30T09:43:21.601395679+01:00] Firewalld passthrough: ipv4, [-t nat -X DOCKER] 
DEBU[2020-10-30T09:43:21.607954591+01:00] Firewalld passthrough: ipv4, [-t filter -F DOCKER] 
DEBU[2020-10-30T09:43:21.614668501+01:00] Firewalld passthrough: ipv4, [-t filter -X DOCKER] 
DEBU[2020-10-30T09:43:21.621096885+01:00] Firewalld passthrough: ipv4, [-t filter -F DOCKER-ISOLATION-STAGE-1] 
DEBU[2020-10-30T09:43:21.628080626+01:00] Firewalld passthrough: ipv4, [-t filter -X DOCKER-ISOLATION-STAGE-1] 
DEBU[2020-10-30T09:43:21.634631543+01:00] Firewalld passthrough: ipv4, [-t filter -F DOCKER-ISOLATION-STAGE-2] 
DEBU[2020-10-30T09:43:21.641210653+01:00] Firewalld passthrough: ipv4, [-t filter -X DOCKER-ISOLATION-STAGE-2] 
DEBU[2020-10-30T09:43:21.648586098+01:00] Firewalld passthrough: ipv4, [-t filter -F DOCKER-ISOLATION] 
DEBU[2020-10-30T09:43:21.655562051+01:00] Firewalld passthrough: ipv4, [-t filter -X DOCKER-ISOLATION] 
DEBU[2020-10-30T09:43:21.662277936+01:00] Firewalld passthrough: ipv4, [-t nat -n -L DOCKER] 
DEBU[2020-10-30T09:43:21.668989005+01:00] Firewalld passthrough: ipv4, [-t nat -N DOCKER] 
DEBU[2020-10-30T09:43:21.675224377+01:00] Firewalld passthrough: ipv4, [-t filter -n -L DOCKER] 
DEBU[2020-10-30T09:43:21.682197006+01:00] Firewalld passthrough: ipv4, [-t filter -N DOCKER] 
DEBU[2020-10-30T09:43:21.688639041+01:00] Firewalld passthrough: ipv4, [-t filter -n -L DOCKER-ISOLATION-STAGE-1] 
DEBU[2020-10-30T09:43:21.694956958+01:00] Firewalld passthrough: ipv4, [-t filter -N DOCKER-ISOLATION-STAGE-1] 
DEBU[2020-10-30T09:43:21.701130006+01:00] Firewalld passthrough: ipv4, [-t filter -n -L DOCKER-ISOLATION-STAGE-2] 
DEBU[2020-10-30T09:43:21.707974673+01:00] Firewalld passthrough: ipv4, [-t filter -N DOCKER-ISOLATION-STAGE-2] 
DEBU[2020-10-30T09:43:21.714760966+01:00] Firewalld passthrough: ipv4, [-t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN] 
DEBU[2020-10-30T09:43:21.721369144+01:00] Firewalld passthrough: ipv4, [-A DOCKER-ISOLATION-STAGE-1 -j RETURN] 
DEBU[2020-10-30T09:43:21.727683636+01:00] Firewalld passthrough: ipv4, [-t filter -C DOCKER-ISOLATION-STAGE-2 -j RETURN] 
DEBU[2020-10-30T09:43:21.734285735+01:00] Firewalld passthrough: ipv4, [-A DOCKER-ISOLATION-STAGE-2 -j RETURN] 
INFO[2020-10-30T09:43:21.754847426+01:00] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address 
DEBU[2020-10-30T09:43:21.754894428+01:00] Allocating IPv4 pools for network bridge (be6c3a041ca9860d77b712248ec7384dff8e4af23bafd376f7df7d0246eb22b9) 
DEBU[2020-10-30T09:43:21.754926747+01:00] RequestPool(LocalDefault, 172.17.0.0/16, , map[], false) 
DEBU[2020-10-30T09:43:21.754972041+01:00] RequestAddress(LocalDefault/172.17.0.0/16, <nil>, map[RequestAddressType:com.docker.network.gateway]) 
DEBU[2020-10-30T09:43:21.754998201+01:00] Request address PoolID:172.17.0.0/16 App: ipam/default/data, ID: LocalDefault/172.17.0.0/16, DBIndex: 0x0, Bits: 65536, Unselected: 65534, Sequence: (0x80000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:0 Serial:false PrefAddress:<nil>  
DEBU[2020-10-30T09:43:21.755111853+01:00] Did not find any interface with name docker0: Link not found 
DEBU[2020-10-30T09:43:21.755174157+01:00] Setting bridge mac address to 02:42:11:be:7b:3c 
DEBU[2020-10-30T09:43:21.757940022+01:00] Assigning address to bridge interface docker0: 172.17.0.1/16 
DEBU[2020-10-30T09:43:21.759649563+01:00] Firewalld passthrough: ipv4, [-t nat -C POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE] 
DEBU[2020-10-30T09:43:21.771873420+01:00] Firewalld passthrough: ipv4, [-t nat -I POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE] 
DEBU[2020-10-30T09:43:21.784744228+01:00] Firewalld passthrough: ipv4, [-t nat -C DOCKER -i docker0 -j RETURN] 
DEBU[2020-10-30T09:43:21.791264397+01:00] Firewalld passthrough: ipv4, [-t nat -I DOCKER -i docker0 -j RETURN] 
DEBU[2020-10-30T09:43:21.797342281+01:00] Firewalld passthrough: ipv4, [-D FORWARD -i docker0 -o docker0 -j DROP] 
DEBU[2020-10-30T09:43:21.804548744+01:00] Firewalld passthrough: ipv4, [-t filter -C FORWARD -i docker0 -o docker0 -j ACCEPT] 
DEBU[2020-10-30T09:43:21.812057238+01:00] Firewalld passthrough: ipv4, [-I FORWARD -i docker0 -o docker0 -j ACCEPT] 
DEBU[2020-10-30T09:43:21.819181895+01:00] Firewalld passthrough: ipv4, [-t filter -C FORWARD -i docker0 ! -o docker0 -j ACCEPT] 
DEBU[2020-10-30T09:43:21.826690007+01:00] Firewalld passthrough: ipv4, [-I FORWARD -i docker0 ! -o docker0 -j ACCEPT] 
DEBU[2020-10-30T09:43:21.835453824+01:00] Firewalld: adding docker0 interface to docker zone 
DEBU[2020-10-30T09:43:21.837711880+01:00] releasing IPv4 pools from network bridge (be6c3a041ca9860d77b712248ec7384dff8e4af23bafd376f7df7d0246eb22b9) 
DEBU[2020-10-30T09:43:21.837756395+01:00] ReleaseAddress(LocalDefault/172.17.0.0/16, 172.17.0.1) 
DEBU[2020-10-30T09:43:21.837770746+01:00] Released address PoolID:LocalDefault/172.17.0.0/16, Address:172.17.0.1 Sequence:App: ipam/default/data, ID: LocalDefault/172.17.0.0/16, DBIndex: 0x0, Bits: 65536, Unselected: 65533, Sequence: (0xc0000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:2 
DEBU[2020-10-30T09:43:21.837781619+01:00] ReleasePool(LocalDefault/172.17.0.0/16)      
DEBU[2020-10-30T09:43:21.837791529+01:00] daemon configured with a 15 seconds minimum shutdown timeout 
DEBU[2020-10-30T09:43:21.837797380+01:00] start clean shutdown of all containers with a 15 seconds timeout... 
DEBU[2020-10-30T09:43:21.837835417+01:00] found 0 orphan layers                        
DEBU[2020-10-30T09:43:21.857866798+01:00] Cleaning up old mountid : start.             
INFO[2020-10-30T09:43:21.857968240+01:00] stopping event stream following graceful shutdown  error="<nil>" module=libcontainerd namespace=moby
DEBU[2020-10-30T09:43:21.868911925+01:00] Cleaning up old mountid : done.              
failed to start daemon: Error initializing network controller: Error creating default "bridge" network: Failed to program NAT chain: ZONE_CONFLICT: 'docker0' already bound to a zone

BUG REPORT INFORMATION


Description

The daemon fails to configure the existing docker0 bridge interface left over from a previous installation.

Steps to reproduce the issue:

  1. Install Docker 20 on Fedora 33 (with Cgroups v2)
  2. Have an existing docker0 bridge from a previous installation (already bound to a firewalld zone)

Describe the results you expected: The daemon should be able to configure the existing bridge

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

docker version
Client: Docker Engine - Community
 Version:           20.10.0-beta1
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        ac365d7
 Built:             Tue Oct 13 18:17:19 2020
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.0-beta1
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       9c15e82
  Built:            Tue Oct 13 18:15:04 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.1
  GitCommit:        c623d1b36f09f8ef6536a057bd658b3aa8632828
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Output of docker info:

docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.4.2-docker)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 20.10.0-beta1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: c623d1b36f09f8ef6536a057bd658b3aa8632828
 runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.8.16-300.fc33.x86_64
 Operating System: Fedora 33 (Workstation Edition)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 39.13GiB
 Name: linux
 ID: ENRR:EUCJ:7AIV:5ZVE:T5MU:7QMQ:WYXS:7TQZ:H7XW:ZKCI:O5UO:W5Q6
 Docker Root Dir: /media/containers/docker2
 Debug Mode: true
  File Descriptors: 26
  Goroutines: 40
  System Time: 2020-10-30T09:45:51.271674482+01:00
  EventsListeners: 0
 Username: richard87
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  localhost:32000
  127.0.0.1:32000
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No kernel memory TCP limit support
WARNING: No oom kill disable support
WARNING: Support for cgroup v2 is experimental

Additional environment details (AWS, VirtualBox, physical, etc.): Fedora 33 (upgraded from Fedora 32), cgroups v2

Running with custom bridge 😃

  1. Running brctl addbr docker1
  2. Editing daemon.json, adding "bridge": "docker1" to the config file
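For reference, the daemon.json edit in step 2 might look like the sketch below. The config path /etc/docker/daemon.json is the usual default; the snippet here is written to a temp file and validated rather than touching the real config, so it can be tried safely:

```shell
# Sketch of the daemon.json change: point the daemon at the manually
# created docker1 bridge. Written to a temp file here for safety;
# on a real system the file lives at /etc/docker/daemon.json.
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
{
  "bridge": "docker1"
}
EOF
# Validate that the file is well-formed JSON before restarting the daemon,
# since a syntax error here also prevents dockerd from starting.
python3 -m json.tool "$tmpconf"
```

After copying the validated content into /etc/docker/daemon.json, restart the daemon with `sudo systemctl restart docker`.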
docker run --rm hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete 
Digest: sha256:8c5aeeb6a5f3ba4883347d3747a7249f491766ca1caa47e5da5dfcf6b9b717c0
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 21 (7 by maintainers)

Most upvoted comments

I’ve got a similar error after upgrading docker to version 20.10 on Fedora 32.

$ journalctl -xe

Dec 11 12:37:37 success firewalld[1716]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Dec 11 12:37:37 success firewalld[1716]: ERROR: ZONE_CONFLICT: 'docker0' already bound to a zone

To fix this, I removed docker0 from the zone it was listed in (I had added it manually some time ago).

sudo firewall-cmd --get-zone-of-interface=docker0 | xargs -ri sh -c "sudo firewall-cmd --zone={} --remove-interface=docker0 && sudo firewall-cmd --zone={} --remove-interface=docker0 --permanent"
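For anyone puzzled by the xargs pattern in the one-liner above: `-r` skips the command entirely when stdin is empty (no zone found), and `-i` substitutes the detected zone name for every `{}`. A minimal stand-alone illustration, with echo standing in for firewall-cmd:

```shell
# Simulate `firewall-cmd --get-zone-of-interface=docker0` printing a zone name;
# xargs -r runs the command only if a zone was found, and -i replaces {} with it.
echo "trusted" | xargs -ri sh -c 'echo "would run: firewall-cmd --zone={} --remove-interface=docker0"'

# With empty input, -r ensures the command never runs at all:
printf '' | xargs -ri sh -c 'echo "never printed"'
```

This is why the one-liner is safe to run even when docker0 is not bound to any zone.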

Update 2022: @sar and @mjudeikis, thanks for the adjustments to the command.

Here is a related issue on redhat issue trackers: https://bugzilla.redhat.com/show_bug.cgi?id=1829090

A solution (that worked for me and others) is to delete the “/etc/firewalld/zones/trusted.xml” file.

To fix this, I removed docker0 from the trusted interfaces (I had added it manually some time ago).

sudo firewall-cmd --zone=trusted --remove-interface=docker0
sudo firewall-cmd --zone=trusted --remove-interface=docker0 --permanent

Thanks for the workaround.

In case anyone searching runs into this error:

$ sudo firewall-cmd --zone=trusted --remove-interface=docker0
Error: ZONE_CONFLICT: remove_interface(trusted, docker0): zoi='DockerServer'

The zone may be named differently from trusted on your system; the actual name is echoed back as zoi=<zone-of-interface>, or it can be probed manually with $ sudo firewall-cmd --get-zone-of-interface=docker0.

I don’t think you need to remove the whole /etc/firewalld/zones/trusted.xml file, just the line with the docker0 interface. You might have some other interfaces in this file, too. As admun also commented, the “bug” still exists in the Fedora 33 docker 20.10 release.
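A sketch of that targeted edit, run against a demo copy here for safety. On a real system the file is /etc/firewalld/zones/trusted.xml, editing it needs sudo, and firewalld should be reloaded afterwards (e.g. `sudo firewall-cmd --reload`). The sample XML content below is made up for illustration:

```shell
# Demo copy of a trusted.xml that binds docker0 (sample content, not a real dump).
tmpzone=$(mktemp)
cat > "$tmpzone" <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Trusted</short>
  <interface name="docker0"/>
  <interface name="virbr0"/>
</zone>
EOF
# Drop only the docker0 binding, leaving any other interfaces in place.
sed -i '/<interface name="docker0"\/>/d' "$tmpzone"
cat "$tmpzone"
```

This keeps the rest of the zone definition intact instead of deleting the whole file.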

Hi!

I have deleted everything I could find related to docker0/1/2, and restarted the computer 3 times; everything seems to work now!

Thanks for all the feedback and help 😃

(Edit: Removed everything I created with brctl and deleted everything related to docker in Fedora’s firewall GUI)

I know this is already fixed, but I’ve been solving this for myself about once a month and always come back to this thread for copy-paste 😃 so I’m going to park mine here too:

sudo systemctl stop docker
sudo firewall-cmd --zone=FedoraWorkstation --remove-interface=docker0 --permanent
sudo systemctl start docker
# in most cases existing networks will lose mapping
docker network prune 

Here is a related issue on redhat issue trackers: https://bugzilla.redhat.com/show_bug.cgi?id=1829090

A solution (that worked for me and others) is to delete the “/etc/firewalld/zones/trusted.xml” file.

Restarting firewalld after deleting the file works for me

Just installed docker-ce 20.10 on Fedora 33; needed to remove /etc/firewalld/zones/trusted.xml in order to fix this issue…

Tested the changes out on a CentOS 8 Vagrant VM and I don’t see any issues.

Enabled firewalld

[vagrant@centos8 ~]$ sudo systemctl enable firewalld
Created symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service → /usr/lib/systemd/system/firewalld.service.
Created symlink /etc/systemd/system/multi-user.target.wants/firewalld.service → /usr/lib/systemd/system/firewalld.service.
[vagrant@centos8 ~]$ sudo systemctl start firewalld

Install Docker CE 20.10.0-beta1

[vagrant@centos8 ~]$ curl -fsSL https://get.docker.com/ | CHANNEL=test sh
# Executing docker install script, commit: 26ff363bcf3b3f5a00498ac43694bf1c7d9ce16c
+ sudo -E sh -c 'yum install -y -q yum-utils'
warning: /var/cache/dnf/BaseOS-31c79d9833c65cf7/packages/libzstd-1.4.2-2.el8.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 8483c65d: NOKEY
Importing GPG key 0x8483C65D:
 Userid     : "CentOS (CentOS Official Signing Key) <security@centos.org>"
 Fingerprint: 99DB 70FA E1D7 CE22 7FB6 4882 05B5 55B3 8483 C65D
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
+ sudo -E sh -c 'yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo'
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
+ '[' test '!=' stable ']'
+ sudo -E sh -c 'yum-config-manager --disable docker-ce-*'
+ sudo -E sh -c 'yum-config-manager --enable docker-ce-test'
+ sudo -E sh -c 'yum makecache'
CentOS-8 - AppStream                                                                                                                   7.7 kB/s | 4.3 kB     00:00    
CentOS-8 - Base                                                                                                                        6.1 kB/s | 3.9 kB     00:00    
CentOS-8 - Extras                                                                                                                      3.2 kB/s | 1.5 kB     00:00    
Docker CE Test - x86_64                                                                                                                 12 kB/s | 5.4 kB     00:00    
Metadata cache created.
+ '[' -n '' ']'
+ sudo -E sh -c 'yum install -y -q docker-ce'
warning: /var/cache/dnf/docker-ce-test-b30d87b7567a33c8/packages/containerd.io-1.4.1-3.1.el8.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Importing GPG key 0x621E9F35:
 Userid     : "Docker Release (CE rpm) <docker@docker.com>"
 Fingerprint: 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35
 From       : https://download.docker.com/linux/centos/gpg

If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker vagrant

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.

Enable and Start Dockerd

[vagrant@centos8 ~]$ sudo sudo systemctl enable docker^C
[vagrant@centos8 ~]$ sudo systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[vagrant@centos8 ~]$ sudo systemctl start docker

Check Docker Zones and interfaces in firewalld

[vagrant@centos8 ~]$ firewall-cmd --get-active-zones
docker
  interfaces: docker0
public
  interfaces: eth0

Reboot and test it out again

[vagrant@centos8 ~]$ sudo reboot
Connection to 127.0.0.1 closed by remote host.
Connection to 127.0.0.1 closed.
🐳 ~/vagrant/centos8$ vagrant ssh
Last login: Sat Nov  7 01:30:05 2020 from 10.0.2.2
[vagrant@centos8 ~]$ firewall-cmd --get-active-zones
docker
  interfaces: docker0
public
  interfaces: eth0
[vagrant@centos8 ~]$ sudo docker run -it alpine nslookup www.google.com
Server:		10.0.2.3
Address:	10.0.2.3:53

Non-authoritative answer:
Name:	www.google.com
Address: 2607:f8b0:4005:804::2004

Non-authoritative answer:
Name:	www.google.com
Address: 172.217.6.36