Docker Networking: A Step-by-Step Guide for Developers
Get hands-on experience with Docker networking through step-by-step demonstrations and practical examples
Overview of Docker
Docker is a platform that allows developers to package and run applications in an isolated environment called a "container." Docker has become a popular tool among developers for its ability to simplify application deployment and management. However, networking can be a complex topic when it comes to Docker, especially for beginners. In this guide, we will provide a step-by-step overview of Docker networking, including the Container Networking Model, network drivers, and practical examples.
Networking is about establishing connections, transferring data, and exchanging information between nodes. Similarly, Docker networking involves making connections between containers and external systems through the host machine while the Docker engine is running.
Container Networking Model
The Container Networking Model (CNM) is a standard proposed by Docker that provides a well-defined interface, or API, for establishing connections between containers and network plugins. Libnetwork is the native Go implementation of the CNM for connecting containers. It provides an interface between the Docker engine and network drivers. It is built on three main components: Sandbox, Endpoint, and Network.
Sandbox: A sandbox contains the configuration of the container's network. This includes routing, DNS settings, and endpoints for multiple networks.
Endpoint: Offers connectivity for services provided by the container. It links a sandbox to a network.
Network: Provides connectivity between a group of endpoints belonging to the same network. It isolates them from the rest of the system.
Apart from these, there are two more objects: the Network Controller and the Driver.
Network Controller: This serves as the entry point into libnetwork, offering simple APIs for users.
Driver: Responsible for managing the network. Drivers can be built-in or remote (provided by third-party plugins) to satisfy different use cases and scenarios.
Network Drivers
Docker supports different types of network drivers for different use cases:
Bridge Network: The private default network driver, automatically created by Docker on the host.
Host Network: Eliminates the isolation between the container and the host, allowing you to directly use the host IP. However, you will be unable to run multiple web containers on the same host using the same port.
Overlay Network: Connects multiple Docker engines together and enables swarm services to communicate with each other.
IPvlan Network: Allows you to create completely new virtual networks inside your Docker host, with control over both IPv4 and IPv6 addressing.
Macvlan Network: Assigns a MAC address to a container, treating the container as a physical device on your network.
none: Completely disables the networking stack of a container.
Let's get some hands-on practice:
- Open your terminal and list all the current networks before doing anything.
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
14b0ffc042ae bridge bridge local
4ef48b726b9f host host local
880e26dbaeae none null local
There are three networks listed: bridge, host, and none. Every network has its own ID. Bridge is the default and most common network, and it is easy to manage. Whenever you run a container without specifying any network configuration, the Docker engine automatically attaches it to the bridge network.
Let's run a couple of containers from the alpine image. Then inspect a container's network with docker inspect <container_name/id>:
$ docker run -d -it --name alpine1 alpine:latest
7f9d2253f4695a9658648d8a05d13e8b891b75826e6bb90ddef4a18937959e28
$ docker run -d -it --name alpine2 alpine:latest
1dd222e3a86f82aafb5a83c0bff5d3a5d09f35d50fb31791dd8eabd0659ccfdc
$ docker inspect alpine1
You will get lots of info about the container; look for the "Networks" section:
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "14b0ffc042aec3169a4ee7d2c535bc665ecc9fc01aba11dcd4bce0cd7a928b09",
"EndpointID": "deeb72653d6a80e950f84e7cfa491f0cd7e1d626eb1382f01fe134a6c960d09c",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
Here you can see that the container is attached to the bridge network, along with all of its network details.
- Get the IP address assigned to the container:
$ docker inspect alpine1 | grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAddress": "172.17.0.2",
- See which containers are connected to the bridge network by inspecting the network itself:
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "14b0ffc042aec3169a4ee7d2c535bc665ecc9fc01aba11dcd4bce0cd7a928b09",
"Created": "2023-04-22T13:03:42.690383117Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"1dd222e3a86f82aafb5a83c0bff5d3a5d09f35d50fb31791dd8eabd0659ccfdc": {
"Name": "alpine2",
"EndpointID": "d14bdbef971d69900eeb9cc60cd1e023a7fed313b6060ce08e66cbb0ff842d74",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"7f9d2253f4695a9658648d8a05d13e8b891b75826e6bb90ddef4a18937959e28": {
"Name": "alpine1",
"EndpointID": "deeb72653d6a80e950f84e7cfa491f0cd7e1d626eb1382f01fe134a6c960d09c",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Under the "Containers"
key, connected containers are listed along with information about their IP addresses.
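Since alpine1 and alpine2 are both attached to the default bridge, they can already reach each other by IP address (the default bridge does not resolve container names; automatic DNS only works on user-defined networks, covered later). A quick check using the IPs from the output above:
$ docker exec alpine1 ping -c 2 172.17.0.3   # reach alpine2 by IP
$ docker exec alpine2 ping -c 2 172.17.0.2   # reach alpine1 by IP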
- See the IP address of the bridge interface (docker0) on the host:
$ ip a | grep docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
docker0 is the name of the default bridge network's interface on the host.
Now let's open a shell inside the alpine1 container using docker attach alpine1.
$ docker attach alpine1
/ #
It will open the container in interactive mode, and now you can run shell commands inside alpine1.
Use ip addr show to see the IP address of the container.
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN qlen 1000
link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
79: eth0@if80: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
In the first line, lo represents the loopback device, which is a virtual network interface that allows your computer to communicate with itself. This device is primarily used for diagnostics, troubleshooting, and connecting to servers running on the local machine.
tunl0 is an IPIP tunnel interface (used, for example, by some Kubernetes setups to encapsulate pod traffic); you can ignore it for now.
In the second-last line, inet 172.17.0.2/16 is the IP address of the alpine1 container.
Now make sure you have an internet connection, and let's ping Google's server from alpine1's interactive shell. You can stop the ping by pressing Ctrl+C.
/ # ping google.com
PING google.com (142.250.183.206): 56 data bytes
64 bytes from 142.250.183.206: seq=0 ttl=253 time=39.181 ms
64 bytes from 142.250.183.206: seq=1 ttl=253 time=39.688 ms
64 bytes from 142.250.183.206: seq=2 ttl=253 time=39.124 ms
64 bytes from 142.250.183.206: seq=3 ttl=253 time=39.293 ms
64 bytes from 142.250.183.206: seq=4 ttl=253 time=39.759 ms
64 bytes from 142.250.183.206: seq=5 ttl=253 time=39.112 ms
64 bytes from 142.250.183.206: seq=6 ttl=253 time=39.349 ms
64 bytes from 142.250.183.206: seq=7 ttl=253 time=39.866 ms
64 bytes from 142.250.183.206: seq=8 ttl=253 time=39.381 ms
^C
--- google.com ping statistics ---
9 packets transmitted, 9 packets received, 0% packet loss
round-trip min/avg/max = 39.112/39.417/39.866 ms
/ #
To leave the container without stopping it, detach by pressing Ctrl+P followed by Ctrl+Q instead of typing exit. Now you can stop and delete the containers:
docker container stop alpine1 alpine2
docker container rm alpine1 alpine2
Custom or user-defined network
When you create your own custom network, all containers connected to it can communicate with each other without exposing their ports to the outside world. This results in improved isolation, because containers started with the --network option are attached to their own bridge network rather than the default one.
How to create a user-defined network
$ docker network create --driver bridge my-bridge-1
We can create our own bridge network by using the above command. If you don't specify the --driver option, the command automatically creates a bridge network for you.
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
14b0ffc042ae bridge bridge local
4ef48b726b9f host host local
2b81b2102b5b my-bridge-1 bridge local
880e26dbaeae none null local
You can see that a new user-defined network, my-bridge-1, has been created with the bridge driver. Use the inspect command to see the subnet and gateway assigned to the new network.
$ docker network inspect -f '{{json .IPAM.Config}}' my-bridge-1
[{"Subnet":"172.19.0.0/16","Gateway":"172.19.0.1"}]
Now, let's use netshoot, a Docker image equipped with a set of networking troubleshooting tools, which is handy for debugging Docker networks.
Create two containers from the same netshoot image.
$ docker run -d -it --name net1 --rm --network my-bridge-1 nicolaka/netshoot /bin/bash
$ docker run -d -it --name net2 --rm --network my-bridge-1 nicolaka/netshoot /bin/bash
The --rm flag instructs the Docker Engine to clean up the container and remove its file system once the container exits. The -it flags open the container in interactive mode, and --network my-bridge-1 attaches it to the user-defined network.
Use ip a to check the IP address of the newly created container; it is allocated from the my-bridge-1 subnet.
$ docker attach net1
93758dba332c:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
link/tunnel6 :: brd :: permaddr ea3f:1f15:b8f3::
85: eth0@if86: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
valid_lft forever preferred_lft forever
Now let's ping the container net2 from net1 using net2's IP address. You can see that the two containers can communicate with each other because they are on the same network, my-bridge-1.
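A quick check from inside net1, assuming net2 was allocated the next address in the subnet (172.19.0.3). Note that Docker's embedded DNS also resolves container names on user-defined networks, so pinging by name works too:
93758dba332c:~# ping -c 3 172.19.0.3
93758dba332c:~# ping -c 3 net2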
Now let's create a new container on a different network.
$ docker network create my-bridge-2
db9eacc9d083168ab71af8b2d6cbad6fd23e587995de44120d421750b35c32df
$ docker run -d -it --name net3 --rm --network my-bridge-2 nicolaka/netshoot /bin/bash
cacbe718b4178de991dc6e7ca7ad5bde6c18b0b3c8f902f93910b1bc068184eb
$ docker attach net3
cacbe718b417:~# ping 172.19.0.2
PING 172.19.0.2 (172.19.0.2) 56(84) bytes of data.
From the net3 container, if we try to ping net1 using its IP address, it will not work because the two containers are on different networks.
Host Network
As the name suggests, host drivers utilize the networking provided by the host machine. This removes network isolation between the container and the host machine where Docker is running. For instance, if you run a container that binds to port 80 and uses host networking, the container's application becomes accessible on port 80 via the host's IP address. With this configuration, you cannot run multiple web containers on the same host using the same port, as the port is now shared among all containers within the host network.
To use host network:
$ docker run -d --name webhost --network host nginx:latest
There is no need to expose the port now, as the container utilizes the host network, allowing you to access Nginx on port 80.
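A quick way to verify, assuming nothing else on the host is already listening on port 80 (note that --network host behaves this way on Linux hosts):
$ curl -I http://localhost:80   # should return an HTTP response served directly by Nginx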
Now, stop the Nginx container.
$ docker stop webhost
Macvlan and IPvlan Networks
Macvlan and IPvlan are used in Docker networking to assign a unique MAC and IP address to each container, allowing them to communicate directly with the physical network connected to the Docker host.
You might want to use Macvlan or IPvlan when running a container that requires a service such as DNS or DHCP, but the host already has a DNS or DHCP server running. In such a scenario, using a bridge network and exposing the service to a different external port may not work for standardized protocols like DNS or DHCP, since clients expect them to operate on specific ports.
Macvlan Network
Macvlan is a way to connect Docker containers directly to the physical network, which can be useful for certain types of applications. It assigns a unique MAC address to each container's virtual network interface, making it look like a physical interface.
To create a macvlan network, you need to use --driver macvlan and specify the subnet and gateway of the host's network. This will create a virtual network interface for the container with a unique MAC address that's connected directly to the physical network.
In this example, wlo1 is the name of the host's wireless network interface. It has been assigned the IP address 192.168.0.110 with a subnet mask of 24 (192.168.0.0/24), likely dynamically (scope global dynamic) by a DHCP server on the local network. Pass your own host interface's name to the -o parent= option; the commands below use eth0 as a placeholder.
$ docker network create -d macvlan \
--subnet=192.168.0.0/24 \
--gateway=192.168.0.1 \
-o parent=eth0 macnet
When using a macvlan network, you can exclude specific IP addresses from being assigned to containers in the network. This can be useful if an IP address is already in use and you want to prevent it from being assigned to a container. To achieve this, you can use the --aux-address option to specify the IP addresses to exclude.
$ docker network create -d macvlan \
--subnet=192.168.0.0/24 \
--gateway=192.168.0.1 \
--aux-addresses="my-macvlan=192.168.1.100" \
-o parent=eth0 macnet
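To try it out, you can start a container on the macvlan network and pin its address; a minimal sketch, assuming 192.168.0.50 is unused on your LAN and your parent interface supports macvlan (wired Ethernet usually does; many Wi-Fi adapters do not):
$ docker run -it --rm --network macnet --ip 192.168.0.50 nicolaka/netshoot /bin/bash
# from inside the container, ping the physical gateway
ping -c 3 192.168.0.1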
IPvlan Network
ipvlan is a type of Docker network driver that allows you to create multiple virtual networks on a single physical interface. Each virtual network operates as a separate layer 2 domain.
$ docker network create -d ipvlan \
--subnet=192.168.100.0/24 \
--subnet=192.168.200.0/24 \
--gateway=192.168.100.254 \
--gateway=192.168.200.254 \
-o parent=eth0 ipnet
Difference between macvlan and ipvlan
macvlan and ipvlan are two ways to create virtual networks inside a computer, which can allow multiple containers to communicate with each other and with the outside world.
macvlan creates virtual network interfaces that connect directly to the physical network. They possess their own MAC addresses and can be detected by other devices on the network. This is beneficial in situations where containers need to communicate directly with the physical network, such as with a router or a DHCP server.
On the other hand, ipvlan creates virtual network interfaces that share the same MAC address as the physical network interface, which means they are less visible on the network. This can be useful when you want to isolate containers from the physical network or when you need to give a container multiple IP addresses.
In simple terms, macvlan is beneficial when you need containers to communicate directly with the physical network, whereas ipvlan is advantageous when you want to maintain separation between containers and the physical network or when a container requires multiple IP addresses.
In two separate terminals, create two containers on the ipvlan network and verify that they can communicate with each other (see the check after the commands below).
$ docker run -it --name net1 --network ipnet nicolaka/netshoot /bin/bash
$ docker run -it --name net2 --network ipnet nicolaka/netshoot /bin/bash
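Once both containers are up, verify connectivity from inside net2. Pinging by container name should work, since Docker's embedded DNS resolves names on user-defined networks:
# run inside the net2 container
ping -c 3 net1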
Overlay Network
When using the overlay network driver in Docker, you can create a distributed network that spans multiple Docker hosts. This network sits on top of the individual host-specific networks and allows containers to communicate with each other, optionally with encryption enabled. Docker handles the routing of each packet to the correct Docker daemon host and the correct destination container. An overlay network called ingress handles the control and data traffic related to swarm services. You can learn more about it by visiting: https://docs.docker.com/network/overlay/
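Overlay networks require swarm mode, so trying one out takes a couple of extra steps; a minimal sketch (my-overlay and web are placeholder names):
$ docker swarm init                                        # enable swarm mode on this host
$ docker network create -d overlay --attachable my-overlay
$ docker service create --name web --replicas 2 --network my-overlay nginx:latest
# --attachable also lets standalone containers join the overlay network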
Disable networking for a container
If you wish to entirely disable the networking stack for a container, you can use the --network none flag when starting the container. Inside the container, only the loopback device will be created. Consider the following example:
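$ docker run --rm --network none alpine ip addr show
Only the loopback interface (lo) appears in the output; no eth0 is created for the container.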
Conclusion
In conclusion, Docker provides an efficient way to package and run applications in an isolated environment using containers. Its networking capabilities allow containers to connect to each other and to external systems through the host machine while the Docker engine is running. The Container Networking Model provides a well-defined interface for connecting containers, and Docker supports different types of network drivers for specific use cases. Overall, Docker is a powerful tool for developers to streamline their application deployment process.
Subscribe to my newsletter for more content like this. If you enjoyed reading this article, please consider sharing it with your colleagues and friends on social media. Additionally, you can follow me on Twitter for more updates on technology and coding. Thank you for reading!