Hello friends and fellow curious engineers! As part of my new video series on Cisco U., I'm revisiting the topic of container networking. If you want to check out the new video, head on over to @CiscoUtube for Container Networking at Layer 1, or watch it below:
Following up on my last blog post where I explored the basics of the Linux 'ip' command, I'm back with a topic that I've found both interesting and a source of confusion for many people: container networking. Specifically, Docker container networking. I knew as soon as I decided on container networking for my next topic that there was far too much material to cover in a single blog post. I'd need to scope the content down to make it blog-sized. As I considered possible choices for where to spend time, I figured that exploring the default Docker networking behavior and setup was a great place to start. If there is interest in learning more about the topic, I'd be happy to continue and explore other aspects of Docker networking in future posts.
Before I jump right into the technical bits, I wanted to define exactly what I mean by "default Docker networking." Docker offers engineers many options for setting up networking. These options come in the form of different network drivers that are included with Docker itself or added as a networking plugin. There are three options I'd recommend every network engineer be familiar with: host, bridge, and none.
Containers attached to a network using the host driver run without any network isolation from the underlying host that is running the container. That means that applications running within the container have full access to all network interfaces and traffic on the hosting server itself. This option isn't often used, because typical container use cases involve a desire to keep workloads running in containers isolated from each other. However, for use cases when a container is used to simplify the installation/maintenance of an application, and there is a single container running on each host, a Docker host network provides a solution that offers the best network performance and least complexity in the network configuration.
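If you want to see that lack of isolation for yourself, a quick sketch like this one works on most Linux Docker hosts (the alpine image here is just an example; any image with the "ip" utility will do):

# On the host network, the container sees the host's actual interfaces.
# This output should match running "ip addr show" directly on the host.
docker run --rm --network host alpine ip addr show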
Containers attached to a network using the null driver (i.e., none) have no networking created by Docker when starting up. This option is most often used while working on custom networking for an application or service.
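The same sketch run against the none network shows just how empty it is:

# With the null driver, Docker creates no networking for the container.
# Only the loopback interface should appear in the output.
docker run --rm --network none alpine ip addr show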
Containers attached to a network using the bridge driver are placed onto an isolated layer 2 network created on the host. Each container on this isolated network is assigned a network interface and an IP address. Communication between containers on the same bridge network on the host is allowed, the same way two hosts connected to the same switch would be allowed. In fact, a great way to think about a bridge network is as a single-VLAN switch.
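To make the switch analogy concrete, here is a rough sketch you can try on the default bridge network. The container names "c1" and "c2" and the use of "sleep" to keep them running are just illustrative choices:

# Start two containers on the default bridge network.
docker run -d --name c1 alpine sleep 600
docker run -d --name c2 alpine sleep 600

# Look up the IP address Docker assigned to c2 on the bridge.
C2_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' c2)

# c1 can reach c2 directly, just like two hosts on the same switch.
docker exec c1 ping -c 2 "$C2_IP"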
With those basics covered, let's circle back to the question of "what does default Docker networking mean?" Whenever you start a container with "docker run" and do NOT specify a network to attach the container to, it will be placed on a Docker network called "bridge" that uses the bridge driver. This bridge network is created by default when the Docker daemon is installed. And so, the concept of "default Docker networking" in this blog post refers to the network activities that occur within that default "bridge" Docker network.
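You can verify this default behavior yourself with a quick sketch like the one below. The container name "web" and the nginx image are just examples; any image will do:

# Start a container WITHOUT specifying a network...
docker run -d --name web nginx

# ...then check which network(s) it was attached to.
docker inspect web --format '{{json .NetworkSettings.Networks}}'

# The only key in the JSON output should be "bridge".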
I hope that you will want to experiment and play along "at home" with me after you read this blog. While Docker can be installed on just about any operating system today, there are significant differences in the low-level implementation details on networking. I recommend you start experimenting and learning about Docker networking with a standard Linux system, rather than Docker installed on Windows or macOS. Once you understand how Docker networking works natively in Linux, moving to other options is much easier.
If you don't have a Linux system to work with, I recommend looking at the DevNet Expert Candidate Workstation (CWS) image as a resource for candidates working toward the Cisco Certified DevNet Expert lab exam. Even if you aren't preparing for the DevNet Expert certification, it can still be a useful resource. The DevNet Expert CWS comes installed with many standard network automation tools you may want to learn and use - including Docker. You can download the DevNet Expert CWS from the Cisco Learning Network (which is what I'm using for this blog), but a standard installation of Docker Engine (or Docker Desktop) on your Linux system is all you need to get started.
Before we start up any containers on the host, let's explore what networking setup is done on the host just by installing Docker. For this exploration, we'll leverage some of the commands we learned in my blog post on the "ip" command, as well as a few new ones.
First up, let's look at the Docker networks that are set up on my host system.
docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
d6a4ce6ed0fa   bridge    bridge    local
5f12db536980   host      host      local
d35eb80d4a39   none      null      local

All of these are set up by default by Docker. There is one of each of the basic types I discussed above: bridge, host, and none. I mentioned that the "bridge" network is the network that Docker uses by default. But, how can we know that? Let's inspect the bridge network.
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "d6a4ce6ed0fadde2ade3b9ff6f561c5189e9a3be01df959e7c04f514f88241a2",
        "Created": "2022-07-22T19:04:58.026025475Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
There's a lot in this output. To make things easier, I've color-coded a few parts that I want to call out and explain specifically.
First up, take a look at "com.docker.network.bridge.default_bridge": "true" in blue. This configuration option dictates that when containers are created without an assigned network, they will be automatically placed on this bridge network. (If you "inspect" the other networks, you'll find they lack this option.)
Next, locate the option "com.docker.network.bridge.name": "docker0" in red. Much of what Docker does when starting and running containers takes advantage of other features of Linux that have existed for years. Docker's networking parts are no different. This option indicates which "Linux bridge" is doing the actual networking for the containers. In just a moment, we'll look at the "docker0" Linux bridge from outside of Docker - where we can connect some of the dots and expose the "magic."
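As a quick aside, if you'd rather not scan the full JSON for a single option, Docker's built-in Go templating can pull it out directly; a small sketch:

# Extract just the Linux bridge name from the network's options.
docker network inspect bridge --format '{{index .Options "com.docker.network.bridge.name"}}'

# Expected output: docker0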
When a container is started, it must have an IP address assigned on the bridge network, just like any host connected to a switch would. In green, you can see the subnet that will be used to assign IPs and the gateway address that will be configured on each container. You might be wondering where this "gateway" address is used. We'll get to that in a minute.
Now, let's look at what Docker added to the host system to set up this bridge network.
In order to explore the Linux bridge configuration, we'll be using the "brctl" command on Linux. (The CWS doesn't have this command by default, so I installed it.)
root@expert-cws:~# apt-get install bridge-utils
Reading package lists... Done
Building dependency tree
Reading state information... Done
bridge-utils is already the newest version (1.6-2ubuntu1).
0 upgraded, 0 newly installed, 0 to remove and 121 not upgraded.
Using the "brctl" command requires root privileges, so be sure to use "sudo" or log in as root.
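As another aside, if you'd prefer not to install an extra package, the iproute2 tools included with most modern distributions can report similar information. A rough equivalent (the output format differs from brctl):

# List all bridge interfaces on the host using iproute2.
ip -brief link show type bridge

# Show which interfaces are attached ("enslaved") to the docker0 bridge.
ip link show master docker0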
Once installed, we can take a look at the bridges that are currently created on our host.
root@expert-cws:~# brctl show docker0
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02429a0c8aee       no
And look at that: there is a bridge named "docker0".
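And remember the "gateway" address highlighted in green earlier? A quick look at the docker0 interface itself connects that dot (a sketch; your MAC address and interface details will differ):

# The bridge network's gateway address is assigned to the docker0
# interface on the host, making the host the default gateway for containers.
ip addr show docker0

# Expect to see "inet 172.17.0.1/16" in the output.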