How to Use the Concept of Networking in Docker?


Introduction

We gave a basic introduction to Docker in an earlier blog. Docker provides OS-independent containers that run in an isolated environment. Docker networking enables these containers to communicate with other containers and with external endpoints such as the internet, for example to fetch application updates. With an understanding of networking in Docker, we can create custom networks for our containers as per our requirements.
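
Before building custom networks, it is worth seeing what Docker sets up out of the box; every fresh installation already ships with a bridge, a host, and a none network:

    # List the networks Docker creates by default
    docker network ls

    # Inspect the default bridge network to see its subnet,
    # gateway, and currently connected containers
    docker network inspect bridge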

[Image: Basic Docker Architecture]

Container Network Model (CNM)

The CNM specifies the steps for providing networking to containers, while supplying an abstraction that supports multiple network drivers. libnetwork, an open-source Go library, implements the CNM in Docker.

[Image: Container Network Model]

The IPAM plugins and network plugins are the CNM interfaces. An IPAM plugin creates/deletes address pools and allocates/deallocates container IP addresses, while a network plugin creates/deletes networks and adds/removes containers from networks.
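
The IPAM side is visible from the CLI: docker network create accepts flags that configure the address pool an IPAM driver manages. A minimal sketch; the subnet values and the name demo-net are illustrative:

    # Create a bridge network with an explicit IPAM configuration:
    # a /16 address pool, a /24 range to allocate container IPs from,
    # and a gateway address
    docker network create \
      --driver bridge \
      --subnet 172.28.0.0/16 \
      --ip-range 172.28.5.0/24 \
      --gateway 172.28.5.254 \
      demo-net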

Objects in CNM

Network Controller

The Network Controller provides the entry point into libnetwork, exposing simple APIs for users (such as the Docker Engine) to allocate and manage networks. It allows the user to bind a particular driver to a given network.
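
On the command line, this binding of a driver to a network surfaces as the --driver (or -d) flag; the network name my-net below is just a placeholder:

    # Bind the built-in bridge driver to a newly created network
    docker network create --driver bridge my-net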

Driver

A Driver owns a network and is responsible for managing it. Drivers can be either built-in (such as Bridge, Host, None, and Overlay) or remote (from plugin providers).
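
Remote drivers are distributed as Docker plugins. The pattern, sketched with a hypothetical plugin name vendor/net-plugin (substitute a real plugin from Docker Hub), looks like this:

    # Install a third-party network plugin (the name is a placeholder)
    docker plugin install vendor/net-plugin:latest

    # Create a network backed by the remote driver
    docker network create --driver vendor/net-plugin:latest my-plugin-net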

Network

A Network is a group of Endpoints that are able to communicate with each other directly. The Network Controller provides APIs to create and manage Network objects. Whenever a Network is created or updated, the corresponding Driver is notified of the event.

Endpoint

An Endpoint represents a service endpoint. It provides connectivity between the services exposed by a container and the services provided by the other containers in the network. The Network object provides APIs to create and manage endpoints. An endpoint can be attached to only one network.
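
In CLI terms, attaching a container to an additional network creates a new endpoint for it; detaching removes the endpoint. The names backend-net and web are illustrative:

    # Attach a running container to another network,
    # creating a new endpoint on that network
    docker network connect backend-net web

    # Detach it again, removing that endpoint
    docker network disconnect backend-net web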

Sandbox

A Sandbox object represents a container’s network configuration, such as its IP address, MAC address, routes, and DNS entries. A Sandbox is created when the user requests endpoint creation on a network handled by a driver. The driver allocates the required network resources (such as the IP address) and passes the information, called SandboxInfo, back to libnetwork, which uses OS-specific constructs to populate the network configuration into the container. A sandbox can have multiple endpoints attached to different networks.
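
You can observe one sandbox holding multiple endpoints by connecting a container to two networks and listing its interfaces. A sketch, with made-up names net-a, net-b, and box:

    # Start a container on one network, then connect it to a second
    docker network create net-a
    docker network create net-b
    docker run -d --name box --network net-a alpine sleep 1d
    docker network connect net-b box

    # The sandbox now holds two endpoints: one interface per network
    docker exec box ip addr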

Network Drivers

Now we will see different Network Drivers used in Docker networking.

None (null):

When a container requires a truly isolated environment with no network to connect to, we use the null network. No container can access this container, or vice versa. A container on this network gets only a loopback device.

[Image: Null network]
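
A quick way to verify this isolation, using the public alpine image:

    # Run a container with no networking; listing its interfaces
    # shows only the loopback device, with no eth0 to reach anything
    docker run --rm --network none alpine ip addr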

Host:

This driver is used when a container is not to be isolated from the Docker host: the container shares the host’s networking namespace and does not get its own IP address allocated. Consequently, you cannot run multiple web containers on the same host on the same port, because that port is now common to all containers in the host network. The host networking driver only works on Linux hosts and is not supported on Docker Desktop. As the image below shows, containers using the same port would not run together.

[Image: Host network]
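
A sketch of the port clash on a Linux host, using the public nginx image:

    # The first container binds port 80 directly on the host
    docker run -d --network host --name web1 nginx

    # A second nginx on the host network starts, but its nginx
    # process cannot bind port 80 (already taken) and exits
    docker run -d --network host --name web2 nginx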

Macvlan:

Macvlan allows you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon then routes traffic to containers by their MAC addresses. The macvlan driver is the best choice when containers are expected to be directly connected to the physical network, rather than routed through the Docker host’s network stack.

[Image: Macvlan Network]
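
A minimal sketch, assuming the host’s physical interface is eth0 and the LAN uses 192.168.1.0/24; adjust both for your environment:

    # Create a macvlan network attached to the physical interface
    docker network create -d macvlan \
      --subnet 192.168.1.0/24 \
      --gateway 192.168.1.1 \
      -o parent=eth0 \
      my-macvlan

    # A container on it gets its own MAC and appears on the LAN
    docker run --rm --network my-macvlan alpine ip addr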

IPvlan:

Similar to Macvlan, it assigns IP addresses to containers and makes them appear as physical devices. The difference is that it does not create multiple MAC addresses: every container shares the MAC address of the primary (parent) network interface.

[Image: Ipvlan Network]
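
The creation sketch mirrors macvlan; again, eth0 and the subnet are assumptions to adapt:

    # Create an ipvlan network in L2 mode on the parent interface;
    # containers share eth0's MAC address but get their own IPs
    docker network create -d ipvlan \
      --subnet 192.168.1.0/24 \
      --gateway 192.168.1.1 \
      -o parent=eth0 \
      -o ipvlan_mode=l2 \
      my-ipvlan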

Overlay:

This driver is used where we require orchestration of multiple containers. It creates an internal private network that spans all the nodes participating in the swarm cluster. Overlay networks thus facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons.

[Image: Overlay Network]
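
A sketch, run on a swarm manager node; the name my-overlay is illustrative, and the --attachable flag lets standalone containers join, matching the scenario above:

    # Initialise swarm mode (required for overlay networks)
    docker swarm init

    # Create an overlay network that standalone containers may attach to
    docker network create -d overlay --attachable my-overlay

    # A service on this network is reachable across swarm nodes
    docker service create --name web --network my-overlay nginx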

Bridge:

The bridge network is the default private internal network created by Docker on the host. All containers get an internal IP address and can access each other using these internal IPs. Bridge networks are usually used when your applications run in standalone containers that need to communicate. Conceptually, a bridge combines the roles of a switch connecting the containers and a router carrying their traffic out of the host.

[Image: Bridge Network]
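
A sketch of two standalone containers talking over a user-defined bridge; the names app-net, db, and app are made up, and user-defined bridges also give containers DNS resolution by name:

    # Create a user-defined bridge and start two containers on it
    docker network create -d bridge app-net
    docker run -d --name db --network app-net redis
    docker run -d --name app --network app-net alpine sleep 1d

    # Containers on the same user-defined bridge resolve each other by name
    docker exec app ping -c 1 db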

Conclusion

Hope you were able to learn something new. This blog aimed to help you relate conventional networking to container networks. Feel free to give feedback and ask questions in the comments.


Written by

Vaibhav Kumar is a DevOps Engineer at Knoldus (part of Nashtech) with experience in architecting and automating integral deployments over infrastructure. He is proficient in Jenkins, Git, and AWS, and in developing CI pipelines; he performs configuration management using Ansible and infrastructure management using Terraform, and likes to script and develop in Python. Beyond AWS, he has experience with Google Cloud and Azure cloud services; beyond Jenkins, with CI/CD in Azure Pipelines, GitHub Actions, and TeamCity. He loves to explore new technologies and ways to improve work with automation.
