How to Create and Use a Dockerized DHCP Server for Your Applications and Networks

Docker is a powerful platform for containerizing and deploying applications, and its networking capabilities allow you to create isolated test networks and manage containerized applications.

In some cases, however, containerized applications require a DHCP server to lease IP addresses to the containers running on the same network. By running a Dockerized DHCP server, you can simplify the deployment and management of your containerized applications, and create virtual networks for practicing networking concepts and configurations. In this article, we will walk through the steps for creating and using a Dockerized DHCP server for your applications and networks.

We will cover how to create a bridge network, run the DHCP server container, and configure your host and other containers to use the DHCP server to obtain IP addresses.

Choose a base image: You will need a base image for your DHCP server. In this example, we will use Alpine Linux, a lightweight distribution that is popular as a base for Docker images.

Install DHCP server software: Next, you will need to install the DHCP server software in your image. In this example, we will use the ISC DHCP server, which is widely used and well supported.

Configure the DHCP server: Once the software is installed, you will need to configure it: specify the range of IP addresses that can be leased, the subnet mask, and other network settings.

Create a Docker network – I have called mine my-network:

docker network create my-network

Create the dhcpd.conf file in the build directory.

########## dhcpd.conf ##########

default-lease-time 259200;
max-lease-time 777600;
option domain-name "your-domain.com";

subnet 192.168.2.0 netmask 255.255.255.0 {
  range 192.168.2.2 192.168.2.250;
  option broadcast-address 192.168.2.255;
  option routers 192.168.2.1;
  option domain-name-servers 192.168.1.1;
}
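Before baking the file into an image, it is worth syntax-checking it. dhcpd's -t flag parses a configuration file without starting the server (this assumes dhcpd is installed on your workstation; you can also run the same check from inside the built container):

dhcpd -t -cf dhcpd.conf

If the file parses cleanly, dhcpd prints its version banner and exits with status 0; otherwise it reports the offending line.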

Create a Dockerfile: With the base image chosen and the configuration written, you can now create a Dockerfile that builds the image. Here is an example Dockerfile:

FROM alpine:latest

# Install the ISC DHCP server from the Alpine package repository
RUN apk add --no-cache dhcp

# Copy our pre-configured dhcpd.conf into the image
COPY dhcpd.conf /etc/dhcpd.conf

# dhcpd refuses to start if its lease database file does not already exist
RUN touch /var/lib/dhcp/dhcpd.leases

# DHCP servers listen on UDP port 67
EXPOSE 67/udp

# Run in the foreground (-f), log to stderr (-d), and read our config file (-cf)
ENTRYPOINT ["dhcpd", "-f", "-d", "--no-pid", "-cf", "/etc/dhcpd.conf"]

In this Dockerfile, we start with the latest Alpine Linux image and install the ISC DHCP server software using the apk package manager. We copy a pre-configured dhcpd.conf file into /etc, create the empty lease database that dhcpd expects at startup, and expose port 67/udp, the port DHCP servers listen on. Finally, we set the ENTRYPOINT to run the dhcpd daemon in the foreground with debug logging, pointing it at our configuration file with -cf.

Build the image: Once you have created the Dockerfile, you can build the image using the docker build command:

docker build -t dhcp-server .

Run the container: With the image built, you can now run a container from the image using the docker run command:

docker run -d --name dhcp-server --net=host dhcp-server

In this command, we run the container in detached mode (-d), give it a name (--name dhcp-server), and use the host network (--net=host) so that the DHCP server can lease IPs to devices on the same network as the host. We specify the image we built in the previous step (dhcp-server) as the image to run.

Your DHCP server container should now be running and leasing IPs to devices on your network. You can view the logs of the container using the docker logs command:

docker logs dhcp-server

And you can stop and remove the container using the docker stop and docker rm commands:

docker stop dhcp-server
docker rm dhcp-server

There are several use cases for having a Docker image running as a DHCP server:

Development and testing: Developers and testers can use a Dockerized DHCP server to create isolated test networks for their applications or services. This allows them to test network configurations and connectivity without interfering with the production network.

Containerized applications: Some containerized applications require a DHCP server to lease IP addresses to the containers running on the same network. By running a Dockerized DHCP server, you can simplify the deployment and management of your containerized applications.

Education and training: DHCP servers are commonly used in networking courses and training programs. By running a Dockerized DHCP server, educators and students can create virtual networks for practicing networking concepts and configurations.

To get hosts to connect to the network served by the Dockerized DHCP server, you will need to configure the hosts to use DHCP to obtain an IP address. This can usually be done by configuring the network interface of the host to use DHCP. The exact steps to do this will depend on the operating system of the host.

For example, on a Linux host that uses ifupdown (Debian and older Ubuntu releases), you can configure the network interface to use DHCP by editing the /etc/network/interfaces file and adding the following lines:

auto eth0
iface eth0 inet dhcp
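Newer Ubuntu releases manage interfaces with netplan rather than /etc/network/interfaces. A minimal equivalent sketch, assuming the interface is named eth0, goes in a file under /etc/netplan/ and is applied with sudo netplan apply:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true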

On a Windows host, you can configure the network interface to use DHCP from the Control Panel: select Network and Sharing Center, then Change adapter settings, right-click the network adapter, select Properties, select Internet Protocol Version 4 (TCP/IPv4), and choose Obtain an IP address automatically.

Once the host is configured to use DHCP, it will automatically obtain an IP address from the Dockerized DHCP server when it is connected to the network.
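To confirm that a lease really came from your server, you can force a renewal and watch the exchange. A quick check on a Linux host, assuming dhclient is installed and the interface is named eth0:

sudo dhclient -r eth0    # release the current lease
sudo dhclient -v eth0    # request a new lease; -v prints the DISCOVER/OFFER exchange
ip addr show eth0        # confirm the leased address

Because we started dhcpd with -d, the same exchange appears in docker logs dhcp-server.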

You might rightly ask how other containers or hosts actually get an IP address from the DHCP server container above. Below is the answer.

You would need to create a Docker network and attach containers to it before they can reach the DHCP server. When you create a Docker network, you can specify that it is a bridge network, which is the default network type for Docker. Containers connected to a bridge network can communicate with each other using their IP addresses. One caveat worth knowing: on a bridge network, Docker’s own IPAM hands each container its address as it joins, so a container only truly leases an address from the dhcpd container if it runs a DHCP client itself; in practice that means a macvlan network, or hosts on the LAN when the server runs with --net=host as shown earlier.

To create a bridge network, you can use the docker network create command. Here’s an example:

docker network create my-network

This command creates a bridge network named my-network. You can then start your DHCP server container on this network by using the --network option when running the container:

docker run -d --name dhcp-server --network my-network dhcp-server

This command starts the DHCP server container in detached mode (-d), names the container dhcp-server, and connects it to the my-network network.

Once your DHCP server container is running on the my-network network, you can start other containers on the same network by using the --network option:

docker run -d --name my-container --network my-network my-image

This command starts a container named my-container from the my-image image, and connects it to the my-network network.

When the container starts up, it receives an IP address on the my-network network (assigned by Docker’s built-in IPAM, as noted above). You can view the IP address of the container by using the docker inspect command:

docker inspect my-container

In the output, look for the IPAddress field under the NetworkSettings section. This will show you the IP address that was assigned to the container by the DHCP server.
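If you only want the address itself, a Go-template format string narrows the output; this minimal example iterates over the container's networks and prints each assigned IP:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-container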

Ubuntu has a good guide on DHCP – https://ubuntu.com/server/docs/network-dhcp

Efficiently Troubleshoot Kubernetes Pods

Kubernetes is an incredibly powerful platform that allows developers to deploy and manage containerized applications at scale. While Kubernetes has many benefits, it can be challenging to troubleshoot issues that arise with individual pods. Fortunately, there are several best practices that developers can follow to quickly and efficiently troubleshoot Kubernetes pods.

Use kubectl describe to gather information
When troubleshooting Kubernetes pods, the first step is to gather as much information as possible. The kubectl describe command is an invaluable tool for this task. This command provides detailed information about the state of a particular pod, including its current status, any events that have occurred, and any containers that are running within the pod.

To use the kubectl describe command, simply run the following command:

kubectl describe pod <pod-name>

This will provide you with a wealth of information about the current state of the pod, including any error messages that may be present.
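If you only need the pod's phase rather than the full description, a jsonpath query keeps the output to a single word (the field path below is part of the standard pod status):

kubectl get pod <pod-name> -o jsonpath='{.status.phase}'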

Check the logs
The next step in troubleshooting Kubernetes pods is to check the logs of the container running within the pod. Kubernetes captures each container’s stdout and stderr and exposes them through the API, so you can read the logs of any container in the cluster with kubectl.

To view the logs of a particular container, you can use the kubectl logs command. For example, to view the logs of the container running within a pod named my-pod, you can run the following command:

kubectl logs my-pod 

This will display the logs of the specified container, allowing you to identify any issues that may be present.
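Two flags are worth knowing here. If the container has restarted, the current logs may not show the crash; --previous retrieves the output of the last terminated instance. In a multi-container pod, -c selects which container's logs to read:

kubectl logs my-pod --previous
kubectl logs my-pod -c <container-name>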

Check resource allocation
Kubernetes allows developers to allocate resources to individual pods, including CPU and memory. If a pod is experiencing issues, it’s important to check the resource allocation to ensure that the pod has enough resources to operate correctly.

To check the resource allocation of a particular pod, you can use the kubectl top command. This command provides real-time information about the CPU and memory usage of each pod within a cluster.

For example, to view the resource allocation of a pod named my-pod, you can run the following command:

kubectl top pod my-pod

This will display the CPU and memory usage of the specified pod, allowing you to identify any resource allocation issues.
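Note that kubectl top depends on the metrics-server add-on being installed in the cluster; without it the command returns an error. When metrics are available, the --containers flag breaks usage down per container within the pod:

kubectl top pod my-pod --containers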

Check network connectivity
Finally, it’s important to check the network connectivity of a pod if it’s experiencing issues. Kubernetes provides several networking options, including service discovery and load balancing, that can be used to ensure that pods can communicate with each other.

To check the network connectivity of a pod, you can use the kubectl exec command to execute commands within the pod’s container. For example, to check the network connectivity of a pod named my-pod, you can run the following command:

kubectl exec my-pod -- curl http://<service-name>

This will execute the curl command within the specified container, allowing you to check the connectivity of the pod.
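This assumes the pod's image ships curl, which many slim images do not. In that case you can test from a throwaway pod instead; the sketch below uses busybox's wget, but any image with an HTTP client works:

kubectl run net-test --rm -it --image=busybox --restart=Never -- wget -qO- http://<service-name>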

In conclusion, Kubernetes is a powerful platform for managing containerized applications, but troubleshooting individual pods can be challenging. By following these best practices, developers can quickly and efficiently troubleshoot Kubernetes pods, ensuring that their applications are running smoothly and without interruption.

Based on the guidelines above, let us walk through the steps.

Everyone wants a healthy Pod. Your applications rely on healthy Pods to deliver services to consumers. Just as life has its challenges, sometimes you may experience issues with your pods that could put them in any of these states:

Pending: Pods can be stuck in a Pending state if there are insufficient resources in the cluster to schedule the pod, or if scheduling is blocked by node affinity/anti-affinity or pod affinity/anti-affinity rules.

CrashLoopBackOff: Pods can be in a CrashLoopBackOff state if the container in the pod is crashing repeatedly. This can be due to issues with the container image, configuration, dependencies, or resources.

Error: Pods can be in an Error state if there is an issue with the pod’s configuration or if the container is unable to start or run due to issues with the container image, configuration, or dependencies.
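To find pods stuck in one of these states across a namespace, a field selector works for Pending (which is a pod phase), while CrashLoopBackOff appears in the STATUS column of kubectl get pods rather than as a phase, so a grep is the usual shortcut:

kubectl get pods --field-selector=status.phase=Pending
kubectl get pods | grep CrashLoopBackOff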

Check pod status: The first step in troubleshooting Kubernetes pods is to check the pod status. You can use the kubectl get pods command to view the status of all pods in a given namespace. If a pod is in a Pending, CrashLoopBackOff, or Error state, it indicates that there is an issue that needs to be resolved.

kubectl get pods
# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-585449566-4rqvm   1/1     Running   0          59s

Check container logs: Once you have identified the pod with issues, the next step is to check the container logs. You can use the kubectl logs command to view the logs of a specific container in a pod. This can help you identify any errors or issues that the container is encountering.

kubectl logs <pod-name>
# kubectl logs nginx-585449566-4rqvm
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/03/06 11:04:49 [notice] 1#1: using the "epoll" event method
2023/03/06 11:04:49 [notice] 1#1: nginx/1.23.3
2023/03/06 11:04:49 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2023/03/06 11:04:49 [notice] 1#1: OS: Linux 5.15.0-56-generic
2023/03/06 11:04:49 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/03/06 11:04:49 [notice] 1#1: start worker processes
2023/03/06 11:04:49 [notice] 1#1: start worker process 29
2023/03/06 11:04:49 [notice] 1#1: start worker process 30
# 

Check container configuration: If the container logs do not provide any clues, the next step is to check the container configuration. You can use the kubectl describe pod command to view the pod configuration, including container image, environment variables, and resource limits. Make sure that the container configuration is correct and that all required resources are available.

kubectl describe pod <pod-name>
Name:             nginx-585449566-4rqvm
Namespace:        default
Priority:         0
Service Account:  default
Node:             vectra-worker2/172.18.0.2
Start Time:       Mon, 06 Mar 2023 11:04:30 +0000
Labels:           app=nginx
                  pod-template-hash=585449566
Annotations:      <none>
Status:           Running
IP:               10.244.2.2
IPs:
  IP:           10.244.2.2
Controlled By:  ReplicaSet/nginx-585449566
Containers:
  nginx:
    Container ID:   containerd://470ecc0771e2fd3a828e228016677c1084792d8a26ad9d100337d9dcc6086597
    Image:          nginx:latest
    Image ID:       docker.io/library/nginx@sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 06 Mar 2023 11:04:48 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6h4xk (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-6h4xk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  8m24s  default-scheduler  Successfully assigned default/nginx-585449566-4rqvm to vectra-worker2
  Normal  Pulling    8m24s  kubelet            Pulling image "nginx:latest"
  Normal  Pulled     8m7s   kubelet            Successfully pulled image "nginx:latest" in 17.21068267s
  Normal  Created    8m7s   kubelet            Created container nginx
  Normal  Started    8m7s   kubelet            Started container nginx
# 

Check cluster events: If the container configuration looks correct, the next step is to check the cluster events. You can use the kubectl get events command to view the events in the cluster. This can help you identify any issues or changes in the cluster that may be affecting the pod.

kubectl get events
LAST SEEN   TYPE      REASON                    OBJECT                       MESSAGE
13m         Normal    Scheduled                 pod/nginx-585449566-4rqvm    Successfully assigned default/nginx-585449566-4rqvm to vectra-worker2
13m         Normal    Pulling                   pod/nginx-585449566-4rqvm    Pulling image "nginx:latest"
13m         Normal    Pulled                    pod/nginx-585449566-4rqvm    Successfully pulled image "nginx:latest" in 17.21068267s
13m         Normal    Created                   pod/nginx-585449566-4rqvm    Created container nginx
13m         Normal    Started                   pod/nginx-585449566-4rqvm    Started container nginx
13m         Normal    SuccessfulCreate          replicaset/nginx-585449566   Created pod: nginx-585449566-4rqvm
13m         Normal    ScalingReplicaSet         deployment/nginx             Scaled up replica set nginx-585449566 to 1
18m         Normal    Starting                  node/vectra-control-plane    Starting kubelet.
18m         Normal    NodeHasSufficientMemory   node/vectra-control-plane    Node vectra-control-plane status is now: NodeHasSufficientMemory
18m         Normal    NodeHasNoDiskPressure     node/vectra-control-plane    Node vectra-control-plane status is now: NodeHasNoDiskPressure
18m         Normal    NodeHasSufficientPID      node/vectra-control-plane    Node vectra-control-plane status is now: NodeHasSufficientPID
18m         Normal    NodeAllocatableEnforced   node/vectra-control-plane    Updated Node Allocatable limit across pods
17m         Normal    Starting                  node/vectra-control-plane    Starting kube-proxy.
17m         Normal    RegisteredNode            node/vectra-control-plane    Node vectra-control-plane event: Registered Node vectra-control-plane in Controller
18m         Normal    Starting                  node/vectra-worker           Starting kubelet.
17m         Normal    NodeHasSufficientMemory   node/vectra-worker           Node vectra-worker status is now: NodeHasSufficientMemory
17m         Normal    NodeHasNoDiskPressure     node/vectra-worker           Node vectra-worker status is now: NodeHasNoDiskPressure
17m         Normal    NodeHasSufficientPID      node/vectra-worker           Node vectra-worker status is now: NodeHasSufficientPID
18m         Normal    NodeAllocatableEnforced   node/vectra-worker           Updated Node Allocatable limit across pods
17m         Warning   Rebooted                  node/vectra-worker           Node vectra-worker has been rebooted, boot id: fd9ad342-d276-46dd-a64e-c852092e755b
17m         Normal    Starting                  node/vectra-worker           Starting kube-proxy.
17m         Normal    RegisteredNode            node/vectra-worker           Node vectra-worker event: Registered Node vectra-worker in Controller
18m         Normal    Starting                  node/vectra-worker2          Starting kubelet.
17m         Normal    NodeHasSufficientMemory   node/vectra-worker2          Node vectra-worker2 status is now: NodeHasSufficientMemory
17m         Normal    NodeHasNoDiskPressure     node/vectra-worker2          Node vectra-worker2 status is now: NodeHasNoDiskPressure
17m         Normal    NodeHasSufficientPID      node/vectra-worker2          Node vectra-worker2 status is now: NodeHasSufficientPID
18m         Normal    NodeAllocatableEnforced   node/vectra-worker2          Updated Node Allocatable limit across pods
17m         Warning   Rebooted                  node/vectra-worker2          Node vectra-worker2 has been rebooted, boot id: fd9ad342-d276-46dd-a64e-c852092e755b
17m         Normal    Starting                  node/vectra-worker2          Starting kube-proxy.
17m         Normal    RegisteredNode            node/vectra-worker2          Node vectra-worker2 event: Registered Node vectra-worker2 in Controller
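On a busy cluster the event list grows quickly. You can narrow it to a single object with a field selector, or sort chronologically so the most recent events appear last:

kubectl get events --field-selector involvedObject.name=nginx-585449566-4rqvm
kubectl get events --sort-by=.metadata.creationTimestamp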

Check network connectivity: If the pod is still experiencing issues, it may be a network connectivity issue. You can use the kubectl exec -it <pod-name> -- /bin/bash command to access the pod’s shell and perform network connectivity tests. For example, you can use ping to test connectivity to other pods or services.

kubectl exec -it <pod-name> -- bash
# kubectl exec -it nginx-585449566-h8htf bash

nginx-585449566-h8htf:/# ls -al
total 88
drwxr-xr-x   1 root root 4096 Mar  6 11:52 .
drwxr-xr-x   1 root root 4096 Mar  6 11:52 ..
drwxr-xr-x   2 root root 4096 Feb 27 00:00 bin
drwxr-xr-x   2 root root 4096 Dec  9 19:15 boot
drwxr-xr-x   5 root root  360 Mar  6 11:52 dev
drwxr-xr-x   1 root root 4096 Mar  1 18:43 docker-entrypoint.d
-rwxrwxr-x   1 root root 1616 Mar  1 18:42 docker-entrypoint.sh
drwxr-xr-x   1 root root 4096 Mar  6 11:52 etc
drwxr-xr-x   2 root root 4096 Dec  9 19:15 home
drwxr-xr-x   1 root root 4096 Feb 27 00:00 lib
drwxr-xr-x   2 root root 4096 Feb 27 00:00 lib64
drwxr-xr-x   2 root root 4096 Feb 27 00:00 media
drwxr-xr-x   2 root root 4096 Feb 27 00:00 mnt
drwxr-xr-x   2 root root 4096 Feb 27 00:00 opt
dr-xr-xr-x 432 root root    0 Mar  6 11:52 proc
drwx------   1 root root 4096 Mar  6 11:57 root
drwxr-xr-x   1 root root 4096 Mar  6 11:52 run
drwxr-xr-x   2 root root 4096 Feb 27 00:00 sbin
drwxr-xr-x   2 root root 4096 Feb 27 00:00 srv
dr-xr-xr-x  13 root root    0 Mar  6 11:52 sys
drwxrwxrwt   1 root root 4096 Mar  1 18:43 tmp
drwxr-xr-x   1 root root 4096 Feb 27 00:00 usr
drwxr-xr-x   1 root root 4096 Feb 27 00:00 var
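The listing above only proves the shell works; actual connectivity tests need networking tools, and slim images often ship without ping or curl. On glibc-based images, getent is usually present and can confirm that a service name resolves:

nginx-585449566-h8htf:/# getent hosts kubernetes.default.svc.cluster.local

If the image has no usable tools at all, recent Kubernetes versions offer kubectl debug, which attaches a throwaway container with its own tooling to the running pod (busybox here is just one choice of toolbox image):

kubectl debug -it nginx-585449566-h8htf --image=busybox --target=nginx -- sh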

Check storage: Finally, if the pod is using storage, make sure that the storage is correctly mounted and accessible by the container. You can use the kubectl describe pod command to view the pod’s volume mounts and their associated storage classes. We saw the full output of this command earlier; the volume-related portion is what matters here:

kubectl describe pod <pod-name>
# kubectl describe pod nginx-585449566-4rqvm
Name:             nginx-585449566-4rqvm
Namespace:        default
Containers:
  nginx:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6h4xk (ro)
Volumes:
  kube-api-access-6h4xk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true

By following these steps, you should be able to identify and resolve most issues with Kubernetes pods.
