Understanding DevSecOps

DevSecOps is a software development methodology that emphasizes the integration of security practices into the software development process, with the goal of delivering secure and resilient software products to users.

In the traditional software development process, security is often an afterthought and addressed only during the later stages of development or in a separate security testing phase. This approach can lead to security vulnerabilities that are expensive and time-consuming to fix, and can also put users’ data and systems at risk.

DevSecOps, on the other hand, integrates security practices into the development process from the very beginning, making security an integral part of the development pipeline. This involves automating security testing, using security-focused code reviews, and implementing security controls and best practices throughout the development process.

Here’s an example of how DevSecOps might work in practice:

Suppose a team of developers is building a new web application for a financial institution. As part of the DevSecOps process, the team implements automated security testing tools that scan the code for common vulnerabilities such as cross-site scripting (XSS) and SQL injection. These tests are run every time new code is committed to the repository, ensuring that any security issues are caught early in the development cycle.
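
As a rough sketch of what such a pipeline stage can look like in practice (this assumes a GitHub Actions workflow and the open-source Trivy scanner; the image name myapp is a placeholder), the build can be made to fail whenever high-severity findings appear:

# Hypothetical CI job: scan every pushed commit and fail on serious findings
name: security-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan image with Trivy
        run: |
          docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
            aquasec/trivy:latest image --exit-code 1 \
            --severity HIGH,CRITICAL myapp:${{ github.sha }}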

In addition, the team conducts security-focused code reviews, with a particular emphasis on authentication and authorization mechanisms to protect against unauthorized access to the system. They also implement security controls such as encryption and access controls to safeguard user data and prevent data breaches.

Throughout the development process, the team works closely with the security team to ensure that the application is designed and built with security in mind. By following a DevSecOps approach, the team is able to deliver a secure and resilient application that meets the needs of the financial institution and its customers, while reducing the risk of security breaches and other vulnerabilities.

Secure Kubernetes Deployment Configuration: One of the key practices in securing Kubernetes is to ensure that the deployment configurations are secure. You should apply best practices in configuring Kubernetes resources like namespaces, services, and network policies. For example, you can use Kubernetes network policies to restrict network traffic between different services in your cluster, reducing the potential attack surface.

Deny all ingress traffic: This policy blocks all incoming traffic to every pod in the namespace; the empty podSelector matches all pods.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Allow traffic only from specific sources: This policy will allow incoming traffic only from a specific set of sources.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-sources
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: allowed-source-app
    ports:
    - protocol: TCP
      port: 80

Deny egress traffic to specific destinations: Kubernetes network policies are allow-lists, so traffic to a destination is blocked by allowing egress everywhere except that destination. This policy permits outgoing traffic from the selected pods to any address except the 10.0.0.0/24 range.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-specific-egress
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/24

Allow traffic only to specific destinations and ports: This policy allows outgoing traffic from the selected pods only to pods labelled app: allowed-destination-app, and only on TCP port 80.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-egress
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: allowed-destination-app
    ports:
    - protocol: TCP
      port: 80

Note that these policies are just examples, and may need to be adapted to your specific use case. Additionally, it’s important to thoroughly test any network policies before implementing them in a production environment.
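
One simple way to test a policy is to launch a throwaway pod and attempt a request from it; this is only a sketch, and it assumes a Service named my-service exists in the namespace under test:

# Start a temporary pod and try to reach the service; the pod is
# deleted automatically when the command exits
kubectl run netpol-test --rm -it --image=busybox --restart=Never -- \
  wget -qO- -T 5 http://my-service

# With a deny policy in place, the request should time out for pods
# that are not in the allowed set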

Use Kubernetes Secrets: Kubernetes Secrets are the native way to store and manage sensitive information, like passwords or tokens, in your Kubernetes cluster. Keep in mind that Secrets are only base64-encoded by default; enable encryption at rest and restrict access with RBAC so that sensitive data is genuinely protected rather than merely obscured. This makes it more difficult for attackers to access sensitive data in the event of a breach.

Implement Kubernetes RBAC: Kubernetes Role-Based Access Control (RBAC) lets you control access to Kubernetes resources at a granular level. By implementing RBAC, you can limit access to your cluster to only the users and services that need it, reducing the risk of unauthorized access.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-service-account

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions"]
    resources: ["deployments"]
    verbs: ["get", "watch", "list"]

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role-binding
subjects:
  - kind: ServiceAccount
    name: example-service-account
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io

In this manifest, we first define a service account named “example-service-account”. We then define a role named “example-role” that specifies the permissions to access pods and deployments. Finally, we define a role binding named “example-role-binding” that binds the service account to the role. This means that any pod that is associated with the service account will have the permissions specified in the role.
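
You can confirm that the binding behaves as intended by asking kubectl to evaluate permissions on behalf of the service account (a quick check, assuming the objects above were created in the default namespace):

# Should print "yes" - the role grants get on pods
kubectl auth can-i get pods --as=system:serviceaccount:default:example-service-account

# Should print "no" - the role does not grant delete
kubectl auth can-i delete pods --as=system:serviceaccount:default:example-service-account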

Regularly Update and Patch Kubernetes: Regularly updating and patching Kubernetes is a critical aspect of DevSecOps. Updates and patches often include important security fixes and vulnerability patches. Make sure to follow the Kubernetes security updates and patch your cluster regularly.
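
How you apply updates depends on how the cluster is managed. For a kubeadm-based cluster, for example, the upgrade path can be previewed before anything is changed (a sketch; managed offerings such as AKS, EKS or GKE have their own upgrade workflows, and the version below is only a placeholder):

# Check the versions currently in use
kubectl version

# On a kubeadm-managed control-plane node: preview available upgrades
kubeadm upgrade plan

# Apply a specific version once the plan looks right (placeholder version)
sudo kubeadm upgrade apply v1.26.3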

Use Kubernetes Admission Controllers: Kubernetes admission controllers are a security feature that lets you define policies that must be enforced before any request to the Kubernetes API is persisted. These policies can be used, for example, to ensure that all containers running in the cluster use only approved images and other resources.
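
Admission controllers are enabled on the API server itself. As a minimal sketch for a self-managed cluster (the plugin list here is only an example; managed clusters typically expose this differently), the relevant kube-apiserver flag looks like this:

# Excerpt from a kube-apiserver invocation (other flags omitted)
kube-apiserver \
  --enable-admission-plugins=NodeRestriction,AlwaysPullImages,PodSecurity

AlwaysPullImages, for instance, forces every pod to re-pull its image, so image-registry credentials are always re-checked.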

Integrate Security into the CI/CD Pipeline: Integrating security into the CI/CD pipeline is a key aspect of DevSecOps. You can use tools like the container image scanners described below as automated pipeline stages, so that every build is checked for vulnerabilities before it reaches the cluster.

DevSecOps practices can be applied to Kubernetes, a popular container orchestration platform, to ensure the security of the applications running on it. Here are some best practices for DevSecOps with Kubernetes, along with examples:

Secure Kubernetes cluster setup: The first step in securing Kubernetes is to ensure that the cluster is set up securely. This involves applying security best practices such as enabling role-based access control (RBAC) and using secure network policies.
Example: Use Kubernetes’ built-in RBAC features to grant permissions only to users who need them. For example, a developer should not have the same level of access as an administrator. Limiting the permissions of each user can help reduce the risk of a security breach.

Continuous security testing: Just as with any software development process, continuous security testing is essential for Kubernetes applications. This includes running automated security scans to detect vulnerabilities in Kubernetes resources, such as deployments and pods.
Example: Use security testing tools like Aqua Security or Sysdig to scan Kubernetes resources for security vulnerabilities, such as misconfigurations or exposed credentials. These tools can help identify vulnerabilities early in the development process, allowing teams to fix them before deployment.

Container image security: The container images used to run Kubernetes applications should be secure and free from vulnerabilities. This involves scanning container images for security vulnerabilities before deployment.
Example: Use container image scanning tools like Clair or Trivy to scan container images for known vulnerabilities. These tools can be integrated into the Kubernetes pipeline to scan images automatically before deployment.
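
For instance, a one-off Trivy scan from the command line might look like the following sketch (nginx:latest stands in for your own image; the non-zero exit code makes it easy to gate a pipeline stage):

# Scan an image and exit with code 1 if HIGH or CRITICAL vulnerabilities are found
trivy image --exit-code 1 --severity HIGH,CRITICAL nginx:latest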

Network security: Kubernetes network security involves securing the communication between Kubernetes resources and ensuring that they are only accessible by authorized users and services.
Example: Use Kubernetes network policies to define and enforce rules around how resources can communicate with each other. For example, you can create a policy that only allows traffic between specific pods or namespaces.

Secure secrets management: Kubernetes allows you to store and manage secrets such as passwords and API keys. It’s important to ensure that these secrets are encrypted and secured.
Example: Use Kubernetes secrets to store sensitive data, such as database credentials, and encrypt them at rest. Use RBAC to ensure that only authorized users and services can access these secrets.

apiVersion: v1
kind: Secret
metadata:
  name: mysecrets
type: Opaque
data:
  username: YWRtaW4=
  password: czNjcjN0

In this example, we are creating a secret called “mysecrets” with two key-value pairs: “username” and “password”. The values are base64-encoded (here, the encodings of admin and s3cr3t) so that they are not stored in plain text in the manifest.
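
The encoded values can be produced with the base64 utility (a quick sketch; the -n flag prevents a trailing newline from being encoded into the secret):

echo -n 'admin' | base64    # YWRtaW4=
echo -n 's3cr3t' | base64   # czNjcjN0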

You can create this manifest file and apply it using the kubectl command line tool. Here is an example of how to create the secret from the manifest file:

kubectl apply -f mysecrets.yaml

Once the secret is created, you can use it in your application by referencing it in your deployment or pod configuration file. For example, if you wanted to use the “username” and “password” values in your application’s environment variables, you could include the following lines in your deployment or pod manifest:

spec:
  containers:
  - name: myapp
    image: myapp:latest
    env:
    - name: MY_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecrets
          key: username
    - name: MY_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecrets
          key: password

This will inject the values from the “mysecrets” secret into your application’s environment variables at runtime, allowing you to securely access sensitive information without exposing it in your code or configuration files.
By following these DevSecOps best practices, teams can ensure that their Kubernetes applications are secure and resilient, and can reduce the risk of security breaches and other vulnerabilities.

Red Hat as usual has a great overview on the subject here – https://www.redhat.com/en/topics/devops/what-is-devsecops

These are some other very useful links;
OWASP: https://owasp.org/
NIST: https://www.nist.gov/
DevSecOps.org: https://www.devsecops.org/
SANS Institute: https://www.sans.org/
Jenkins: https://www.jenkins.io/

How to Create and Use a Dockerized DHCP Server for Your Applications and Networks

Docker is a powerful platform for containerizing and deploying applications, and its networking capabilities allow for the creation of isolated test networks and the management of containerized applications.

In some cases, however, containerized applications require a DHCP server to lease IP addresses to the containers running on the same network. By running a Dockerized DHCP server, you can simplify the deployment and management of your containerized applications, and create virtual networks for practicing networking concepts and configurations. In this article, we will walk through the steps for creating and using a Dockerized DHCP server for your applications and networks.

We will cover how to create a bridge network, run the DHCP server container, and configure your host and other containers to use the DHCP server to obtain IP addresses.

Choose a base image: You will need a base image for your DHCP server. In this example, we will use the Alpine Linux base image, which is a lightweight distribution of Linux that is popular for Docker images.

Install DHCP server software: Next, you will need to install the DHCP server software on your image. In this example, we will use the ISC DHCP server software, which is a widely used and well-supported DHCP server.

Configure the DHCP server: Once you have installed the DHCP server software, you will need to configure it to lease IPs. You will need to specify the range of IP addresses that can be leased, the subnet mask, and other network settings.

Create a Docker network – I have called mine my-network.

docker network create my-network

Create the dhcpd.conf file in the build directory.

########## dhcpd.conf ##########

default-lease-time 259200;
max-lease-time 777600;
option domain-name "your-domain.com";

subnet 192.168.2.0 netmask 255.255.255.0 {
  range 192.168.2.2 192.168.2.250;
  option broadcast-address 192.168.2.255;
  option routers 192.168.2.1;
  option domain-name-servers 192.168.1.1;
}

Create a Dockerfile: With the base image and DHCP server software installed and configured, you can now create a Dockerfile that will build the image. Here is an example Dockerfile:

FROM alpine:latest

# Install the ISC DHCP server
RUN apk add --no-cache dhcp

# Create the leases database the daemon expects at startup
RUN mkdir -p /var/lib/dhcp && touch /var/lib/dhcp/dhcpd.leases

COPY dhcpd.conf /etc/dhcpd.conf

EXPOSE 67/udp

# -f keeps dhcpd in the foreground, -d logs to stderr, and -cf points
# the daemon at the copied configuration file
ENTRYPOINT ["dhcpd", "-f", "-d", "--no-pid", "-cf", "/etc/dhcpd.conf"]

In this Dockerfile, we start with the latest Alpine Linux image, then install the ISC DHCP server software using the apk package manager. We create the empty leases database the daemon expects, and copy a pre-configured dhcpd.conf file to the /etc directory, which contains the configuration settings for the DHCP server. We expose port 67/udp, the port used by DHCP servers to lease IP addresses. Finally, we set the ENTRYPOINT to start the dhcpd daemon in the foreground with the specified configuration file.

Build the image: Once you have created the Dockerfile, you can build the image using the docker build command:

docker build -t dhcp-server .

Run the container: With the image built, you can now run a container from the image using the docker run command:

docker run -d --name dhcp-server --net=host dhcp-server

In this command, we run the container in detached mode (-d), give it a name (--name dhcp-server), and use the host network (--net=host) so that the DHCP server can lease IPs to devices on the same network as the host. We specify the name of the image we built in the previous step (dhcp-server) as the container to run.

Your DHCP server container should now be running and leasing IPs to devices on your network. You can view the logs of the container using the docker logs command:

docker logs dhcp-server

And you can stop and remove the container using the docker stop and docker rm commands:

docker stop dhcp-server
docker rm dhcp-server

There are several use cases for having a Docker image running as a DHCP server:

Development and testing: Developers and testers can use a Dockerized DHCP server to create isolated test networks for their applications or services. This allows them to test network configurations and connectivity without interfering with the production network.

Containerized applications: Some containerized applications require a DHCP server to lease IP addresses to the containers running on the same network. By running a Dockerized DHCP server, you can simplify the deployment and management of your containerized applications.

Education and training: DHCP servers are commonly used in networking courses and training programs. By running a Dockerized DHCP server, educators and students can create virtual networks for practicing networking concepts and configurations.

To get hosts to connect to the network served by the Dockerized DHCP server, you will need to configure the hosts to use DHCP to obtain an IP address. This can usually be done by configuring the network interface of the host to use DHCP. The exact steps to do this will depend on the operating system of the host.

For example, on a Linux host, you can configure the network interface to use DHCP by editing the /etc/network/interfaces file and adding the following lines

auto eth0
iface eth0 inet dhcp
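
On newer Ubuntu releases that use netplan rather than /etc/network/interfaces, the equivalent configuration would look like the following sketch (the file name and interface name vary by system):

# /etc/netplan/01-dhcp.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true

Apply it with sudo netplan apply.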

On a Windows host, you can configure the network interface to use DHCP by going to the Control Panel, selecting Network and Sharing Center, selecting Change adapter settings, right-clicking on the network adapter, selecting Properties, selecting Internet Protocol Version 4 (TCP/IPv4), and selecting Obtain an IP address automatically.

Once the host is configured to use DHCP, it will automatically obtain an IP address from the Dockerized DHCP server when it is connected to the network.

You might rightly ask how these other containers or hosts get an IP address from the above DHCP server container.

Well, below is the answer to your question.

You would need to create a Docker network and attach containers to it before they can receive IP addresses from the DHCP server. When you create a Docker network, you can specify that it is a bridge network, which is the default network type for Docker. Containers connected to a bridge network can communicate with each other using their IP addresses.

To create a bridge network, you can use the docker network create command. Here’s an example:

docker network create my-network

This command creates a bridge network named my-network. You can then start your DHCP server container on this network by using the --network option when running the container:

docker run -d --name dhcp-server --network my-network dhcp-server

This command starts the DHCP server container in detached mode (-d), names the container dhcp-server, and connects it to the my-network network.

Once your DHCP server container is running on the my-network network, you can start other containers on the same network by using the --network option:

docker run -d --name my-container --network my-network my-image

This command starts a container named my-container from the my-image image, and connects it to the my-network network.

When the container starts up, it will obtain an IP address from the DHCP server running on the my-network network. You can view the IP address of the container by using the docker inspect command:

docker inspect my-container

In the output, look for the IPAddress field under the NetworkSettings section. This will show you the IP address that was assigned to the container by the DHCP server.
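
Since docker inspect prints a large JSON document, you can also use its -f flag with a Go template to pull out just the address (a small convenience, not required):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-container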

Ubuntu has a good guide on DHCP – https://ubuntu.com/server/docs/network-dhcp

Efficiently Troubleshoot Kubernetes Pods

Kubernetes is an incredibly powerful platform that allows developers to deploy and manage containerized applications at scale. While Kubernetes has many benefits, it can be challenging to troubleshoot issues that arise with individual pods. Fortunately, there are several best practices that developers can follow to quickly and efficiently troubleshoot Kubernetes pods.

Use kubectl describe to gather information
When troubleshooting Kubernetes pods, the first step is to gather as much information as possible. The kubectl describe command is an invaluable tool for this task. This command provides detailed information about the state of a particular pod, including its current status, any events that have occurred, and any containers that are running within the pod.

To use the kubectl describe command, simply run the following command:

kubectl describe pod <pod-name>

This will provide you with a wealth of information about the current state of the pod, including any error messages that may be present.

Check the logs
The next step in troubleshooting Kubernetes pods is to check the logs of the container running within the pod. Kubernetes provides a centralized logging system that allows developers to access logs from all containers within a cluster.

To view the logs of a particular container, you can use the kubectl logs command. For example, to view the logs of the container running within a pod named my-pod, you can run the following command:

kubectl logs my-pod 

This will display the logs of the specified container, allowing you to identify any issues that may be present.

Check resource allocation
Kubernetes allows developers to allocate resources to individual pods, including CPU and memory. If a pod is experiencing issues, it’s important to check the resource allocation to ensure that the pod has enough resources to operate correctly.

To check the resource allocation of a particular pod, you can use the kubectl top command. This command provides real-time information about the CPU and memory usage of each pod within a cluster.

For example, to view the resource allocation of a pod named my-pod, you can run the following command:

kubectl top pod my-pod

This will display the CPU and memory usage of the specified pod, allowing you to identify any resource allocation issues.
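
Note that kubectl top depends on the metrics-server add-on being installed in the cluster. With it in place, usage can also be broken down per container (a sketch):

# Per-container CPU and memory usage within the pod
kubectl top pod my-pod --containers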

Check network connectivity
Finally, it’s important to check the network connectivity of a pod if it’s experiencing issues. Kubernetes provides several networking options, including service discovery and load balancing, that can be used to ensure that pods can communicate with each other.

To check the network connectivity of a pod, you can use the kubectl exec command to execute commands within the pod’s container. For example, to check the network connectivity of a pod named my-pod, you can run the following command:

kubectl exec my-pod -- curl http://<service-name>

This will execute the curl command within the specified container, allowing you to check the connectivity of the pod.
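
If the HTTP check fails, it can also help to verify that the service name resolves at all. This is a sketch; nslookup ships with busybox-based images but may be missing from others:

kubectl exec my-pod -- nslookup <service-name>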

In conclusion, Kubernetes is a powerful platform for managing containerized applications, but troubleshooting individual pods can be challenging. By following these best practices, developers can quickly and efficiently troubleshoot Kubernetes pods, ensuring that their applications are running smoothly and without interruption.

Based on the guideline above, let us walk through the steps;

Everyone wants a healthy Pod. Your applications rely on a healthy Pod state for successful delivery of services for consumers. Just as life has its challenges, sometimes you may experience issues with your pods which could put them in any of these states.

Pending: Pods can be in a pending state if there are insufficient resources in the cluster to schedule the pod, or if there is a scheduling issue due to resource constraints, node affinity/anti-affinity, or pod affinity/anti-affinity.

CrashLoopBackOff: Pods can be in a CrashLoopBackOff state if the container in the pod is crashing repeatedly. This can be due to issues with the container image, configuration, dependencies, or resources.

Error: Pods can be in an Error state if there is an issue with the pod’s configuration or if the container is unable to start or run due to issues with the container image, configuration, or dependencies.
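
When a pod is stuck in one of these states, the precise reason can be read straight out of its status with a JSONPath query (a sketch; the pod name is a placeholder):

# Prints e.g. CrashLoopBackOff or ImagePullBackOff for the first container
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'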

Check pod status: The first step in troubleshooting Kubernetes pods is to check the pod status. You can use the kubectl get pods command to view the status of all pods in a given namespace. If a pod is in a Pending, CrashLoopBackOff, or Error state, it indicates that there is an issue that needs to be resolved.

kubectl get pods
# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-585449566-4rqvm   1/1     Running   0          59s

Check container logs: Once you have identified the pod with issues, the next step is to check the container logs. You can use the kubectl logs command to view the logs of a specific container in a pod. This can help you identify any errors or issues that the container is encountering.

kubectl logs <pod-name>
# kubectl logs nginx-585449566-4rqvm
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/03/06 11:04:49 [notice] 1#1: using the "epoll" event method
2023/03/06 11:04:49 [notice] 1#1: nginx/1.23.3
2023/03/06 11:04:49 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2023/03/06 11:04:49 [notice] 1#1: OS: Linux 5.15.0-56-generic
2023/03/06 11:04:49 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/03/06 11:04:49 [notice] 1#1: start worker processes
2023/03/06 11:04:49 [notice] 1#1: start worker process 29
2023/03/06 11:04:49 [notice] 1#1: start worker process 30
# 

Check container configuration: If the container logs do not provide any clues, the next step is to check the container configuration. You can use the kubectl describe pod command to view the pod configuration, including container image, environment variables, and resource limits. Make sure that the container configuration is correct and that all required resources are available.

kubectl describe pod <pod-name>
Name:             nginx-585449566-4rqvm
Namespace:        default
Priority:         0
Service Account:  default
Node:             vectra-worker2/172.18.0.2
Start Time:       Mon, 06 Mar 2023 11:04:30 +0000
Labels:           app=nginx
                  pod-template-hash=585449566
Annotations:      <none>
Status:           Running
IP:               10.244.2.2
IPs:
  IP:           10.244.2.2
Controlled By:  ReplicaSet/nginx-585449566
Containers:
  nginx:
    Container ID:   containerd://470ecc0771e2fd3a828e228016677c1084792d8a26ad9d100337d9dcc6086597
    Image:          nginx:latest
    Image ID:       docker.io/library/nginx@sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 06 Mar 2023 11:04:48 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6h4xk (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-6h4xk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  8m24s  default-scheduler  Successfully assigned default/nginx-585449566-4rqvm to vectra-worker2
  Normal  Pulling    8m24s  kubelet            Pulling image "nginx:latest"
  Normal  Pulled     8m7s   kubelet            Successfully pulled image "nginx:latest" in 17.21068267s
  Normal  Created    8m7s   kubelet            Created container nginx
  Normal  Started    8m7s   kubelet            Started container nginx
# 

Check cluster events: If the container configuration looks correct, the next step is to check the cluster events. You can use the kubectl get events command to view the events in the cluster. This can help you identify any issues or changes in the cluster that may be affecting the pod.

kubectl get events
LAST SEEN   TYPE      REASON                    OBJECT                       MESSAGE
13m         Normal    Scheduled                 pod/nginx-585449566-4rqvm    Successfully assigned default/nginx-585449566-4rqvm to vectra-worker2
13m         Normal    Pulling                   pod/nginx-585449566-4rqvm    Pulling image "nginx:latest"
13m         Normal    Pulled                    pod/nginx-585449566-4rqvm    Successfully pulled image "nginx:latest" in 17.21068267s
13m         Normal    Created                   pod/nginx-585449566-4rqvm    Created container nginx
13m         Normal    Started                   pod/nginx-585449566-4rqvm    Started container nginx
13m         Normal    SuccessfulCreate          replicaset/nginx-585449566   Created pod: nginx-585449566-4rqvm
13m         Normal    ScalingReplicaSet         deployment/nginx             Scaled up replica set nginx-585449566 to 1
18m         Normal    Starting                  node/vectra-control-plane    Starting kubelet.
18m         Normal    NodeHasSufficientMemory   node/vectra-control-plane    Node vectra-control-plane status is now: NodeHasSufficientMemory
18m         Normal    NodeHasNoDiskPressure     node/vectra-control-plane    Node vectra-control-plane status is now: NodeHasNoDiskPressure
18m         Normal    NodeHasSufficientPID      node/vectra-control-plane    Node vectra-control-plane status is now: NodeHasSufficientPID
18m         Normal    NodeAllocatableEnforced   node/vectra-control-plane    Updated Node Allocatable limit across pods
17m         Normal    Starting                  node/vectra-control-plane    Starting kube-proxy.
17m         Normal    RegisteredNode            node/vectra-control-plane    Node vectra-control-plane event: Registered Node vectra-control-plane in Controller
18m         Normal    Starting                  node/vectra-worker           Starting kubelet.
17m         Normal    NodeHasSufficientMemory   node/vectra-worker           Node vectra-worker status is now: NodeHasSufficientMemory
17m         Normal    NodeHasNoDiskPressure     node/vectra-worker           Node vectra-worker status is now: NodeHasNoDiskPressure
17m         Normal    NodeHasSufficientPID      node/vectra-worker           Node vectra-worker status is now: NodeHasSufficientPID
18m         Normal    NodeAllocatableEnforced   node/vectra-worker           Updated Node Allocatable limit across pods
17m         Warning   Rebooted                  node/vectra-worker           Node vectra-worker has been rebooted, boot id: fd9ad342-d276-46dd-a64e-c852092e755b
17m         Normal    Starting                  node/vectra-worker           Starting kube-proxy.
17m         Normal    RegisteredNode            node/vectra-worker           Node vectra-worker event: Registered Node vectra-worker in Controller
18m         Normal    Starting                  node/vectra-worker2          Starting kubelet.
17m         Normal    NodeHasSufficientMemory   node/vectra-worker2          Node vectra-worker2 status is now: NodeHasSufficientMemory
17m         Normal    NodeHasNoDiskPressure     node/vectra-worker2          Node vectra-worker2 status is now: NodeHasNoDiskPressure
17m         Normal    NodeHasSufficientPID      node/vectra-worker2          Node vectra-worker2 status is now: NodeHasSufficientPID
18m         Normal    NodeAllocatableEnforced   node/vectra-worker2          Updated Node Allocatable limit across pods
17m         Warning   Rebooted                  node/vectra-worker2          Node vectra-worker2 has been rebooted, boot id: fd9ad342-d276-46dd-a64e-c852092e755b
17m         Normal    Starting                  node/vectra-worker2          Starting kube-proxy.
17m         Normal    RegisteredNode            node/vectra-worker2          Node vectra-worker2 event: Registered Node vectra-worker2 in Controller

Check network connectivity: If the pod is still experiencing issues, it may be a network connectivity issue. You can use the kubectl exec -it <pod-name> -- /bin/bash command to access the pod shell and perform network connectivity tests. For example, you can use ping to test connectivity to other pods or services.

kubectl exec -it <pod-name> -- bash
# kubectl exec -it nginx-585449566-h8htf bash

nginx-585449566-h8htf:/# ls -al
total 88
drwxr-xr-x   1 root root 4096 Mar  6 11:52 .
drwxr-xr-x   1 root root 4096 Mar  6 11:52 ..
drwxr-xr-x   2 root root 4096 Feb 27 00:00 bin
drwxr-xr-x   2 root root 4096 Dec  9 19:15 boot
drwxr-xr-x   5 root root  360 Mar  6 11:52 dev
drwxr-xr-x   1 root root 4096 Mar  1 18:43 docker-entrypoint.d
-rwxrwxr-x   1 root root 1616 Mar  1 18:42 docker-entrypoint.sh
drwxr-xr-x   1 root root 4096 Mar  6 11:52 etc
drwxr-xr-x   2 root root 4096 Dec  9 19:15 home
drwxr-xr-x   1 root root 4096 Feb 27 00:00 lib
drwxr-xr-x   2 root root 4096 Feb 27 00:00 lib64
drwxr-xr-x   2 root root 4096 Feb 27 00:00 media
drwxr-xr-x   2 root root 4096 Feb 27 00:00 mnt
drwxr-xr-x   2 root root 4096 Feb 27 00:00 opt
dr-xr-xr-x 432 root root    0 Mar  6 11:52 proc
drwx------   1 root root 4096 Mar  6 11:57 root
drwxr-xr-x   1 root root 4096 Mar  6 11:52 run
drwxr-xr-x   2 root root 4096 Feb 27 00:00 sbin
drwxr-xr-x   2 root root 4096 Feb 27 00:00 srv
dr-xr-xr-x  13 root root    0 Mar  6 11:52 sys
drwxrwxrwt   1 root root 4096 Mar  1 18:43 tmp
drwxr-xr-x   1 root root 4096 Feb 27 00:00 usr
drwxr-xr-x   1 root root 4096 Feb 27 00:00 var

Check storage: Finally, if the pod is using storage, make sure that the storage is correctly mounted and accessible by the container. You can use the kubectl describe pod command to view the pod’s volume mounts and their associated storage classes.

kubectl describe pod <pod-name>
# kubectl describe pod nginx-585449566-4rqvm
Name:             nginx-585449566-4rqvm
Namespace:        default
Priority:         0
Service Account:  default
Node:             vectra-worker2/172.18.0.2
Start Time:       Mon, 06 Mar 2023 11:04:30 +0000
Labels:           app=nginx
                  pod-template-hash=585449566
Annotations:      <none>
Status:           Running
IP:               10.244.2.2
IPs:
  IP:           10.244.2.2
Controlled By:  ReplicaSet/nginx-585449566
Containers:
  nginx:
    Container ID:   containerd://470ecc0771e2fd3a828e228016677c1084792d8a26ad9d100337d9dcc6086597
    Image:          nginx:latest
    Image ID:       docker.io/library/nginx@sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 06 Mar 2023 11:04:48 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6h4xk (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-6h4xk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  29m   default-scheduler  Successfully assigned default/nginx-585449566-4rqvm to vectra-worker2
  Normal  Pulling    29m   kubelet            Pulling image "nginx:latest"
  Normal  Pulled     28m   kubelet            Successfully pulled image "nginx:latest" in 17.21068267s
  Normal  Created    28m   kubelet            Created container nginx
  Normal  Started    28m   kubelet            Started container nginx
# 

By following these steps, you should be able to identify and resolve most issues with Kubernetes pods.

Create a Kubernetes Multi-Node Cluster with Kind

Did you know you could create a Kubernetes multi-node cluster with Kind without much bother?

The power of clustering goes a long way to reinforce technical knowledge and hands-on application of technologies, and it is essential for any serious engineer to build full-scale labs covering best-known architectures. Learning Kubernetes is best when you are able to build a production-grade cluster replica. Minikube has always been helpful, but it does not give you the feel of a real-world multi-node architecture. This is where Kind comes in, bringing the power of real hands-on experience without dedicating as much hardware and as many resources as you would when using virtual machines to create a Kubernetes cluster.

Install Prerequisites

apt-get install curl

Install Docker


sudo apt-get update

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

Give user permissions to query Docker

sudo usermod -aG docker $USER
Log out and back in, or restart your hosts (nodes), for the group membership to take effect.

Install Kind on Linux

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin

Install Kubectl

We’ll need kubectl to work with the Kubernetes cluster, in case it’s not already installed. For this, we can use the commands below:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

Set Permissions

chmod +x ./kubectl

Move kubectl to local

sudo mv ./kubectl /usr/local/bin/kubectl

Create Multi-Node Clusters – 1 Master and 2 Worker Nodes

Create Cluster Manifest File – cluster-config.yaml


# A sample multi-node cluster config file
# A three node (two workers, one controller) cluster config
# To add more worker nodes, add another role: worker to the list
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: 
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"    
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker


root@cluster-vm:/home/cluster# kind create cluster --name=azvmms-node --config=single-cluster.yaml
Creating cluster "azvmms-node" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-azvmms-node"
You can now use your cluster with:

kubectl cluster-info --context kind-azvmms-node

Creates the Control Plane

- role: control-plane

Creates the 2 Worker Nodes

- role: worker
- role: worker

Create Cluster

kind create cluster --config=cluster-config.yaml
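
Once creation finishes, you can confirm that all three nodes registered; the output below is illustrative, with node names derived from the cluster name in your config:

kubectl get nodes
# NAME                   STATUS   ROLES                  AGE   VERSION
# <name>-control-plane   Ready    control-plane,master   2m    v1.21.1
# <name>-worker          Ready    <none>                 90s   v1.21.1
# <name>-worker2         Ready    <none>                 90s   v1.21.1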

Check Pods

$ kubectl get pods -A -o wide
NAMESPACE            NAME                                            READY   STATUS    RESTARTS   AGE   IP           NODE                    NOMINATED NODE   READINESS GATES
kube-system          coredns-558bd4d5db-2gszr                        1/1     Running   0          91m   10.244.0.3   spacers-control-plane   <none>           <none>
kube-system          coredns-558bd4d5db-46rkp                        1/1     Running   0          91m   10.244.0.2   spacers-control-plane   <none>           <none>
kube-system          etcd-spacers-control-plane                      1/1     Running   0          92m   172.18.0.4   spacers-control-plane   <none>           <none>
kube-system          kindnet-9jmwv                                   1/1     Running   0          91m   172.18.0.2   spacers-worker2         <none>           <none>
kube-system          kindnet-c2jrx                                   1/1     Running   0          91m   172.18.0.4   spacers-control-plane   <none>           <none>
kube-system          kindnet-hlhmx                                   1/1     Running   0          91m   172.18.0.3   spacers-worker          <none>           <none>
kube-system          kube-apiserver-spacers-control-plane            1/1     Running   0          92m   172.18.0.4   spacers-control-plane   <none>           <none>
kube-system          kube-controller-manager-spacers-control-plane   1/1     Running   0          91m   172.18.0.4   spacers-control-plane   <none>           <none>
kube-system          kube-proxy-97q94                                1/1     Running   0          91m   172.18.0.3   spacers-worker          <none>           <none>
kube-system          kube-proxy-t4ltb                                1/1     Running   0          91m   172.18.0.4   spacers-control-plane   <none>           <none>
kube-system          kube-proxy-xrd5l                                1/1     Running   0          91m   172.18.0.2   spacers-worker2         <none>           <none>
kube-system          kube-scheduler-spacers-control-plane            1/1     Running   0          91m   172.18.0.4   spacers-control-plane   <none>           <none>
local-path-storage   local-path-provisioner-547f784dff-5dgp6         1/1     Running   0          91m   10.244.0.4   spacers-control-plane   <none>           <none>

Deploy a Sample App

kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml

Test Access

https://localhost:30779/
kubectl create deployment my-nginx --image=nginx --replicas=2 --port=8080

Delete Cluster
kind delete clusters <cluster-name>

How to Create Azure AKS Cluster

Kubernetes has come to change the world of Microservices. Azure makes Kubernetes Orchestration a breeze with their Azure Kubernetes Services. In this step by step tutorial, I show you how to create your first Kubernetes Cluster on Azure. Before we proceed, may I indulge you with a post I created a little while ago on Azure Networking?

Step 1: create an azure kubernetes service resource on azure

Step 2: create an azure kubernetes service cluster

Step 3: create a kubernetes cluster

Step 4: create a kubernetes cluster – specify resource group

Step 5: create a kubernetes cluster – name

Step 6: create a kubernetes cluster – kubernetes version

Step 7: create a kubernetes cluster – choose a vm size

Step 8: create a kubernetes cluster – enable virtual machine scale sets

Step 9: create a kubernetes cluster – validation passed

Step 10: create a kubernetes cluster – deployment

Step 11: create a kubernetes cluster – deployment complete

How to Create AKS Cluster – Working via the Shell

Connect to your cluster using command-line tooling to interact directly with it using kubectl, the command-line tool for Kubernetes. Kubectl is available within the Azure Cloud Shell by default and can also be installed locally.

az account set --subscription 938f58d6-a922-40d0-b7b2-7068c5392eaf 

az aks get-credentials --resource-group learn-503b25e2-82da-40c1-a257-35aeaa9614ae --name aks-workload-westus

# List all deployments in all namespaces

kubectl get deployments --all-namespaces=true

# List all deployments in a specific namespace
# Format: kubectl get deployments --namespace <namespace>

kubectl get deployments --namespace kube-system

# List details about a specific deployment
# Format: kubectl describe deployment <deployment-name> --namespace <namespace>

kubectl describe deployment my-dep --namespace kube-system

# List pods using a specific label
# Format: kubectl get pods -l <label-key>=<label-value> --all-namespaces=true

kubectl get pods -l app=nginx --all-namespaces=true

# Get logs for all pods with a specific label
# Format: kubectl logs -l <label-key>=<label-value>

kubectl logs -l app=nginx --namespace kube-system

With your AKS Cluster now deployed, kubernetes commands can now be issued.


azure_portal@Azure:~$ kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
azure-ip-masq-agent-cfz8r             1/1     Running   0          53m
coredns-autoscaler-54d55c8b75-d7xjm   1/1     Running   0          54m
coredns-d4866bcb7-4wzr8               1/1     Running   0          54m
coredns-d4866bcb7-n4jf8               1/1     Running   0          53m
kube-proxy-5xpvw                      1/1     Running   0          53m
metrics-server-569f6547dd-k5l97       1/1     Running   0          54m
tunnelfront-9bfd7cd94-9hh2c           1/1     Running   0          54m
azure_portal@Azure:~$

azure_portal@Azure:~$ kubectl get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-agentpool-29375834-vmss000001   Ready    agent   55m   v1.20.9

azure_portal@Azure:~$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   57m

azure_portal@Azure:~$ kubectl get deployments -n kube-system
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
coredns              2/2     2            2           57m
coredns-autoscaler   1/1     1            1           57m
metrics-server       1/1     1            1           57m
tunnelfront          1/1     1            1           57m
azure_portal@Azure:~$

azure_portal@Azure:~$ kubectl get rs  -n kube-system
NAME                            DESIRED   CURRENT   READY   AGE
coredns-autoscaler-54d55c8b75   1         1         1       58m
coredns-d4866bcb7               2         2         2       58m
metrics-server-569f6547dd       1         1         1       58m
tunnelfront-9bfd7cd94           1         1         1       58m

azure_portal@Azure:~$ kubectl get cm  -n kube-system
NAME                                 DATA   AGE
azure-ip-masq-agent-config           1      59m
cluster-autoscaler-status            1      59m
coredns                              1      59m
coredns-autoscaler                   1      58m
coredns-custom                       0      59m
extension-apiserver-authentication   6      59m
kube-root-ca.crt                     1      59m
overlay-upgrade-data                 4      59m
tunnelfront-kubecfg                  1      59m
azure_portal@Azure:~$

Follow Microsoft’s Lab on creating Azure Kubernetes Services here.

Docker Communication Between Containers

If you want to be able to ping or otherwise reach a running Docker container from another container by simply using the container name rather than an IP address, DNS must work well. Docker natively provides DNS capability so that containers on the same network can communicate using their DNS names, since IP addresses change as containers come and go.

Basics of Docker Networking:

Docker Network Defaults:
Each container connected to a private virtual network "bridge"
Each virtual network routes through NAT firewall on host IP
All containers on a virtual network can talk to each other without -p

Docker Network Best Practices:
Create a new virtual network for each app:

  • network "web_app_network" for mysql and php or apache containers
  • network "web_api_network" for mongo and nodejs containers

    Step 1:
    Let us begin by creating two containers; I am using the NGINX image.

    You can download the nginx image to the local cache.

    $>docker image ls 
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    httpd               latest              417af7dc28bc        7 days ago          138MB
    nginx               latest              7e4d58f0e5f3        12 days ago         133MB
    mysql               latest              e1d7dc9731da        13 days ago         544MB
    alpine              latest              a24bb4013296        3 months ago        5.57MB
    

    Get the image by typing “docker image pull nginx”

    docker container run -d --name container1  -p 8080:80 nginx
    docker container run -d --name container2  -p 8088:80 nginx
    

    Let us verify

    $docker container ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                  NAMES
    a1147ca12e97        nginx               "/docker-entrypoint.…"   1 hours ago          Up 10 minutes       0.0.0.0:8080->80/tcp   container1
    0e364de8f313        nginx               "/docker-entrypoint.…"   1 hours ago          Up 10 minutes       0.0.0.0:8088->80/tcp   container2
    

    Important note: it is of utmost importance to explicitly specify a name with --name for your containers, since the commands that follow reference the containers by these names rather than the auto-generated names that Docker assigns.

    Step 2:
    Create a new network:

    docker network create nginx-network

    Verify that this network is listed among the Docker networks:

    C:\>docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    25d4ec1592eb        bridge              bridge              local
    ba0017e88f28        host                host                local
    06658aee8f0c        nginx-network       bridge              local
    70be58caf984        none                null                local
    33668dd3b17f        webservices         bridge              local
    

    Step 3:
    Connect your containers to the network:

    docker network connect nginx-network container1
    docker network connect nginx-network container2
    

    Step 4:
    Verify if your containers are part of the newly created network (nginx-network):

    docker network inspect nginx-network
    
      "ConfigOnly": false,
            "Containers": {
                "0e364de8f3134e242e513a6cf3da4b69bb38fb7ef17213a309a7bda27b423b3a": {
                    "Name": "container1",
                    "EndpointID": "fdf973b2840adea185bec76c8684bb1c404a21ccb9947c16c58119b350aebb36",
                    "MacAddress": "02:42:ac:12:00:03",
                    "IPv4Address": "172.18.0.3/16",
                    "IPv6Address": ""
                },
                "a1147ca12e97cb054af40ab67255d9dd5817d7197695e3756ee5fd614195de77": {
                    "Name": "container2",
                    "EndpointID": "6edb537acdc3b1ec6ee233993d9e6d28cd8a62055600300d0e77e48c94ee9a88",
                    "MacAddress": "02:42:ac:12:00:02",
                    "IPv4Address": "172.18.0.2/16",
                    "IPv6Address": ""
                }
    

    Install ping in the containers, as not all images come prepackaged with the ping utility.

    Run a shell in container1 and install ping. You can do so by entering the bash shell of the container.

    Two commands are needed. You can run them line by line as per Instruction 1 below, or as a single line as per Instruction 2:

    Instruction 1:

     
    apt-get update
    apt-get install iputils-ping
    

    Instruction 2:

    $docker container exec -it container1 bash
    root@a1147ca12e97:/#
    root@a1147ca12e97:/# apt-get update && apt-get install iputils-ping
    

    Repeat the above step for container2.

    Final Step:
    Finally test the connection between container1 and container2.

    $docker container exec -it container1 ping container2
    PING container2 (172.18.0.3) 56(84) bytes of data.
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=1 ttl=64 time=0.050 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=2 ttl=64 time=0.043 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=3 ttl=64 time=0.142 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=4 ttl=64 time=0.145 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=5 ttl=64 time=0.142 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=6 ttl=64 time=0.066 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=7 ttl=64 time=0.047 ms
    ^C
    --- container2 ping statistics ---
    7 packets transmitted, 7 received, 0% packet loss, time 129ms
    rtt min/avg/max/mdev = 0.043/0.090/0.145/0.047 ms
    

    Hope you have enjoyed this article. Look out for more on this website. Bookmark it by pressing (CTRL + D).

    Follow this link to learn more about the amazing nginx docker image.
