Understanding DevSecOps

DevSecOps is a software development methodology that emphasizes the integration of security practices into the software development process, with the goal of delivering secure and resilient software products to users.

In the traditional software development process, security is often an afterthought and addressed only during the later stages of development or in a separate security testing phase. This approach can lead to security vulnerabilities that are expensive and time-consuming to fix, and can also put users’ data and systems at risk.

DevSecOps, on the other hand, integrates security practices into the development process from the very beginning, making security an integral part of the development pipeline. This involves automating security testing, using security-focused code reviews, and implementing security controls and best practices throughout the development process.

Here’s an example of how DevSecOps might work in practice:

Suppose a team of developers is building a new web application for a financial institution. As part of the DevSecOps process, the team implements automated security testing tools that scan the code for common vulnerabilities such as cross-site scripting (XSS) and SQL injection. These tests are run every time new code is committed to the repository, ensuring that any security issues are caught early in the development cycle.
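As an illustration, a commit-triggered scan stage might look like the following sketch. It uses GitHub Actions syntax and the open-source Trivy scanner; the workflow name and scan thresholds are assumptions for the example, not a prescribed setup.

name: security-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Scan the repository for vulnerable dependencies and misconfigurations,
      # failing the job (and the commit check) on serious findings
      - name: Run Trivy filesystem scan
        run: |
          curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
          trivy fs --severity HIGH,CRITICAL --exit-code 1 .

Failing the job on HIGH or CRITICAL findings makes the scan a gate rather than just a report.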

In addition, the team conducts security-focused code reviews, with a particular emphasis on authentication and authorization mechanisms to protect against unauthorized access to the system. They also implement security controls such as encryption and access controls to safeguard user data and prevent data breaches.

Throughout the development process, the team works closely with the security team to ensure that the application is designed and built with security in mind. By following a DevSecOps approach, the team is able to deliver a secure and resilient application that meets the needs of the financial institution and its customers, while reducing the risk of security breaches and other vulnerabilities.

Secure Kubernetes Deployment Configuration: One of the key practices in securing Kubernetes is to ensure that the deployment configurations are secure. You should apply best practices in configuring Kubernetes resources like namespaces, services, and network policies. For example, you can use Kubernetes network policies to restrict network traffic between different services in your cluster, reducing the potential attack surface.

Deny all ingress traffic: This policy will block all incoming traffic to a service.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Allow traffic only from specific sources: This policy will allow incoming traffic only from a specific set of sources.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-sources
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: allowed-source-app
    ports:
    - protocol: TCP
      port: 80

Restrict egress traffic to specific destinations: NetworkPolicy egress rules are allow rules; once a pod is selected by a policy with the Egress policy type, everything not explicitly allowed is denied. This policy therefore allows outgoing traffic from pods labeled app: my-app only to the 10.0.0.0/24 network on TCP port 80, and blocks all other egress from those pods.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 80

Allow traffic only to specific destinations and ports: This policy allows outgoing traffic from pods labeled app: my-app only to pods labeled app: allowed-destination-app, and only on TCP port 80.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-egress
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: allowed-destination-app
    ports:
    - protocol: TCP
      port: 80

Note that these policies are just examples, and may need to be adapted to your specific use case. Additionally, it’s important to thoroughly test any network policies before implementing them in a production environment.
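To try one of these policies, save it to a file, apply it, and inspect the result (the file name here is hypothetical):

kubectl apply -f deny-all-ingress.yaml
kubectl get networkpolicy
kubectl describe networkpolicy deny-all-ingress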

Use Kubernetes Secrets: Kubernetes Secrets are the native way to store and manage sensitive information, like passwords or tokens, in your Kubernetes cluster. Instead of embedding these values in plain text in pod specs or images, you store them as Secrets and control access to them with RBAC. Note that Secret values are only base64-encoded by default; enable encryption at rest on the API server if you need them encrypted in etcd. This makes it more difficult for attackers to access sensitive data in the event of a breach.
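For example, you can create a secret directly from literal values (the secret name and keys below are illustrative); a declarative, manifest-based example appears later in this article:

kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password='S3cure!pass'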

Implement Kubernetes RBAC: Kubernetes Role-Based Access Control (RBAC) lets you control access to Kubernetes resources at a granular level. By implementing RBAC, you can limit access to your cluster to only the users and services that need it, reducing the risk of unauthorized access.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-service-account

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions"]
    resources: ["deployments"]
    verbs: ["get", "watch", "list"]

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role-binding
subjects:
  - kind: ServiceAccount
    name: example-service-account
    namespace: default
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io

In this manifest, we first define a service account named “example-service-account”. We then define a role named “example-role” that grants read-only access (get, watch, list) to pods and deployments. Finally, we define a role binding named “example-role-binding” that binds the service account (in the default namespace) to the role. Any pod that runs under this service account will have exactly the permissions specified in the role.
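You can verify the binding with kubectl auth can-i, impersonating the service account (this sketch assumes it lives in the default namespace):

kubectl auth can-i list pods --as=system:serviceaccount:default:example-service-account
# yes
kubectl auth can-i delete pods --as=system:serviceaccount:default:example-service-account
# no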

Regularly Update and Patch Kubernetes: Regularly updating and patching Kubernetes is a critical aspect of DevSecOps. Updates and patches often include important security fixes and vulnerability patches. Make sure to follow the Kubernetes security updates and patch your cluster regularly.
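A quick way to see what you are running is to compare the client, server, and per-node versions (kubeadm upgrade plan applies only to kubeadm-managed clusters):

kubectl version
kubectl get nodes      # the VERSION column shows each node's kubelet version
kubeadm upgrade plan   # on a kubeadm-managed control-plane node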

Use Kubernetes Admission Controllers: Admission controllers intercept requests to the Kubernetes API server before objects are persisted, allowing you to define policies that must be enforced before any request is processed. These policies can be used, for example, to ensure that all containers running in the cluster use only approved images and other resources.

Integrate Security into the CI/CD Pipeline: Integrating security into the CI/CD pipeline is a key aspect of DevSecOps. You can use tools such as container image scanners, static analysis tools, and dependency checkers as automated stages in the pipeline, so that insecure changes are caught before they reach production.

DevSecOps practices can be applied to Kubernetes, a popular container orchestration platform, to ensure the security of the applications running on it. Here are some best practices for DevSecOps with Kubernetes, along with examples:

Secure Kubernetes cluster setup: The first step in securing Kubernetes is to ensure that the cluster is set up securely. This involves applying security best practices such as enabling role-based access control (RBAC) and using secure network policies.
Example: Use Kubernetes’ built-in RBAC features to grant permissions only to users who need them. For example, a developer should not have the same level of access as an administrator. Limiting the permissions of each user can help reduce the risk of a security breach.

Continuous security testing: Just as with any software development process, continuous security testing is essential for Kubernetes applications. This includes running automated security scans to detect vulnerabilities in Kubernetes resources, such as deployments and pods.
Example: Use security testing tools like Aqua Security or Sysdig to scan Kubernetes resources for security vulnerabilities, such as misconfigurations or exposed credentials. These tools can help identify vulnerabilities early in the development process, allowing teams to fix them before deployment.

Container image security: The container images used to run Kubernetes applications should be secure and free from vulnerabilities. This involves scanning container images for security vulnerabilities before deployment.
Example: Use container image scanning tools like Clair or Trivy to scan container images for known vulnerabilities. These tools can be integrated into the Kubernetes pipeline to scan images automatically before deployment.
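An image-scanning gate in a pipeline might look like this sketch (the image name and severity thresholds are assumptions):

docker build -t myapp:latest .
# Fail the build if HIGH or CRITICAL vulnerabilities are found in the image
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest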

Network security: Kubernetes network security involves securing the communication between Kubernetes resources and ensuring that they are only accessible by authorized users and services.
Example: Use Kubernetes network policies to define and enforce rules around how resources can communicate with each other. For example, you can create a policy that only allows traffic between specific pods or namespaces.

Secure secrets management: Kubernetes allows you to store and manage secrets such as passwords and API keys. It’s important to ensure that these secrets are encrypted and secured.
Example: Use Kubernetes secrets to store sensitive data, such as database credentials, and encrypt them at rest. Use RBAC to ensure that only authorized users and services can access these secrets.

apiVersion: v1
kind: Secret
metadata:
  name: mysecrets
type: Opaque
data:
  username: YWRtaW4=   # base64 of "admin" (example value only)
  password: czNjcjN0   # base64 of "s3cr3t" (example value only)

In this example, we create a secret called “mysecrets” with two key-value pairs: “username” and “password”. The values in a Secret manifest must be base64-encoded (for example, echo -n 'admin' | base64), which keeps them out of casual view but is encoding, not encryption.

You can create this manifest file and apply it using the kubectl command line tool. Here is an example of how to create the secret from the manifest file:

kubectl apply -f mysecrets.yaml

Once the secret is created, you can use it in your application by referencing it in your deployment or pod configuration file. For example, if you wanted to use the “username” and “password” values in your application’s environment variables, you could include the following lines in your deployment or pod manifest:

spec:
  containers:
  - name: myapp
    image: myapp:latest
    env:
    - name: MY_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecrets
          key: username
    - name: MY_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecrets
          key: password

This will inject the values from the “mysecrets” secret into your application’s environment variables at runtime, allowing you to securely access sensitive information without exposing it in your code or configuration files.

By following these DevSecOps best practices, teams can ensure that their Kubernetes applications are secure and resilient, and can reduce the risk of security breaches and other vulnerabilities.

Red Hat as usual has a great overview on the subject here – https://www.redhat.com/en/topics/devops/what-is-devsecops

Here are some other very useful links:
OWASP: https://owasp.org/
NIST: https://www.nist.gov/
DevSecOps.org: https://www.devsecops.org/
SANS Institute: https://www.sans.org/
Jenkins: https://www.jenkins.io/

How to Create and Use a Dockerized DHCP Server for Your Applications and Networks

Docker is a powerful platform for containerizing and deploying applications, and its networking capabilities allow for the creation of isolated test networks and the management of containerized applications.

In some cases, however, containerized applications require a DHCP server to lease IP addresses to the containers running on the same network. By running a Dockerized DHCP server, you can simplify the deployment and management of your containerized applications, and create virtual networks for practicing networking concepts and configurations. In this article, we will walk through the steps for creating and using a Dockerized DHCP server for your applications and networks.

We will cover how to create a bridge network, run the DHCP server container, and configure your host and other containers to use the DHCP server to obtain IP addresses.

Choose a base image: You will need a base image for your DHCP server. In this example, we will use the Alpine Linux base image, which is a lightweight distribution of Linux that is popular for Docker images.

Install DHCP server software: Next, you will need to install the DHCP server software on your image. In this example, we will use the ISC DHCP server software, which is a widely used and well-supported DHCP server.

Configure the DHCP server: Once you have installed the DHCP server software, you will need to configure it to lease IPs. You will need to specify the range of IP addresses that can be leased, the subnet mask, and other network settings.

Create a Docker network – I have called mine my-network:

docker network create my-network

Create the dhcpd.conf file in the build directory (the Dockerfile below copies it into the image).

##########dhcpd.conf###########

default-lease-time 259200;
max-lease-time 777600;
option domain-name "your-domain.com";

subnet 192.168.2.0 netmask 255.255.255.0 {
  range 192.168.2.2 192.168.2.250;
  option broadcast-address 192.168.2.255;
  option routers 192.168.2.1;
  option domain-name-servers 192.168.1.1;
}

Create a Dockerfile: With the base image chosen and the DHCP server software selected and configured, you can now create a Dockerfile that builds the image. Here is an example Dockerfile:

FROM alpine:latest

RUN apk add --no-cache dhcp

# dhcpd will not start without an existing leases file
RUN mkdir -p /var/lib/dhcp && touch /var/lib/dhcp/dhcpd.leases

COPY dhcpd.conf /etc/dhcpd.conf

EXPOSE 67/udp

# -f/-d run dhcpd in the foreground with logging; -cf points it at our config file
ENTRYPOINT ["dhcpd", "-f", "-d", "--no-pid", "-cf", "/etc/dhcpd.conf"]

In this Dockerfile, we start with the latest Alpine Linux image and install the ISC DHCP server software using the apk package manager. We create the leases file that dhcpd requires at startup, then copy a pre-configured dhcpd.conf file into the image; this file contains the configuration settings for the DHCP server. We expose port 67/udp, the port DHCP servers listen on for client requests. Finally, we set the ENTRYPOINT to run dhcpd in the foreground (-f) with debug logging (-d), without a PID file, and explicitly pointing at our configuration file (-cf /etc/dhcpd.conf).

Build the image: Once you have created the Dockerfile, you can build the image using the docker build command:

docker build -t dhcp-server .

Run the container: With the image built, you can now run a container from the image using the docker run command:

docker run -d --name dhcp-server --net=host dhcp-server

In this command, we run the container in detached mode (-d), give it a name (–name dhcp-server), and use the host network (–net=host) so that the DHCP server can lease IPs to devices on the same network as the host. We specify the name of the image we built in the previous step (dhcp-server) as the container to run.

Your DHCP server container should now be running and leasing IPs to devices on your network. You can view the logs of the container using the docker logs command:

docker logs dhcp-server

And you can stop and remove the container using the docker stop and docker rm commands:

docker stop dhcp-server
docker rm dhcp-server

There are several use cases for having a Docker image running as a DHCP server:

Development and testing: Developers and testers can use a Dockerized DHCP server to create isolated test networks for their applications or services. This allows them to test network configurations and connectivity without interfering with the production network.

Containerized applications: Some containerized applications require a DHCP server to lease IP addresses to the containers running on the same network. By running a Dockerized DHCP server, you can simplify the deployment and management of your containerized applications.

Education and training: DHCP servers are commonly used in networking courses and training programs. By running a Dockerized DHCP server, educators and students can create virtual networks for practicing networking concepts and configurations.

To get hosts to connect to the network served by the Dockerized DHCP server, you will need to configure the hosts to use DHCP to obtain an IP address. This can usually be done by configuring the network interface of the host to use DHCP. The exact steps to do this will depend on the operating system of the host.

For example, on a Linux host that uses ifupdown, you can configure the network interface to use DHCP by editing the /etc/network/interfaces file and adding the following lines:

auto eth0
iface eth0 inet dhcp
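On newer Ubuntu releases that use netplan instead of /etc/network/interfaces, a minimal sketch of the equivalent configuration (the file name and interface name are assumptions) would be:

# /etc/netplan/01-dhcp.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true

Apply it with sudo netplan apply.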

On a Windows host, you can configure the network interface to use DHCP by going to the Control Panel, selecting Network and Sharing Center, selecting Change adapter settings, right-clicking on the network adapter, selecting Properties, selecting Internet Protocol Version 4 (TCP/IPv4), and selecting Obtain an IP address automatically.

Once the host is configured to use DHCP, it will automatically obtain an IP address from the Dockerized DHCP server when it is connected to the network.

You might rightly ask how these other containers or hosts get an IP address from the above DHCP server container.

Well, below is the answer to your question.

You need to create a Docker network and connect your containers to it before they can communicate. When you create a Docker network, you can specify that it is a bridge network, which is the default network type for Docker. Containers connected to a bridge network can communicate with each other using their IP addresses.

To create a bridge network, you can use the docker network create command. Here’s an example:

docker network create my-network

This command creates a bridge network named my-network. You can then start your DHCP server container on this network by using the --network option when running the container:

docker run -d --name dhcp-server --network my-network dhcp-server

This command starts the DHCP server container in detached mode (-d), names the container dhcp-server, and connects it to the my-network network.

Once your DHCP server container is running on the my-network network, you can start other containers on the same network by using the --network option:

docker run -d --name my-container --network my-network my-image

This command starts a container named my-container from the my-image image, and connects it to the my-network network.

One caveat: on a bridge network, Docker's built-in IPAM assigns container IP addresses, so a container only leases an address from the DHCP server if it runs a DHCP client itself. The DHCP server container is therefore most useful for serving hosts and VMs on the host network, as in the --net=host example above. Either way, you can view the IP address of a container by using the docker inspect command:

docker inspect my-container

In the output, look for the IPAddress field under the NetworkSettings section. This will show you the IP address that was assigned to the container by the DHCP server.
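If you only want the address, a Go-template filter saves you scrolling through the full JSON output:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' my-container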

Ubuntu has a good guide on DHCP – https://ubuntu.com/server/docs/network-dhcp

Create a Kubernetes Multi-Node Cluster with Kind

Did you know you could create a Kubernetes multi-node cluster with kind without much bother?

Clustering knowledge is best reinforced by hands-on practice: any serious engineer should build full-scale labs that reflect well-known architectures. Learning Kubernetes works best when you can build a replica of a production-grade cluster. Minikube has always been helpful, but it does not give you the feel of a real multi-node architecture. This is where kind comes in: it brings real hands-on clustering without dedicating the hardware and resources you would need to build a Kubernetes cluster from virtual machines.

Install Prerequisites

sudo apt-get install -y curl

Install Docker


sudo apt-get update

	
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

Give user permissions to query Docker

sudo usermod -aG docker $USER
Log out and back in (or restart the host) for the group membership to take effect.

Install Kind on Linux

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

Install Kubectl

We’ll need kubectl to work with the Kubernetes cluster, in case it’s not already installed. For this, we can use the commands below:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

Set Permissions

chmod +x ./kubectl

Move kubectl to local

sudo mv ./kubectl /usr/local/bin/kubectl
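Verify the installation before continuing:

kubectl version --client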

Create Multi-Node Clusters – 1 Master and 2 Worker Nodes

Create Cluster Manifest File – cluster-config.yaml


# A sample multi-node cluster config file
# A three node (two workers, one controller) cluster config
# To add more worker nodes, add another role: worker to the list
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: azvmms-node
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"    
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker


root@cluster-vm:/home/cluster# kind create cluster --name=azvmms-node --config=single-cluster.yaml
Creating cluster "azvmms-node" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-azvmms-node"
You can now use your cluster with:

kubectl cluster-info --context kind-azvmms-node

Creates the Control Plane

- role: control-plane

Creates the 2 Worker Nodes

- role: worker
- role: worker

Create Cluster

kind create cluster --config=cluster-config.yaml

Check Pods

$ kubectl get pods -A -o wide
NAMESPACE            NAME                                            READY   STATUS    RESTARTS   AGE   IP           NODE                    NOMINATED NODE   READINESS GATES
kube-system          coredns-558bd4d5db-2gszr                        1/1     Running   0          91m   10.244.0.3   spacers-control-plane              
kube-system          coredns-558bd4d5db-46rkp                        1/1     Running   0          91m   10.244.0.2   spacers-control-plane              
kube-system          etcd-spacers-control-plane                      1/1     Running   0          92m   172.18.0.4   spacers-control-plane              
kube-system          kindnet-9jmwv                                   1/1     Running   0          91m   172.18.0.2   spacers-worker2                    
kube-system          kindnet-c2jrx                                   1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
kube-system          kindnet-hlhmx                                   1/1     Running   0          91m   172.18.0.3   spacers-worker                     
kube-system          kube-apiserver-spacers-control-plane            1/1     Running   0          92m   172.18.0.4   spacers-control-plane              
kube-system          kube-controller-manager-spacers-control-plane   1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
kube-system          kube-proxy-97q94                                1/1     Running   0          91m   172.18.0.3   spacers-worker                     
kube-system          kube-proxy-t4ltb                                1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
kube-system          kube-proxy-xrd5l                                1/1     Running   0          91m   172.18.0.2   spacers-worker2                    
kube-system          kube-scheduler-spacers-control-plane            1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
local-path-storage   local-path-provisioner-547f784dff-5dgp6         1/1     Running   0          91m   10.244.0.4   spacers-control-plane              
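You can also confirm that the control-plane node and both worker nodes registered and are Ready:

kubectl get nodes -o wide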

Deploy a Sample App

kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml

Test Access

https://localhost:30779/
kubectl create deployment my-nginx --image=nginx --replicas=2 --port=8080

Delete Cluster
kind delete clusters <cluster-name>

Connecting to GitHub Step by Step

Connect Your Environment to GitHub
It is that time of the day to overcome the dread of Git. So can we please ask the all-important question right now?

What is Git?
According to Wikipedia;

Git is software for tracking changes in any set of files, usually used for coordinating work among programmers collaboratively developing source code during software development. Its goals include speed, data integrity, and support for distributed, non-linear workflows.

A successful connection to GitHub does require an established trust between the client {your working host} and the server {GitHub}.

An SSH key relies upon the use of two related keys, a public key and a private key, that together create a key pair that is used as the secure access credential. The private key is secret, known only to the user, and should be encrypted and stored safely. The public key can be shared freely with any SSH server to which the user wishes to connect.

Public Key authentication – what and why?

The motivation for using public key authentication over simple passwords is security. Public key authentication provides cryptographic strength that even extremely long passwords cannot offer. With SSH, public key authentication improves security considerably, as it frees users from remembering complicated passwords (or worse yet, writing them down).

In addition to security, public key authentication also offers usability benefits: it allows users to implement single sign-on across the SSH servers they connect to. Public key authentication also enables automated, passwordless logins, a key enabler for the countless secure automation processes that run within enterprise networks globally.

Checking for existing SSH keys
Before you generate an SSH key, you can check to see if you have any existing SSH keys.

  • Open Git Bash.
  • Enter ls -al ~/.ssh to see if existing SSH keys are present.
  • ls -al ~/.ssh
    total 27
    drwxr-xr-x 1 Cloud Architect 197121    0 May 16 09:39 ./
    drwxr-xr-x 1 Cloud Architect 197121    0 May 10 14:40 ../
    -rw-r--r-- 1 Cloud Architect 197121  419 May 16 09:39 githubaccount
    -rw-r--r-- 1 Cloud Architect 197121  107 May 16 09:39 githubaccount.pub
    -rw-r--r-- 1 Cloud Architect 197121 2675 Nov 11  2021 id_rsa
    -rw-r--r-- 1 Cloud Architect 197121  585 Nov 11  2021 id_rsa.pub
    -rw-r--r-- 1 Cloud Architect 197121 1422 Mar 18 16:50 known_hosts
    

    The known_hosts file contains details of your remote hosts and their public keys.

    Generating a new SSH key

    $ ssh-keygen -t ed25519 -C "your_email@domain.com"

    This creates a new SSH key, using the provided email as a label.

    > Generating public/private algorithm key pair.
    

    When you’re prompted to “Enter a file in which to save the key,” you can press Enter to accept the default file location, or type a custom path as we do below.

    > Enter a file in which to save the key (/c/Users/you/.ssh/id_algorithm):[Press enter]
    
    Generating public/private ed25519 key pair.
    Enter file in which to save the key (/c/userPath/.ssh/id_ed25519): /c/userPath/.ssh/githubaccount
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /c/userPath/.ssh/githubaccount
    Your public key has been saved in /c/userPath/.ssh/githubaccount.pub
    The key fingerprint is:
    SHA256:CZaWiynl94Vk6dkqXXIBCnewmbgY6vhGnqeq8PrRmulQ git.user@domain.com
    The key's randomart image is:
    +--[ED25519 256]--+
    |    .o..o+. o.   |
    |. o .oo.+o o..   |
    |oo +. .oo....o.  |
    |=o..  .  + .o..  |
    |o.=E   .S..o..   |
    | *.     . + o    |
    |o.+        o     |
    |+*               |
    |@*o              |
    +----[SHA256]-----+
    
    Cloud Architect@DESKTOP-ATCRUJV MINGW64 /c/Workdir/terraform
    $
    
    

    At the prompt, type a secure passphrase. For more information, see “Working with SSH key passphrases.”

    > Enter passphrase (empty for no passphrase): [Type a passphrase]
    > Enter same passphrase again: [Type passphrase again]
    

    Copy the SSH public key to your clipboard.

    If your SSH public key file has a different name than the example code, modify the filename to match your current setup. When copying your key, don’t add any newlines or whitespace.

    Adding your SSH key to the ssh-agent
    Before adding a new SSH key to the ssh-agent to manage your keys, you should have checked for existing SSH keys and generated a new SSH key.
    If you have GitHub Desktop installed, you can use it to clone repositories and not deal with SSH keys.

    Ensure the ssh-agent is running. You can use the “Auto-launching the ssh-agent” instructions in “Working with SSH key passphrases”, or start it manually:

    # start the ssh-agent in the background
    $ eval "$(ssh-agent -s)"
    Agent pid 1304
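
    With the agent running, add your private key to the agent (assuming the key file generated earlier):

    $ ssh-add ~/.ssh/githubaccount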
    
    $ clip < ~/.ssh/githubaccount.pub
    # Copies the contents of the githubaccount.pub file to your clipboard
    

    If GitHub reports “There are no SSH keys associated with your account”, add one: in the “Access” section of the sidebar, click SSH and GPG keys, then click New SSH key or Add SSH key, and paste in the public key you just copied.

    add public ssh key to github

    added new public ssh key to github successfully

    create new repository on github

    create new public repository on github

    created a new public repository on github

    Testing your SSH connection
    After you've set up your SSH key and added it to your account on GitHub.com, you can test your connection.

    Enter the following:

    $ ssh -T git@github.com
    # Attempts to ssh to GitHub
    

    You may see a warning like this:

    > The authenticity of host 'github.com (IP ADDRESS)' can't be established.
    > RSA key fingerprint is SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
    > Are you sure you want to continue connecting (yes/no)?
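
    If you answer yes and the key is set up correctly, GitHub responds with a greeting along these lines (with your own username):

    > Hi username! You've successfully authenticated, but GitHub does not provide shell access.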

    Troubleshooting

    $ ssh -vT git@github.com:remote-git-repo/azterraform.git
    OpenSSH_8.8p1, OpenSSL 1.1.1l  24 Aug 2021
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: resolve_canonicalize: hostname github.com:remote-git-repo/azterraform.git is an unrecognised address
    ssh: Could not resolve hostname github.com:remote-git-repo/azterraform.git: Name or service not known
    

    Solutions are always in the problems. The debug output reveals two things: ssh reads its configuration from /c/userPath/.ssh/config and /etc/ssh/ssh_config (so that is where an IdentityFile entry belongs), and the hostname error occurs because a repository path (github.com:remote-git-repo/azterraform.git) was passed to ssh. ssh expects a bare host, so test connectivity with ssh -T git@github.com and let git handle the repository path.

    $ ssh -vT git@github.com:remote-git-repo/azterraform.git
    OpenSSH_8.8p1, OpenSSL 1.1.1l  24 Aug 2021
    debug1: Reading configuration data /c/userPath/.ssh/config
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: resolve_canonicalize: hostname github.com:remote-git-repo/azterraform.git is an unrecognised address
    ssh: Could not resolve hostname github.com:remote-git-repo/azterraform.git: Name or service not known
    

    Add the following line to the /etc/ssh/ssh_config file. An IdentityFile entry tells ssh where to find your private key:

    # Host *
    #   StrictHostKeyChecking ask
    #   IdentityFile ~/.ssh/id_rsa
    #   IdentityFile ~/.ssh/id_dsa
    #   IdentityFile ~/.ssh/id_ecdsa
    #   IdentityFile ~/.ssh/id_ed25519
        IdentityFile ~/.ssh/githubaccount
    #   Port 22
    #   Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
    #   MACs hmac-md5,hmac-sha1,umac-64@openssh.com
    

    Alternatively, create a config file under ~/.ssh/ and add the host entry there.
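
    A minimal sketch of that ~/.ssh/config file, assuming the key file generated earlier in this article:

    Host github.com
        HostName github.com
        User git
        IdentityFile ~/.ssh/githubaccount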

    Setup Your Local Working Environment

    Clone your remote Repository - use git clone

    $ git clone git@github.com:remote-git-repo/azterraform.git
    Cloning into 'azterraform'...
    remote: Enumerating objects: 6, done.
    remote: Counting objects: 100% (6/6), done.
    remote: Compressing objects: 100% (2/2), done.
    Receiving objects: 100% (6/6), done.
    remote: Total 6 (delta 0), reused 0 (delta 0), pack-reused 0
    Cloud Architect@DESKTOP-ATCRUJV MINGW64 /c/Workdir/terraform (master)
    $
    

    Now begin to perform your functions

    touch filename.tf
    

    Add all items in your working directory using git add .

    $ git add .
    

    Commit your changes to your repo.

    $ git commit -m "Version 1 - Files Created on Local PC"
    [main 2f462f4] Version 1 - Files Created on Local PC
     1 file changed, 0 insertions(+), 0 deletions(-)
     create mode 100644 main.tf
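
    Finally, push your commit to the remote repository (assuming your default branch is main):

    $ git push origin main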
    

    Remove Git from Folder (this permanently deletes the local repository history)

    rm -rf .git
    
Docker Communication Between Containers

    If you want to ping or otherwise reach a running Docker container from another container by simply using the container name rather than an IP address, DNS must work well. Docker natively provides DNS so that containers on the same network can communicate using their DNS names, which matters because IP addresses change as containers come and go.

    Basics of Docker Networking:

    Docker Network Defaults:
    Each container connected to a private virtual network “bridge”
    Each virtual network routes through NAT firewall on host IP
    All containers on a virtual network can talk to each other without -p

    Docker Network Best Practices:
    Create a new virtual network for each app:

  • network “web_app_network” for mysql and php or apache containers
  • network “web_api_network” for mongo and nodejs containers

    Step 1:
    Let us begin by creating two containers; I am using the NGINX image.

    You can download the nginx image to the local cache. Verify your local images:

    $>docker image ls 
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    httpd               latest              417af7dc28bc        7 days ago          138MB
    nginx               latest              7e4d58f0e5f3        12 days ago         133MB
    mysql               latest              e1d7dc9731da        13 days ago         544MB
    alpine              latest              a24bb4013296        3 months ago        5.57MB
    

    Get the image by typing “docker image pull nginx”

    docker container run -d --name container1  -p 8080:80 nginx
    docker container run -d --name container2  -p 8088:80 nginx
    

    Let us verify

    $docker container ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                  NAMES
    a1147ca12e97        nginx               "/docker-entrypoint.…"   1 hours ago          Up 10 minutes       0.0.0.0:8080->80/tcp   container1
    0e364de8f313        nginx               "/docker-entrypoint.…"   1 hours ago          Up 10 minutes       0.0.0.0:8088->80/tcp   container2
    


    Important note: it is of utmost importance to explicitly specify a name with --name for your containers, so that you have predictable DNS names to rely on rather than the auto-generated names that Docker assigns.

    Step 2:
    Create a new network:

    docker network create nginx-network

    Verify that this network is listed among the Docker networks:

    C:\>docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    25d4ec1592eb        bridge              bridge              local
    ba0017e88f28        host                host                local
    06658aee8f0c        nginx-network       bridge              local
    70be58caf984        none                null                local
    33668dd3b17f        webservices         bridge              local
    

    Step 3:
    Connect your containers to the network:

    docker network connect nginx-network container1
    docker network connect nginx-network container2
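
    Alternatively, you can attach containers to the network at creation time with the --network flag instead of connecting them afterwards:

    docker container run -d --name container1 --network nginx-network -p 8080:80 nginx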
    

    Step 4:
    Verify if your containers are part of the newly created network (nginx-network):

    docker network inspect nginx-network
    
      "ConfigOnly": false,
            "Containers": {
                "0e364de8f3134e242e513a6cf3da4b69bb38fb7ef17213a309a7bda27b423b3a": {
                    "Name": "container1",
                    "EndpointID": "fdf973b2840adea185bec76c8684bb1c404a21ccb9947c16c58119b350aebb36",
                    "MacAddress": "02:42:ac:12:00:03",
                    "IPv4Address": "172.18.0.3/16",
                    "IPv6Address": ""
                },
                "a1147ca12e97cb054af40ab67255d9dd5817d7197695e3756ee5fd614195de77": {
                    "Name": "container2",
                    "EndpointID": "6edb537acdc3b1ec6ee233993d9e6d28cd8a62055600300d0e77e48c94ee9a88",
                    "MacAddress": "02:42:ac:12:00:02",
                    "IPv4Address": "172.18.0.2/16",
                    "IPv6Address": ""
                }
    

    Install ping in the containers, as not all images come prepackaged with the ping utility.

    Open a bash shell in container1 and install ping there. Two commands are needed; you can run them line by line (instruction 1) or as a single line inside the container (instruction 2):

    Instruction 1:

     
    apt-get update
    apt-get install -y iputils-ping
    

    Instruction 2:

    $docker container exec -it container1 bash
    root@a1147ca12e97:/#
    root@a1147ca12e97:/# apt-get update && apt-get install -y iputils-ping
    

    Repeat above step for container2

    Final Step:
    Finally, test the connection between container1 and container2.

    $docker container exec -it container1 ping container2
    PING container2 (172.18.0.3) 56(84) bytes of data.
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=1 ttl=64 time=0.050 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=2 ttl=64 time=0.043 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=3 ttl=64 time=0.142 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=4 ttl=64 time=0.145 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=5 ttl=64 time=0.142 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=6 ttl=64 time=0.066 ms
    64 bytes from container2.nginx-network (172.18.0.3): icmp_seq=7 ttl=64 time=0.047 ms
    ^C
    --- container2 ping statistics ---
    7 packets transmitted, 7 received, 0% packet loss, time 129ms
    rtt min/avg/max/mdev = 0.043/0.090/0.145/0.047 ms
    

    Hope you have enjoyed this article. Look out for more on this website, and bookmark it by pressing CTRL + D.

    Follow this link to learn more about the amazing nginx docker image.
