Expert Network Consultant – Networking | Cloud | DevOps | IaC

Understanding DevSecOps
https://www.expertnetworkconsultant.com/security/understanding-devsecops/ – Tue, 11 Apr 2023

DevSecOps is a software development methodology that emphasizes the integration of security practices into the software development process, with the goal of delivering secure and resilient software products to users.

In the traditional software development process, security is often an afterthought and addressed only during the later stages of development or in a separate security testing phase. This approach can lead to security vulnerabilities that are expensive and time-consuming to fix, and can also put users’ data and systems at risk.

DevSecOps, on the other hand, integrates security practices into the development process from the very beginning, making security an integral part of the development pipeline. This involves automating security testing, using security-focused code reviews, and implementing security controls and best practices throughout the development process.

Here’s an example of how DevSecOps might work in practice:

Suppose a team of developers is building a new web application for a financial institution. As part of the DevSecOps process, the team implements automated security testing tools that scan the code for common vulnerabilities such as cross-site scripting (XSS) and SQL injection. These tests are run every time new code is committed to the repository, ensuring that any security issues are caught early in the development cycle.
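Tests like these are typically wired into the pipeline itself. As an illustrative sketch (the workflow name, trigger, and tool choice are assumptions, not details from this article), a GitHub Actions workflow could run the Trivy scanner on every push and fail the build on serious findings:

```yaml
# Hypothetical CI workflow: scan the repository for known vulnerabilities on each push.
name: security-scan
on: [push]
jobs:
  trivy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Trivy filesystem scan
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: fs
          severity: CRITICAL,HIGH
          exit-code: '1'   # non-zero exit fails the build when vulnerabilities are found
```

Because the scan runs on every commit, vulnerable dependencies are flagged before they ever reach a deployable artifact.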

In addition, the team conducts security-focused code reviews, with a particular emphasis on authentication and authorization mechanisms to protect against unauthorized access to the system. They also implement security controls such as encryption and access controls to safeguard user data and prevent data breaches.

Throughout the development process, the team works closely with the security team to ensure that the application is designed and built with security in mind. By following a DevSecOps approach, the team is able to deliver a secure and resilient application that meets the needs of the financial institution and its customers, while reducing the risk of security breaches and other vulnerabilities.

Secure Kubernetes Deployment Configuration: One of the key practices in securing Kubernetes is to ensure that the deployment configurations are secure. You should apply best practices in configuring Kubernetes resources like namespaces, services, and network policies. For example, you can use Kubernetes network policies to restrict network traffic between different services in your cluster, reducing the potential attack surface.

Deny all ingress traffic: This policy will block all incoming traffic to a service.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Allow traffic only from specific sources: This policy will allow incoming traffic only from a specific set of sources.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-sources
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: allowed-source-app
    ports:
    - protocol: TCP
      port: 80

Deny egress traffic to specific destinations: This policy will block outgoing traffic from a service to a specific set of destinations.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-specific-egress
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 80

Allow traffic only to specific ports: This policy will allow outgoing traffic only to specific ports.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-egress
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: allowed-destination-app
    ports:
    - protocol: TCP
      port: 80

Note that these policies are just examples, and may need to be adapted to your specific use case. Additionally, it’s important to thoroughly test any network policies before implementing them in a production environment.

Use Kubernetes Secrets: Kubernetes Secrets are the native way to store and manage sensitive information, like passwords or tokens, in your Kubernetes cluster. Instead of embedding these values in plain text in your manifests or images, you can store them as Secrets and restrict who can read them; note that Secret values are base64-encoded, not encrypted, by default, so consider enabling encryption at rest for etcd. This makes it more difficult for attackers to access sensitive data in the event of a breach.

Implement Kubernetes RBAC: Kubernetes Role-Based Access Control (RBAC) lets you control access to Kubernetes resources at a granular level. By implementing RBAC, you can limit access to your cluster to only the users and services that need it, reducing the risk of unauthorized access.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-service-account

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "watch", "list"]

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role-binding
subjects:
  - kind: ServiceAccount
    name: example-service-account
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io

In this manifest, we first define a service account named “example-service-account”. We then define a role named “example-role” that specifies the permissions to access pods and deployments. Finally, we define a role binding named “example-role-binding” that binds the service account to the role. This means that any pod that is associated with the service account will have the permissions specified in the role.

Regularly Update and Patch Kubernetes: Regularly updating and patching Kubernetes is a critical aspect of DevSecOps. Updates and patches often include important security fixes and vulnerability patches. Make sure to follow the Kubernetes security updates and patch your cluster regularly.

Use Kubernetes Admission Controllers: Kubernetes admission controllers are a security feature that lets you enforce policies on requests to the Kubernetes API before objects are persisted. These policies can be used to ensure that all containers running in the cluster use only approved images and other resources.
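One concrete, built-in example is the Pod Security Admission controller (stable since Kubernetes v1.25), which enforces Pod Security Standards on a namespace via labels. A minimal sketch, with the namespace name assumed:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production   # assumed namespace name
  labels:
    # Reject any pod in this namespace that does not meet the "restricted" standard.
    pod-security.kubernetes.io/enforce: restricted
```

For richer policies, such as allow-listing container registries, policy engines like OPA Gatekeeper or Kyverno are commonly deployed as admission webhooks.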

Integrate Security into the CI/CD Pipeline: Integrating security into the CI/CD pipeline is a key aspect of DevSecOps. You can use tools such as container image scanners and static analysis tools in the pipeline so that builds fail early when known vulnerabilities are detected.

DevSecOps practices can be applied to Kubernetes, a popular container orchestration platform, to ensure the security of the applications running on it. Here are some best practices for DevSecOps with Kubernetes, along with examples:

Secure Kubernetes cluster setup: The first step in securing Kubernetes is to ensure that the cluster is set up securely. This involves applying security best practices such as enabling role-based access control (RBAC) and using secure network policies.
Example: Use Kubernetes’ built-in RBAC features to grant permissions only to users who need them. For example, a developer should not have the same level of access as an administrator. Limiting the permissions of each user can help reduce the risk of a security breach.

Continuous security testing: Just as with any software development process, continuous security testing is essential for Kubernetes applications. This includes running automated security scans to detect vulnerabilities in Kubernetes resources, such as deployments and pods.
Example: Use security testing tools like Aqua Security or Sysdig to scan Kubernetes resources for security vulnerabilities, such as misconfigurations or exposed credentials. These tools can help identify vulnerabilities early in the development process, allowing teams to fix them before deployment.

Container image security: The container images used to run Kubernetes applications should be secure and free from vulnerabilities. This involves scanning container images for security vulnerabilities before deployment.
Example: Use container image scanning tools like Clair or Trivy to scan container images for known vulnerabilities. These tools can be integrated into the Kubernetes pipeline to scan images automatically before deployment.

Network security: Kubernetes network security involves securing the communication between Kubernetes resources and ensuring that they are only accessible by authorized users and services.
Example: Use Kubernetes network policies to define and enforce rules around how resources can communicate with each other. For example, you can create a policy that only allows traffic between specific pods or namespaces.

Secure secrets management: Kubernetes allows you to store and manage secrets such as passwords and API keys. It’s important to ensure that these secrets are encrypted and secured.
Example: Use Kubernetes secrets to store sensitive data, such as database credentials, and encrypt them at rest. Use RBAC to ensure that only authorized users and services can access these secrets.

apiVersion: v1
kind: Secret
metadata:
  name: mysecrets
type: Opaque
data:
  username: YWRtaW4= # "admin", base64-encoded (example value)
  password: UzNjcjN0IQ== # "S3cr3t!", base64-encoded (example value)

In this example, we are creating a secret called “mysecrets” with two key-value pairs: “username” and “password”. The values are base64-encoded so that they are not stored in plain text in the manifest; keep in mind that base64 is an encoding, not encryption, so access to Secrets should still be restricted with RBAC.
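You can produce the base64-encoded values with the standard base64 utility (the credentials here are placeholders, not values from any real system):

```shell
# Encode placeholder credentials for the Secret manifest.
# -n matters: it stops echo from appending a newline to the encoded value.
echo -n 'admin' | base64     # YWRtaW4=
echo -n 'S3cr3t!' | base64   # UzNjcjN0IQ==

# Decode to verify the round trip:
echo -n 'YWRtaW4=' | base64 -d   # admin
```
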

You can create this manifest file and apply it using the kubectl command line tool. Here is an example of how to create the secret from the manifest file:

kubectl apply -f mysecrets.yaml
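Once created, read access to the Secret can be locked down with a Role scoped to that single object. A sketch, with the role name assumed:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-mysecrets   # assumed name
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["mysecrets"]   # grants access to this one Secret only
    verbs: ["get"]
```

Bind this Role to only the service accounts that genuinely need the credentials, following the same RoleBinding pattern shown earlier.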

Once the secret is created, you can use it in your application by referencing it in your deployment or pod configuration file. For example, if you wanted to use the “username” and “password” values in your application’s environment variables, you could include the following lines in your deployment or pod manifest:

spec:
  containers:
  - name: myapp
    image: myapp:latest
    env:
    - name: MY_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecrets
          key: username
    - name: MY_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecrets
          key: password

This will inject the values from the “mysecrets” secret into your application’s environment variables at runtime, allowing you to securely access sensitive information without exposing it in your code or configuration files.
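If the application expects many keys, an alternative not shown above is envFrom, which maps every key in the Secret to an environment variable of the same name:

```yaml
spec:
  containers:
  - name: myapp
    image: myapp:latest
    envFrom:
    - secretRef:
        name: mysecrets   # exposes "username" and "password" as env vars
```

This is more concise, at the cost of less control over the variable names seen by the application.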

By following these DevSecOps best practices, teams can ensure that their Kubernetes applications are secure and resilient, and can reduce the risk of security breaches and other vulnerabilities.

Red Hat as usual has a great overview on the subject here – https://www.redhat.com/en/topics/devops/what-is-devsecops

These are some other very useful links:
OWASP: https://owasp.org/
NIST: https://www.nist.gov/
DevSecOps.org: https://www.devsecops.org/
SANS Institute: https://www.sans.org/
Jenkins: https://www.jenkins.io/

How to Create and Use a Dockerized DHCP Server for Your Applications and Networks
https://www.expertnetworkconsultant.com/expert-approach-in-successfully-networking-devices/how-to-create-and-use-a-dockerized-dhcp-server-for-your-applications-and-networks/ – Thu, 23 Mar 2023

Docker is a powerful platform for containerizing and deploying applications, and its networking capabilities allow for the creation of isolated test networks and the management of containerized applications.

In some cases, however, containerized applications require a DHCP server to lease IP addresses to the containers running on the same network. By running a Dockerized DHCP server, you can simplify the deployment and management of your containerized applications, and create virtual networks for practicing networking concepts and configurations. In this article, we will walk through the steps for creating and using a Dockerized DHCP server for your applications and networks.

We will cover how to create a bridge network, run the DHCP server container, and configure your host and other containers to use the DHCP server to obtain IP addresses.

Choose a base image: You will need a base image for your DHCP server. In this example, we will use the Alpine Linux base image, which is a lightweight distribution of Linux that is popular for Docker images.

Install DHCP server software: Next, you will need to install the DHCP server software on your image. In this example, we will use the ISC DHCP server software, which is a widely used and well-supported DHCP server.

Configure the DHCP server: Once you have installed the DHCP server software, you will need to configure it to lease IPs. You will need to specify the range of IP addresses that can be leased, the subnet mask, and other network settings.

Create a Docker network – I have called mine my-network:

docker network create my-network

Create the dhcpd.conf file in the build directory.

##########dhcpd.conf###########

default-lease-time 259200;
max-lease-time 777600;
option domain-name "your-domain.com";

subnet 192.168.2.0 netmask 255.255.255.0 {
  range 192.168.2.2 192.168.2.250;
  option broadcast-address 192.168.2.255;
  option routers 192.168.2.1;
  option domain-name-servers 192.168.1.1;
}

Create a Dockerfile: With the base image chosen and the DHCP server configuration written, you can now create a Dockerfile that will build the image. Here is an example Dockerfile:

FROM alpine:latest

RUN apk add --no-cache dhcp

COPY dhcpd.conf /etc/dhcpd.conf

EXPOSE 67/udp

ENTRYPOINT ["dhcpd", "-f", "-d", "--no-pid", "-cf", "/etc/dhcpd.conf"]

In this Dockerfile, we start with the latest Alpine Linux image, then we install the ISC DHCP server software using the apk package manager. We copy a pre-configured dhcpd.conf file to the /etc directory, which contains the configuration settings for the DHCP server. We expose port 67/udp, which is the port used by DHCP servers to lease IP addresses. Finally, we set the ENTRYPOINT to start the dhcpd daemon with the specified options.

Build the image: Once you have created the Dockerfile, you can build the image using the docker build command:

docker build -t dhcp-server .

Run the container: With the image built, you can now run a container from the image using the docker run command:

docker run -d --name dhcp-server --net=host dhcp-server

In this command, we run the container in detached mode (-d), give it a name (--name dhcp-server), and use the host network (--net=host) so that the DHCP server can lease IPs to devices on the same network as the host. We specify the name of the image we built in the previous step (dhcp-server) as the container to run.

Your DHCP server container should now be running and leasing IPs to devices on your network. You can view the logs of the container using the docker logs command:

docker logs dhcp-server

And you can stop and remove the container using the docker stop and docker rm commands:

docker stop dhcp-server
docker rm dhcp-server
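If you prefer not to retype the build and run commands, the same container can be described with Docker Compose. A sketch, assuming the Dockerfile and dhcpd.conf above sit in the current directory (the compose filename is an assumption):

```yaml
# docker-compose.yml
services:
  dhcp-server:
    build: .
    network_mode: host      # equivalent to --net=host in the docker run command above
    restart: unless-stopped # bring the DHCP server back up after host reboots
```

With this file in place, `docker compose up -d` builds and starts the server, and `docker compose down` stops and removes it.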

There are several use cases for having a Docker image running as a DHCP server:

Development and testing: Developers and testers can use a Dockerized DHCP server to create isolated test networks for their applications or services. This allows them to test network configurations and connectivity without interfering with the production network.

Containerized applications: Some containerized applications require a DHCP server to lease IP addresses to the containers running on the same network. By running a Dockerized DHCP server, you can simplify the deployment and management of your containerized applications.

Education and training: DHCP servers are commonly used in networking courses and training programs. By running a Dockerized DHCP server, educators and students can create virtual networks for practicing networking concepts and configurations.

To get hosts to connect to the network served by the Dockerized DHCP server, you will need to configure the hosts to use DHCP to obtain an IP address. This can usually be done by configuring the network interface of the host to use DHCP. The exact steps to do this will depend on the operating system of the host.

For example, on a Linux host, you can configure the network interface to use DHCP by editing the /etc/network/interfaces file and adding the following lines

auto eth0
iface eth0 inet dhcp
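On newer Ubuntu releases that use Netplan instead of /etc/network/interfaces, the equivalent configuration is a small YAML file (the file path and interface name here are assumptions; adjust to your system):

```yaml
# /etc/netplan/01-dhcp.yaml — apply with: sudo netplan apply
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true   # request an IPv4 address via DHCP
```
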

On a Windows host, you can configure the network interface to use DHCP by going to the Control Panel, selecting Network and Sharing Center, selecting Change adapter settings, right-clicking on the network adapter, selecting Properties, selecting Internet Protocol Version 4 (TCP/IPv4), and selecting Obtain an IP address automatically.

Once the host is configured to use DHCP, it will automatically obtain an IP address from the Dockerized DHCP server when it is connected to the network.

You might rightly ask how other containers or hosts obtain an IP address from the DHCP server container above.

You need to create a Docker network and attach containers to it before they can receive IP addresses from the DHCP server. When you create a Docker network, you can specify that it is a bridge network, which is the default network type for Docker. Containers connected to a bridge network can communicate with each other using their IP addresses.

To create a bridge network, you can use the docker network create command. Here’s an example:

docker network create my-network

This command creates a bridge network named my-network. You can then start your DHCP server container on this network by using the --network option when running the container:

docker run -d --name dhcp-server --network my-network dhcp-server

This command starts the DHCP server container in detached mode (-d), names the container dhcp-server, and connects it to the my-network network.

Once your DHCP server container is running on the my-network network, you can start other containers on the same network by using the --network option:

docker run -d --name my-container --network my-network my-image

This command starts a container named my-container from the my-image image, and connects it to the my-network network.

When the container starts up, it will obtain an IP address from the DHCP server running on the my-network network. You can view the IP address of the container by using the docker inspect command:

docker inspect my-container

In the output, look for the IPAddress field under the NetworkSettings section. This will show you the IP address that was assigned to the container by the DHCP server.

Ubuntu has a good guide on DHCP – https://ubuntu.com/server/docs/network-dhcp

Create a Kubernetes Multi-Node Cluster with Kind
https://www.expertnetworkconsultant.com/installing-and-configuring-network-devices/create-a-kubernetes-multi-node-cluster-with-kind/ – Mon, 26 Sep 2022

Did you know you could create a Kubernetes multi-node cluster with Kind without much bother?

The power of clustering goes a long way to reinforce technical knowledge through hands-on application, and it is essential for any serious engineer to build full-scale labs covering well-known architectures. Learning Kubernetes works best when you can build a replica of a production-grade cluster. Minikube has always been helpful, but it does not give you the feel of a real-world multi-node architecture. This is where Kind comes in: it brings the power of real hands-on practice without dedicating the hardware and resources you would need to build a Kubernetes cluster from virtual machines.

Install Prerequisites

apt-get install curl

Install Docker


sudo apt-get update

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

Give user permissions to query Docker

sudo usermod -aG docker $USER

Restart your hosts (nodes) for the permission change to take effect.

Install Kind on Linux

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

Install Kubectl

We’ll need kubectl to work with the Kubernetes cluster, in case it’s not already installed. For this, we can use the commands below:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

Set Permissions

chmod +x ./kubectl

Move kubectl to local

sudo mv ./kubectl /usr/local/bin/kubectl

Create Multi-Node Clusters – 1 Master and 2 Worker Nodes

Create Cluster Manifest File – cluster-config-yaml.yaml


# A sample multi-node cluster config file
# A three node (two workers, one controller) cluster config
# To add more worker nodes, add another role: worker to the list
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: 
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"    
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker


root@cluster-vm:/home/cluster# kind create cluster --name=azvmms-node --config=single-cluster.yaml
Creating cluster "azvmms-node" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-azvmms-node"
You can now use your cluster with:

kubectl cluster-info --context kind-azvmms-node

Creates the Control Plane

- role: control-plane

Creates the 2 Worker Nodes

- role: worker
- role: worker

Create Cluster

kind create cluster --config=cluster-config-yaml.yaml

Check Pods

$ kubectl get pods -A -o wide
NAMESPACE            NAME                                            READY   STATUS    RESTARTS   AGE   IP           NODE                    NOMINATED NODE   READINESS GATES
kube-system          coredns-558bd4d5db-2gszr                        1/1     Running   0          91m   10.244.0.3   spacers-control-plane              
kube-system          coredns-558bd4d5db-46rkp                        1/1     Running   0          91m   10.244.0.2   spacers-control-plane              
kube-system          etcd-spacers-control-plane                      1/1     Running   0          92m   172.18.0.4   spacers-control-plane              
kube-system          kindnet-9jmwv                                   1/1     Running   0          91m   172.18.0.2   spacers-worker2                    
kube-system          kindnet-c2jrx                                   1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
kube-system          kindnet-hlhmx                                   1/1     Running   0          91m   172.18.0.3   spacers-worker                     
kube-system          kube-apiserver-spacers-control-plane            1/1     Running   0          92m   172.18.0.4   spacers-control-plane              
kube-system          kube-controller-manager-spacers-control-plane   1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
kube-system          kube-proxy-97q94                                1/1     Running   0          91m   172.18.0.3   spacers-worker                     
kube-system          kube-proxy-t4ltb                                1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
kube-system          kube-proxy-xrd5l                                1/1     Running   0          91m   172.18.0.2   spacers-worker2                    
kube-system          kube-scheduler-spacers-control-plane            1/1     Running   0          91m   172.18.0.4   spacers-control-plane              
local-path-storage   local-path-provisioner-547f784dff-5dgp6         1/1     Running   0          91m   10.244.0.4   spacers-control-plane              

Deploy a Sample App

kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml

Test Access

https://localhost:30779/
kubectl create deployment my-nginx --image=nginx --replicas=2 --port=8080

Delete Cluster
kind delete clusters <cluster-name>

How to Create Azure Standard Load Balancer with Backend Pools in Terraform
https://www.expertnetworkconsultant.com/design/how-to-create-azure-standard-load-balancer-with-backend-pools-in-terraform/ – Wed, 24 Aug 2022

Image Reference: https://docs.microsoft.com/en-us/azure/load-balancer/media/load-balancer-overview/load-balancer.svg

Building infrastructure as code is where the majority of future cloud deployments will go. In this architecture, I have created an Azure standard load balancer with backend pools to accommodate two Linux virtual machines.

Configure a Linux virtual machine in Azure using Terraform


Below is a list of parts which constitutes this build.

  • Resource Group
  • Virtual Machines
  • Network Interfaces
  • Standard Loadbalancer
  • Availability Sets

As it appears in Azure:

Open your IDE and create the following Terraform files;
providers.tf
network.tf
loadbalancer.tf
virtualmachines.tf

Clone the Git Code Repository

git clone https://github.com/expertcloudconsultant/createazureloadbalancer.git

#Create the providers providers.tf

#IaC on Azure Cloud Platform | Declare Azure as the Provider
# Configure the Microsoft Azure Provider
terraform {

  required_version = ">=0.12"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>2.0"
    }
  }


}

provider "azurerm" {
  features {}
}
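The configuration in the following files references several variables (var.avzs, var.env, var.private_ip, var.corp, var.ssh_access_port) from a vars.tf file that is not shown. A minimal sketch of that file — every default value here is an illustrative assumption, not taken from the repository:

```hcl
# vars.tf — illustrative defaults; adjust for your environment
variable "avzs" {
  description = "List of Azure regions; index 0 is the primary region"
  type        = list(string)
  default     = ["uksouth", "ukwest"]
}

variable "env" {
  description = "Set to \"Static\" to pin the load balancer frontend IP"
  type        = string
  default     = "Dynamic"
}

variable "private_ip" {
  description = "Static frontend IP used when env is \"Static\""
  type        = string
  default     = "10.20.10.100"
}

variable "corp" {
  description = "Prefix for VM and disk names"
  type        = string
  default     = "corp"
}

variable "ssh_access_port" {
  description = "Inbound SSH port for the NSG rule"
  type        = string
  default     = "22"
}
```
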

#Create the virtual network and subnets with Terraform. network.tf

#Create Resource Groups
resource "azurerm_resource_group" "corporate-production-rg" {
  name     = "corporate-production-rg"
  location = var.avzs[0] #Availability Zone 0 always marks your Primary Region.
}



#Create Virtual Networks > Create Spoke Virtual Network
resource "azurerm_virtual_network" "corporate-prod-vnet" {
  name                = "corporate-prod-vnet"
  location            = azurerm_resource_group.corporate-production-rg.location
  resource_group_name = azurerm_resource_group.corporate-production-rg.name
  address_space       = ["10.20.0.0/16"]

  tags = {
    environment = "Production Network"
  }
}


#Create Subnet
resource "azurerm_subnet" "business-tier-subnet" {
  name                 = "business-tier-subnet"
  resource_group_name  = azurerm_resource_group.corporate-production-rg.name
  virtual_network_name = azurerm_virtual_network.corporate-prod-vnet.name
  address_prefixes     = ["10.20.10.0/24"]
}

#Create Private Network Interfaces
resource "azurerm_network_interface" "corpnic" {
  name                = "corpnic-${count.index + 1}"
  location            = azurerm_resource_group.corporate-production-rg.location
  resource_group_name = azurerm_resource_group.corporate-production-rg.name
  count               = 2

  ip_configuration {
    name                          = "ipconfig-${count.index + 1}"
    subnet_id                     = azurerm_subnet.business-tier-subnet.id
    private_ip_address_allocation = "Dynamic"

  }
}

#Create the standard load balancer with Terraform. loadbalancer.tf

#Create Load Balancer
resource "azurerm_lb" "business-tier-lb" {
  name                = "business-tier-lb"
  location            = azurerm_resource_group.corporate-production-rg.location
  resource_group_name = azurerm_resource_group.corporate-production-rg.name

  frontend_ip_configuration {
    name                          = "businesslbfrontendip"
    subnet_id                     = azurerm_subnet.business-tier-subnet.id
    private_ip_address            = var.env == "Static" ? var.private_ip : null
    private_ip_address_allocation = var.env == "Static" ? "Static" : "Dynamic"
  }
}

#Create Loadbalancing Rules
resource "azurerm_lb_rule" "production-inbound-rules" {
  loadbalancer_id                = azurerm_lb.business-tier-lb.id
  resource_group_name            = azurerm_resource_group.corporate-production-rg.name
  name                           = "ssh-inbound-rule"
  protocol                       = "Tcp"
  frontend_port                  = 22
  backend_port                   = 22
  frontend_ip_configuration_name = "businesslbfrontendip"
  probe_id                       = azurerm_lb_probe.ssh-inbound-probe.id
  backend_address_pool_ids       = [azurerm_lb_backend_address_pool.business-backend-pool.id]
}

#Create Probe
resource "azurerm_lb_probe" "ssh-inbound-probe" {
  resource_group_name = azurerm_resource_group.corporate-production-rg.name
  loadbalancer_id     = azurerm_lb.business-tier-lb.id
  name                = "ssh-inbound-probe"
  port                = 22
}

#Create Backend Address Pool
resource "azurerm_lb_backend_address_pool" "business-backend-pool" {
  loadbalancer_id = azurerm_lb.business-tier-lb.id
  name            = "business-backend-pool"
}

#Automated Backend Pool Addition > Gem Configuration to add the network interfaces of the VMs to the backend pool.
resource "azurerm_network_interface_backend_address_pool_association" "business-tier-pool" {
  count                   = 2
  network_interface_id    = azurerm_network_interface.corpnic.*.id[count.index]
  ip_configuration_name   = azurerm_network_interface.corpnic.*.ip_configuration.0.name[count.index]
  backend_address_pool_id = azurerm_lb_backend_address_pool.business-backend-pool.id

}

This line of configuration is what intelligently adds the network interfaces to the backend pool. I call it a gem because it took me quite some time to figure it all out.

 ip_configuration_name   = azurerm_network_interface.corpnic.*.ip_configuration.0.name[count.index]


Create the Linux Virtual Machines virtualmachines.tf

# Create (and display) an SSH key
resource "tls_private_key" "linuxvmsshkey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

#Custom Data Insertion Here
data "template_cloudinit_config" "webserverconfig" {
  gzip          = true
  base64_encode = true

  part {

    content_type = "text/cloud-config"
    content      = "packages: ['nginx']"
  }
}



# Create Network Security Group and rule
resource "azurerm_network_security_group" "corporate-production-nsg" {
  name                = "corporate-production-nsg"
  location            = azurerm_resource_group.corporate-production-rg.location
  resource_group_name = azurerm_resource_group.corporate-production-rg.name


  #Add rule for Inbound Access
  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = var.ssh_access_port # Referenced SSH Port 22 from vars.tf file.
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}


#Connect NSG to Subnet
resource "azurerm_subnet_network_security_group_association" "corporate-production-nsg-assoc" {
  subnet_id                 = azurerm_subnet.business-tier-subnet.id
  network_security_group_id = azurerm_network_security_group.corporate-production-nsg.id
}



#Availability Set - Fault Domains [Rack Resilience]
resource "azurerm_availability_set" "vmavset" {
  name                         = "vmavset"
  location                     = azurerm_resource_group.corporate-production-rg.location
  resource_group_name          = azurerm_resource_group.corporate-production-rg.name
  platform_fault_domain_count  = 2
  platform_update_domain_count = 2
  managed                      = true
  tags = {
    environment = "Production"
  }
}


#Create Linux Virtual Machines Workloads
resource "azurerm_linux_virtual_machine" "corporate-business-linux-vm" {

  name                  = "${var.corp}linuxvm${count.index}"
  location              = azurerm_resource_group.corporate-production-rg.location
  resource_group_name   = azurerm_resource_group.corporate-production-rg.name
  availability_set_id   = azurerm_availability_set.vmavset.id
  network_interface_ids = ["${element(azurerm_network_interface.corpnic.*.id, count.index)}"]
  size                  = "Standard_B1s" # Alternative sizes: Standard_D2ads_v5, Standard_DC1ds_v3, Standard_D2s_v3
  count                 = 2


  #Create Operating System Disk
  os_disk {
    name                 = "${var.corp}disk${count.index}"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS" #Consider Storage Type
  }


  #Reference Source Image from Publisher
  source_image_reference {
    publisher = "Canonical"                    #az vm image list -p "Canonical" --output table
    offer     = "0001-com-ubuntu-server-focal" # az vm image list -p "Canonical" --output table
    sku       = "20_04-lts-gen2"               #az vm image list -s "20.04-LTS" --output table
    version   = "latest"
  }


  #Create Computer Name and Specify Administrative User Credentials
  computer_name                   = "corporate-linux-vm${count.index}"
  admin_username                  = "linuxsvruser${count.index}"
  disable_password_authentication = true



  #Create SSH Key for Secured Authentication - on Windows Management Server [Putty + PrivateKey]
  admin_ssh_key {
    username   = "linuxsvruser${count.index}"
    public_key = tls_private_key.linuxvmsshkey.public_key_openssh
  }

  #Deploy Custom Data on Hosts
  custom_data = data.template_cloudinit_config.webserverconfig.rendered

}

If you would rather use the Azure portal UI to build the same solution, follow Microsoft’s guide Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer and two virtual machines.

]]>
Connecting to GitHub Step by Step https://www.expertnetworkconsultant.com/configuring/connecting-to-github-step-by-step/ Tue, 14 Jun 2022 15:28:21 +0000 http://www.expertnetworkconsultant.com/?p=5057 Continue readingConnecting to GitHub Step by Step]]> Connect Your Environment to GitHub
It is time to overcome the dread of Git. So can we please ask the all-important question right now?

What is Git?
According to Wikipedia;

Git is software for tracking changes in any set of files, usually used for coordinating work among programmers collaboratively developing source code during software development. Its goals include speed, data integrity, and support for distributed, non-linear workflows.

A successful connection to GitHub requires an established trust between the client (your working host) and the server (GitHub).

An SSH key relies upon the use of two related keys, a public key and a private key, that together create a key pair that is used as the secure access credential. The private key is secret, known only to the user, and should be encrypted and stored safely. The public key can be shared freely with any SSH server to which the user wishes to connect.

Public Key authentication – what and why?

The motivation for using public key authentication over simple passwords is security. Public key authentication provides cryptographic strength that even extremely long passwords cannot offer. With SSH, public key authentication improves security considerably as it frees users from remembering complicated passwords (or worse yet, writing them down).

In addition to security, public key authentication also offers usability benefits: it allows users to implement single sign-on across the SSH servers they connect to. Public key authentication also allows automated, passwordless login, a key enabler for the countless secure automation processes that execute within enterprise networks globally.
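To make the key-pair idea concrete, here is a throwaway pair generated into a temporary directory (ed25519, empty passphrase; nothing in ~/.ssh is touched, and the filenames are illustrative only):

```shell
tmp=$(mktemp -d)
# Generate a disposable key pair: demo_key (private) and demo_key.pub (public)
ssh-keygen -t ed25519 -N "" -C "demo@example.com" -f "$tmp/demo_key" -q
# The public half is the part you would share with a server such as GitHub
cat "$tmp/demo_key.pub"
```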

Checking for existing SSH keys
Before you generate an SSH key, you can check to see if you have any existing SSH keys.

  • Open Git Bash.
  • Enter ls -al ~/.ssh to see if existing SSH keys are present.
  • ls -al ~/.ssh
    total 27
    drwxr-xr-x 1 Cloud Architect 197121    0 May 16 09:39 ./
    drwxr-xr-x 1 Cloud Architect 197121    0 May 10 14:40 ../
    -rw-r--r-- 1 Cloud Architect 197121  419 May 16 09:39 githubaccount
    -rw-r--r-- 1 Cloud Architect 197121  107 May 16 09:39 githubaccount.pub
    -rw-r--r-- 1 Cloud Architect 197121 2675 Nov 11  2021 id_rsa
    -rw-r--r-- 1 Cloud Architect 197121  585 Nov 11  2021 id_rsa.pub
    -rw-r--r-- 1 Cloud Architect 197121 1422 Mar 18 16:50 known_hosts
    

    The known_hosts file contains details of your remote hosts and their public keys.

    Generating a new SSH key

    $ ssh-keygen -t ed25519 -C "your_email@domain.com"

    This creates a new SSH key, using the provided email as a label.

    > Generating public/private ed25519 key pair.
    

    When you’re prompted to “Enter a file in which to save the key,” press Enter. This accepts the default file location.

    > Enter a file in which to save the key (/c/Users/you/.ssh/id_algorithm):[Press enter]
    
    Generating public/private ed25519 key pair.
    Enter file in which to save the key (/c/userPath/.ssh/id_ed25519): /c/userPath/.ssh/githubaccount
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /c/userPath/.ssh/githubaccount
    Your public key has been saved in /c/userPath/.ssh/githubaccount.pub
    The key fingerprint is:
    SHA256:CZaWiynl94Vk6dkqXXIBCnewmbgY6vhGnqeq8PrRmulQ git.user@domain.com
    The key's randomart image is:
    +--[ED25519 256]--+
    |    .o..o+. o.   |
    |. o .oo.+o o..   |
    |oo +. .oo....o.  |
    |=o..  .  + .o..  |
    |o.=E   .S..o..   |
    | *.     . + o    |
    |o.+        o     |
    |+*               |
    |@*o              |
    +----[SHA256]-----+
    
    Cloud Architect@DESKTOP-ATCRUJV MINGW64 /c/Workdir/terraform
    $
    
    

    At the prompt, type a secure passphrase. For more information, see “Working with SSH key passphrases.”

    > Enter passphrase (empty for no passphrase): [Type a passphrase]
    > Enter same passphrase again: [Type passphrase again]
    

    Copy the SSH public key to your clipboard.

    If your SSH public key file has a different name than the example code, modify the filename to match your current setup. When copying your key, don’t add any newlines or whitespace.

    Adding your SSH key to the ssh-agent
    Before adding a new SSH key to the ssh-agent to manage your keys, you should have checked for existing SSH keys and generated a new SSH key.
    If you have GitHub Desktop installed, you can use it to clone repositories and not deal with SSH keys.

    Ensure the ssh-agent is running. You can use the “Auto-launching the ssh-agent” instructions in “Working with SSH key passphrases”, or start it manually:

    # start the ssh-agent in the background
    $ eval "$(ssh-agent -s)"
    Agent pid 1304
    
    $ clip < ~/.ssh/githubaccount.pub
    # Copies the contents of the githubaccount.pub file to your clipboard
    

    If GitHub reports “There are no SSH keys associated with your account”, add one as follows:

    In the "Access" section of the sidebar, click SSH and GPG keys.
    Click New SSH key or Add SSH key.

    add public ssh key to github

    added new public ssh key to github successfully

    create new repository on github

    Create new Public Repository on GitHub

    Testing your SSH connection
    After you've set up your SSH key and added it to your account on GitHub.com, you can test your connection.

    Enter the following:

    $ ssh -T git@github.com
    # Attempts to ssh to GitHub
    

    You may see a warning like this:

    > The authenticity of host 'github.com (IP ADDRESS)' can't be established.
    > RSA key fingerprint is SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
    > Are you sure you want to continue connecting (yes/no)?

    Verify that the fingerprint shown matches one of GitHub's published SSH key fingerprints, then type yes.

    Troubleshooting

    $ ssh -vT git@github.com:remote-git-repo/azterraform.git
    OpenSSH_8.8p1, OpenSSL 1.1.1l  24 Aug 2021
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: resolve_canonicalize: hostname github.com:remote-git-repo/azterraform.git is an unrecognised address
    ssh: Could not resolve hostname github.com:remote-git-repo/azterraform.git: Name or service not known
    

    Solutions are always in the problems. Note the hostname in the error above: ssh treated the entire clone URL as a host name. For a connectivity test, pass only the host part (ssh -vT git@github.com); the :remote-git-repo/azterraform.git suffix is Git clone syntax, not part of the hostname.

    $ ssh -vT git@github.com:remote-git-repo/azterraform.git
    OpenSSH_8.8p1, OpenSSL 1.1.1l  24 Aug 2021
    debug1: Reading configuration data /c/userPath/.ssh/config
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: resolve_canonicalize: hostname github.com:remote-git-repo/azterraform.git is an unrecognised address
    ssh: Could not resolve hostname github.com:remote-git-repo/azterraform.git: Name or service not known
    

    Add the following IdentityFile line to the /etc/ssh/ssh_config file to tell ssh where to find your private key.

    # Host *
    #   StrictHostKeyChecking ask
    #   IdentityFile ~/.ssh/id_rsa
    #   IdentityFile ~/.ssh/id_dsa
    #   IdentityFile ~/.ssh/id_ecdsa
    #   IdentityFile ~/.ssh/id_ed25519
        IdentityFile ~/.ssh/githubaccount
    #   Port 22
    #   Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
    #   MACs hmac-md5,hmac-sha1,umac-64@openssh.com
    

    Alternatively, create a per-user config file at ~/.ssh/config containing the same IdentityFile line.
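A minimal version can be created like this (the filename githubaccount matches the key generated earlier; IdentitiesOnly is an extra hardening option beyond the original walkthrough, stopping ssh from offering your other keys):

```shell
mkdir -p ~/.ssh
# Tell ssh which key to offer when the host is github.com
cat >> ~/.ssh/config <<'EOF'
Host github.com
  HostName github.com
  User git
  IdentityFile ~/.ssh/githubaccount
  IdentitiesOnly yes
EOF
chmod 600 ~/.ssh/config
```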

    Setup Your Local Working Environment

    Clone your remote Repository - use git clone

    $ git clone git@github.com:remote-git-repo/azterraform.git
    Cloning into 'azterraform'...
    remote: Enumerating objects: 6, done.
    remote: Counting objects: 100% (6/6), done.
    remote: Compressing objects: 100% (2/2), done.
    remote: Total 6 (delta 0), reused 0 (delta 0), pack-reused 0
    Receiving objects: 100% (6/6), done.
    Cloud Architect@DESKTOP-ATCRUJV MINGW64 /c/Workdir/terraform (master)
    $
    

    Now you can begin working in the repository. Create a file:

    touch filename.tf
    

    Add all items in your working directory using git add .

    $ git add .
    

    Commit your changes to your repo.

    $ git commit -m "Version 1 - Files Created on Local PC"
    [main 2f462f4] Version 1 - Files Created on Local PC
     1 file changed, 0 insertions(+), 0 deletions(-)
     create mode 100644 main.tf
    
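The commit above still lives only on your PC; git push origin main publishes it to GitHub. The sketch below rehearses the same flow end to end against a local bare repository standing in for GitHub, so it can be run safely offline (all paths and names are illustrative):

```shell
tmp=$(mktemp -d)
git init --bare -q "$tmp/remote.git"   # stands in for the GitHub remote
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email "you@example.com"
git config user.name "Example User"
git remote add origin "$tmp/remote.git"
touch main.tf
git add .
git commit -q -m "Version 1 - Files Created on Local PC"
git push -q origin HEAD                # push the current branch to the remote
git ls-remote --heads origin           # the branch now exists on the remote
```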

    Remove Git from Folder

    rm -rf .git
    
    ]]>
    Configure a Linux virtual machine in Azure using Terraform https://www.expertnetworkconsultant.com/installing-and-configuring-network-devices/configure-a-linux-virtual-machine-in-azure-using-terraform/ Tue, 24 May 2022 23:00:46 +0000 http://www.expertnetworkconsultant.com/?p=5101 Continue readingConfigure a Linux virtual machine in Azure using Terraform]]> Infrastructure as Code has become the order of the day. In this article, “Configure a Linux virtual machine in Azure using Terraform”, I aim to guide you through building your first Linux Virtual Machine in Azure. Consider this set of steps a small project to reinforce your Terraform knowledge.

    Configure Your Environment

  • Create providers.tf file
  • Create main.tf file
  • Create vars.tf file
  • Configure Deployment Parts

  • Create a virtual network
  • Create a subnet
  • Create a public IP address
  • Create a network security group and SSH inbound rule
  • Create a virtual network interface card
  • Connect the network security group to the network interface
  • Create a storage account for boot diagnostics
  • Create SSH key
  • Create a virtual machine
  • Use SSH to connect to virtual machine
    Create your vars.tf file

    #Variable file used to store details of repetitive references
    variable "location" {
      description = "availability zone that is a string type variable"
      type    = string
      default = "eastus2"
    }
    
    variable "prefix" {
      type    = string
      default = "emc-eus2-corporate"
    }
    

    Create your providers.tf file

    #Providers file: declares the azurerm provider this configuration needs (the version constraint shown is indicative)
    terraform {
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = ">= 3.0"
        }
      }
    }

    provider "azurerm" {
      features {}
    }
    

    In the next steps, we create the main.tf file and add the following configuration blocks. Note that they all reference a resource group named emc-eus2-corporate-resources-rg, which must itself be declared in main.tf with an azurerm_resource_group resource.

    Create a virtual network

    #Create virtual network and subnets
    resource "azurerm_virtual_network" "emc-eus2-corporate-network-vnet" {
      name                = "emc-eus2-corporate-network-vnet"
      location            = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      address_space       = ["172.20.0.0/16"]
    
      tags = {
        environment = "Production"
      }
    }
    

    Create a subnet

    #Create subnet - presentation tier
    resource "azurerm_subnet" "presentation-subnet" {
      name                 = "presentation-subnet"
      resource_group_name  = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      virtual_network_name = azurerm_virtual_network.emc-eus2-corporate-network-vnet.name
      address_prefixes     = ["172.20.1.0/24"]
    }
    
    #Create subnet - data access tier
    resource "azurerm_subnet" "data-access-subnet" {
      name                 = "data-access-subnet"
      resource_group_name  = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      virtual_network_name = azurerm_virtual_network.emc-eus2-corporate-network-vnet.name
      address_prefixes     = ["172.20.2.0/24"]
    }
    

    Create a public IP address

    #Create Public IP Address
    resource "azurerm_public_ip" "emc-eus2-corporate-nic-01-pip" {
      name                = "emc-eus2-corporate-nic-01-pip"
      location            = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      allocation_method   = "Dynamic"
    }
    

    Create a network security group and SSH inbound rule

    # Create Network Security Group and rule
    resource "azurerm_network_security_group" "emc-eus2-corporate-nsg" {
      name                = "emc-eus2-corporate-nsg"
      location            = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
    
      security_rule {
        name                       = "SSH"
        priority                   = 1001
        direction                  = "Inbound"
        access                     = "Allow"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = "22"
        source_address_prefix      = "*"
        destination_address_prefix = "*"
      }
    }
    
    

    Create a virtual network interface card

    # Create network interface
    resource "azurerm_network_interface" "corporate-webserver-vm-01-nic" {
      name                = "corporate-webserver-vm-01-nic"
      location            = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
    
      ip_configuration {
        name                          = "corporate-webserver-vm-01-nic-ip"
        subnet_id                     = azurerm_subnet.presentation-subnet.id
        private_ip_address_allocation = "Dynamic"
        public_ip_address_id          = azurerm_public_ip.emc-eus2-corporate-nic-01-pip.id
      }
    }
    

    Connect the network security group to the network interface

    # Connect the security group to the network interface
    resource "azurerm_network_interface_security_group_association" "corporate-webserver-vm-01-nsg-link" {
      network_interface_id      = azurerm_network_interface.corporate-webserver-vm-01-nic.id
      network_security_group_id = azurerm_network_security_group.emc-eus2-corporate-nsg.id
    }
    

    Generate random text for a unique storage account name

    # Generate random text for a unique storage account name
    resource "random_id" "randomId" {
      keepers = {
        # Generate a new ID only when a new resource group is defined
        resource_group = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      }
      byte_length = 8
    }
    

    Create a storage account for boot diagnostics

    # Create storage account for boot diagnostics
    resource "azurerm_storage_account" "corpwebservervm01storage" {
      name                     = "diag${random_id.randomId.hex}"
      location                 = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name      = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      account_tier             = "Standard"
      account_replication_type = "LRS"
    }
    

    Create SSH Key

    # Create (and display) an SSH key
    resource "tls_private_key" "linuxsrvuserprivkey" {
      algorithm = "RSA"
      rsa_bits  = 4096
    }
    

    Create a virtual machine

    # Create virtual machine
    resource "azurerm_linux_virtual_machine" "emc-eus2-corporate-webserver-vm-01" {
      name                  = "emc-eus2-corporate-webserver-vm-01"
      location              = azurerm_resource_group.emc-eus2-corporate-resources-rg.location
      resource_group_name   = azurerm_resource_group.emc-eus2-corporate-resources-rg.name
      network_interface_ids = [azurerm_network_interface.corporate-webserver-vm-01-nic.id]
      size                  = "Standard_DC1ds_v3"
    
      os_disk {
        name                 = "corpwebservervm01disk"
        caching              = "ReadWrite"
        storage_account_type = "Premium_LRS"
      }
    
      source_image_reference {
        publisher = "Canonical"
        offer     = "0001-com-ubuntu-server-focal"
        sku       = "20_04-lts-gen2"
        version   = "latest"
      }
    
      computer_name                   = "corporate-webserver-vm-01"
      admin_username                  = "linuxsrvuser"
      disable_password_authentication = true
    
      admin_ssh_key {
        username   = "linuxsrvuser"
        public_key = tls_private_key.linuxsrvuserprivkey.public_key_openssh
      }
    }
    

    Terraform Plan

    The terraform plan command evaluates a Terraform configuration to determine the desired state of all the resources it declares, then compares that desired state to the real infrastructure objects being managed with the current working directory and workspace. It uses state data to determine which real objects correspond to which declared resources, and checks the current state of each resource using the relevant infrastructure provider’s API.

    terraform plan
    

    Terraform Apply

    The terraform apply command performs a plan just like terraform plan does, but then actually carries out the planned changes to each resource using the relevant infrastructure provider’s API. It asks for confirmation from the user before making any changes, unless it was explicitly told to skip approval.

    terraform apply
    

    Command to find an image based on the SKU.

    samuel@Azure:~$ az vm image list -s "2019-Datacenter" --output table
    You are viewing an offline list of images, use --all to retrieve an up-to-date list
    Offer          Publisher               Sku              Urn                                                          UrnAlias           Version
    -------------  ----------------------  ---------------  -----------------------------------------------------------  -----------------  ---------
    WindowsServer  MicrosoftWindowsServer  2019-Datacenter  MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest  Win2019Datacenter  latest
    samuel@Azure:~$ 
    
    samuel@Azure:~$ az vm image list -s "18.04-LTS" --output table
    You are viewing an offline list of images, use --all to retrieve an up-to-date list
    Offer         Publisher    Sku        Urn                                      UrnAlias    Version
    ------------  -----------  ---------  ---------------------------------------  ----------  ---------
    UbuntuServer  Canonical    18.04-LTS  Canonical:UbuntuServer:18.04-LTS:latest  UbuntuLTS   latest
    

    Command to find an image based on the Publisher.

    samuel@Azure:~$ az vm image list -p "Microsoft" --output table
    You are viewing an offline list of images, use --all to retrieve an up-to-date list
    Offer          Publisher               Sku                                 Urn                                                                             UrnAlias                 Version
    -------------  ----------------------  ----------------------------------  ------------------------------------------------------------------------------  -----------------------  ---------
    WindowsServer  MicrosoftWindowsServer  2022-Datacenter                     MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest                     Win2022Datacenter        latest
    WindowsServer  MicrosoftWindowsServer  2022-datacenter-azure-edition-core  MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest  Win2022AzureEditionCore  latest
    WindowsServer  MicrosoftWindowsServer  2019-Datacenter                     MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest                     Win2019Datacenter        latest
    
    samuel@Azure:~$ az vm image list -p "Canonical" --output table
    You are viewing an offline list of images, use --all to retrieve an up-to-date list
    Offer         Publisher    Sku        Urn                                      UrnAlias    Version
    ------------  -----------  ---------  ---------------------------------------  ----------  ---------
    UbuntuServer  Canonical    18.04-LTS  Canonical:UbuntuServer:18.04-LTS:latest  UbuntuLTS   latest
    

    At this point, the required pieces to build a Linux Virtual Machine on Azure are complete. It’s time to test your code.

    You can learn more from Hashicorp by visiting the following link.
    This article was helpful in troubleshooting issues with the Ubuntu SKU.

    ]]>
    How to Route Network Traffic with a Linux Network Virtual Appliance on Azure https://www.expertnetworkconsultant.com/expert-approach-in-successfully-networking-devices/how-to-route-network-traffic-with-a-linux-network-virtual-appliance-on-azure/ Mon, 09 May 2022 23:00:01 +0000 http://www.expertnetworkconsultant.com/?p=5009 Continue readingHow to Route Network Traffic with a Linux Network Virtual Appliance on Azure]]> How to Route Network Traffic with a Linux Network Virtual Appliance on Azure

    Enable the IP Forwarding on the Network Interface of the VM in Azure

    Enable the IP Forwarding in the VM

    sudo sed -i 's/#net.ipv4.ip_forward=/net.ipv4.ip_forward=/' /etc/sysctl.conf
    sudo sed -i 's/net.ipv4.ip_forward=0/net.ipv4.ip_forward=1/' /etc/sysctl.conf
    sudo sed -i 's/#net.ipv6.conf.all.forwarding=/net.ipv6.conf.all.forwarding=/' /etc/sysctl.conf
    sudo sed -i 's/net.ipv6.conf.all.forwarding=0/net.ipv6.conf.all.forwarding=1/' /etc/sysctl.conf
    sudo sysctl -p
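To see exactly what those substitutions do, you can rehearse them on a scratch copy of a typical sysctl.conf (the sample contents below are an assumption about the stock Ubuntu file, which ships these lines commented out):

```shell
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#net.ipv4.ip_forward=1
#net.ipv6.conf.all.forwarding=1
EOF
# Same substitutions as above, run against the scratch file
sed -i 's/#net.ipv4.ip_forward=/net.ipv4.ip_forward=/' "$tmp"
sed -i 's/#net.ipv6.conf.all.forwarding=/net.ipv6.conf.all.forwarding=/' "$tmp"
cat "$tmp"
# → net.ipv4.ip_forward=1
# → net.ipv6.conf.all.forwarding=1
```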
    



    Add route on Route Table for outbound traffic via NVA on Azure


    Associate Subnet to Route Table

    Assess the network topology to understand how traffic flows

    Check effective routes from the associated subnet’s network interfaces – in this guide, the presentation-tier VM

    It appears that a route has been injected into our effective routes. But from the topology diagram above, there isn’t a known connection between the hub virtual network and the production virtual network.

    Verify IP Flow with Azure Network Watcher

    Access is denied because there is no connectivity between the VNets. I will now go ahead and create a VNet-to-VNet peering so the two networks can begin communicating.

    create a virtual network peer from the hub to the spoke networks

    Review the topology

    Verify IP Flow to the NVA with Azure Network Watcher

    Route Network Traffic with a Route Table and Network Virtual Appliance

    In Windows

    PS C:\> Set-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters -Name IpEnableRouter -Value 1
    PS C:\> New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
    
    
    Name                  : {c66932ef-d397-4efc-83cd-75155dad403e}
    DisplayName           : Allow ICMPv4-In
    Description           :
    DisplayGroup          :
    Group                 :
    Enabled               : True
    Profile               : Any
    Platform              : {}
    Direction             : Inbound
    Action                : Allow
    EdgeTraversalPolicy   : Block
    LooseSourceMapping    : False
    LocalOnlyMapping      : False
    Owner                 :
    PrimaryStatus         : OK
    Status                : The rule was parsed successfully from the store. (65536)
    EnforcementStatus     : NotApplicable
    PolicyStoreSource     : PersistentStore
    PolicyStoreSourceType : Local
    
    
    
    PS C:\>
    
    ]]>
    Create Load Balanced Linux Webservers in Azure Step by Step https://www.expertnetworkconsultant.com/expert-approach-in-successfully-networking-devices/create-load-balanced-linux-webservers-in-azure-step-by-step/ Sat, 30 Apr 2022 23:00:11 +0000 http://www.expertnetworkconsultant.com/?p=4900 Continue readingCreate Load Balanced Linux Webservers in Azure Step by Step]]> Create Load Balanced Linux Webservers in Azure Step by Step is what this article aims to help you achieve. There are real benefits to load balancing and IBM puts it this way;

    As strain increases on a website or business application, eventually, a single server cannot support the full workload. To meet demand, organizations spread the workload over multiple servers. Called “load balancing,” this practice prevents a single server from becoming overworked, which could cause it to slow down, drop requests, and even crash.

    Load balancing lets you evenly distribute network traffic to prevent failure caused by overloading a particular resource. This strategy improves the performance and availability of applications, websites, databases, and other computing resources. It also helps process user requests quickly and accurately.

    Pre-requisites

    • Resource Group
    • Virtual Network
    • Subnet
    • Network Security Group

    Create Resource Group


    az group create --name resouceGroupName --location $location

    Create Virtual Network and Subnet

    az network vnet create \
    		--resource-group resouceGroupName \
    		--name virtualnetworkName \
    		--address-prefixes 172.16.0.0/16 \
    		--subnet-name subnetName \
    		--subnet-prefixes 172.16.10.0/24
    

    Create NSG on Azure and Inbound Rule

    az network nsg create \
    --resource-group resouceGroupName \
    --name myNSG
    

    Create NSG Inbound Rule

    az network nsg rule create \
    		--resource-group resouceGroupName \
    		--nsg-name myNSG \
    		--name myNSGRuleHTTP \
    		--protocol '*' \
    		--direction inbound \
    		--source-address-prefix '*' \
    		--source-port-range '*' \
    		--destination-address-prefix '*' \
    		--destination-port-range 80 \
    		--access allow \
    		--priority 200
    

    Associate NSG to Subnet


    az network vnet subnet update --resource-group resouceGroupName \
    		--vnet-name virtualnetworkName \
    		--name subnetName \
    		--network-security-group myNSG
    	

    Create Load Balancer

    az network lb create \
    		--resource-group resouceGroupName \
    		--name myLoadBalancer \
    		--sku Standard \
    		--public-ip-address myPublicIP \
    		--frontend-ip-name myFrontEnd \
    		--backend-pool-name myBackEndPool
    

    Create Frontend IP

    az network public-ip create \
    		--resource-group resouceGroupName \
    		--name myPublicIP \
    		--sku Standard \
    		--zone 1 2 3
    

    The backend pool itself (myBackEndPool) was created together with the load balancer above. The virtual machine NICs are added to it in a later step, once the NICs and virtual machines exist.
    

    Create Load Balancing Rules – Port 80 allows access to the webservers over HTTP. The rule references myHealthProbe, so create the health probe (next step) before running this command on its own.

    az network lb rule create \
    		--resource-group resouceGroupName \
    		--lb-name myLoadBalancer \
    		--name myHTTPRule \
    		--protocol tcp \
    		--frontend-port 80 \
    		--backend-port 80 \
    		--frontend-ip-name myFrontEnd \
    		--backend-pool-name myBackEndPool \
    		--probe-name myHealthProbe \
    		--disable-outbound-snat true \
    		--idle-timeout 15 \
    		--enable-tcp-reset true
    

    Create Health Probes

    az network lb probe create \
    		--resource-group resouceGroupName \
    		--lb-name myLoadBalancer \
    		--name myHealthProbe \
    		--protocol tcp \
    		--port 80
    

    Create Network Interfaces for VMs

    array=(myNicVM1 myNicVM2)
      for vmnic in "${array[@]}"
      do
        az network nic create \
            --resource-group resourceGroupName \
            --name $vmnic \
            --vnet-name myVNet \
            --subnet myBackEndSubnet \
            --network-security-group myNSG
      done
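    If you later add more web servers, only the array changes. As a sanity check, the same loop can be dry-run first so the generated commands can be reviewed before anything is created. This is a sketch using the same resource names as above; swap the printf for the real az call when you are ready:

```shell
# Dry-run sketch: assemble the az commands the loop above would run,
# then print them for review instead of executing them.
nics=(myNicVM1 myNicVM2)
cmds=()
for vmnic in "${nics[@]}"; do
  cmds+=("az network nic create --resource-group resourceGroupName --name $vmnic --vnet-name myVNet --subnet myBackEndSubnet --network-security-group myNSG")
done
printf '%s\n' "${cmds[@]}"
```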
    

    Create Virtual Machines


    Create Virtual Machine #1

    The NIC already defines the VNet and subnet, so --vnet-name and --subnet are not used together with --nics:

    az vm create --resource-group resourceGroupName \
    		--name webservervm1 \
    		--nics myNicVM1 \
    		--image "Canonical:0001-com-ubuntu-server-focal:20_04-lts:latest" \
    		--admin-username azureuser \
    		--generate-ssh-keys
    

    Create Virtual Machine #2

    az vm create --resource-group resourceGroupName \
    		--name webservervm2 \
    		--nics myNicVM2 \
    		--image "Canonical:0001-com-ubuntu-server-focal:20_04-lts:latest" \
    		--admin-username azureuser \
    		--generate-ssh-keys
    

    Add virtual machines to load balancer backend pool

    array=(myNicVM1 myNicVM2)
      for vmnic in "${array[@]}"
      do
        az network nic ip-config address-pool add \
    		--address-pool myBackEndPool \
    		--ip-config-name ipconfig1 \
    		--nic-name $vmnic \
    		--resource-group resourceGroupName \
    		--lb-name myLoadBalancer
      done
    

    Option for Outbound Connectivity for your Virtual Machines – Outbound Rules

    Outbound rules enable you to explicitly define SNAT (source network address translation) for a standard SKU public load balancer. This configuration allows you to use the public IP or IPs of your load balancer for outbound connectivity of the backend instances.

    Use the frontend IP address of a load balancer for outbound via outbound rules

    Create a public IP for the NAT gateway

    az network public-ip create \
    		--resource-group resourceGroupName \
    		--name myNATgatewayIP \
    		--sku Standard \
    		--zone 1 2 3
    

    Create NAT gateway resource
    Use az network nat gateway create to create the NAT gateway resource. The public IP created in the previous step is associated with the NAT gateway.

    az network nat gateway create \
    		--resource-group resourceGroupName \
    		--name myNATgateway \
    		--public-ip-addresses myNATgatewayIP \
    		--idle-timeout 10
    

    Associate NAT gateway with subnet
    Configure the source subnet in the virtual network to use a specific NAT gateway resource with az network vnet subnet update.

    az network vnet subnet update \
    		--resource-group resourceGroupName \
    		--vnet-name myVNet \
    		--name myBackEndSubnet \
    		--nat-gateway myNATgateway
    


    Install NGINX on Ubuntu Servers

    sudo apt-get -y update
    sudo apt-get -y install nginx
    

    Use vim to edit the default Debian page on Ubuntu.
    Browse to the location of the index file and make sure you are a superuser before making changes.

    Customise your nginx webserver for both virtual machines

    cd /var/www/html/
    alpha-weu-production-webservers-vm01:/var/www/html$ ls -ltra
    total 12
    drwxr-xr-x 3 root root 4096 Apr 23 16:18 ..
    -rw-r--r-- 1 root root  672 Apr 23 17:08 index.nginx-debian.html
    drwxr-xr-x 2 root root 4096 Apr 23 17:08 .
    alpha-weu-production-webservers-vm01:/var/www/html$
    

    Modify the Index file
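    The quickest way to tell the two web servers apart behind the load balancer is to have each serve a page containing its own hostname. A minimal sketch follows; it writes to /tmp for illustration, whereas on the VM the target is /var/www/html/index.nginx-debian.html (which needs sudo):

```shell
# Write a simple index page that identifies this server by hostname.
# Target path is /tmp for illustration; on the VM use
# /var/www/html/index.nginx-debian.html (as root).
target=/tmp/index.nginx-debian.html
cat > "$target" <<EOF
<!DOCTYPE html>
<html>
<head><title>$(hostname)</title></head>
<body><h1>Served by $(hostname)</h1></body>
</html>
EOF
cat "$target"
```

    Browsing to the load balancer's frontend IP then shows which backend answered each request.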



    Topology of your perfectly load balanced linux webservers

    ]]>
    Using Python and pyFirmata to Control Arduino Boards on Ubuntu https://www.expertnetworkconsultant.com/python-programming-foundations-for-the-network-engineer/using-python-and-pyfirmata-to-control-arduino-boards-on-ubuntu/ Sat, 10 Oct 2020 05:00:18 +0000 http://www.expertnetworkconsultant.com/?p=4156 Continue readingUsing Python and pyFirmata to Control Arduino Boards on Ubuntu]]> Why am I Using Python and pyFirmata to Control Arduino Boards on Ubuntu?

    So here is the deal: I had a great affinity for Arduino until I fell in love with Python, and my life has never been the same since. Do I need to ditch my Arduino control boards and use a Raspberry Pi instead? What about all the interesting projects I built back then on Arduino? Could I not just learn to control them from Python scripts instead?

    Things you’ll need;

    • Personal computer (Linux | macOS | Windows) with the Arduino IDE.
    • Arduino Board (I have used the Elegoo Uno Version here)
    • USB cable for Arduino.
    • Breadboard
    • Jumper Wires
    • LED Light
    • Resistor

    Step 1: Upload StandardFirmata to your Arduino board (Firmata is a serial communication protocol that lets a host control the Arduino’s GPIO)
    Files > Examples > Firmata > StandardFirmata (You will only need to upload this once).

    What is Firmata?

    Firmata is an intermediate protocol that connects an embedded system to a host computer; the protocol channel uses a serial port by default. The Arduino platform is the standard reference implementation for Firmata, and the Arduino IDE comes with support for Firmata.

    This could work perfectly with Odyssey-X86 with its onboard Arduino Core meaning that you can control the Arduino Core simply using Firmata protocol with different programming languages too!

    The Firmata library implements the Firmata protocol for communicating with software on the host computer. This allows you to write custom firmware without having to create your own protocol and objects for the programming environment that you are using.

    To use this library
    #include <Firmata.h>
    

    Step 2:
    Install PyFirmata on Ubuntu

    pip3 install pyfirmata
    



    Step 3:

    Build your circuit. Roll your sleeves and get your jumper wires, resistor and LED together in a very simple circuit like the one below;

    Blinking LED Circuit

    Schematic: How to wire up your circuit

    Step 4: Write Your Python Blink LED Program

    from pyfirmata import Arduino
    import time

    if __name__ == '__main__':
        # The serial device path may differ on your machine (e.g. /dev/ttyACM0).
        board = Arduino('/dev/ttyACM1')
        print("Communication Successfully Initiated")

        while True:
            board.digital[13].write(1)   # LED on
            time.sleep(0.5)
            board.digital[13].write(0)   # LED off
            time.sleep(0.5)

    Full Code: Blink LED Python Program


    from pyfirmata import Arduino
    import time

    if __name__ == '__main__':
        board = Arduino('/dev/ttyACM1')
        print("Communication Successfully Initiated")

        while True:
            board.digital[9].write(1)
            time.sleep(0.5)
            board.digital[9].write(0)
            time.sleep(0.5)
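    One gotcha: the serial device path ('/dev/ttyACM1' above) is not stable. It depends on what else is plugged in, and some boards enumerate as /dev/ttyUSB0 instead. A small standard-library helper (a sketch; the function name is my own) can find the first candidate port:

```python
import glob

def find_arduino_port(patterns=("/dev/ttyACM*", "/dev/ttyUSB*")):
    """Return the first serial device path matching the usual Arduino
    patterns, or None if no board appears to be connected."""
    for pattern in patterns:
        matches = sorted(glob.glob(pattern))
        if matches:
            return matches[0]
    return None

# Usage sketch (requires a connected board):
# board = Arduino(find_arduino_port() or '/dev/ttyACM0')
```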

    Step 5: Hit Run and see it in action.
    Blinking LED Circuit in action

    Troubleshooting
    If you suddenly see the error below, the USB connection to your Arduino board has been lost:

    raise SerialException('write failed: {}'.format(e))
    serial.serialutil.SerialException: write failed: [Errno 5] Input/output error
    

    There you have it: a step-by-step guide to getting Python to control the GPIO pins on your Arduino. I hope this little project helps you on your journey into electronics.

    See more python projects like this fading led light project.

    ]]>
    How to Install VMWare Workstation 15 on Ubuntu 20.04 https://www.expertnetworkconsultant.com/installing-and-configuring-network-devices/how-to-install-vmware-workstation-15-on-ubuntu-20-04/ Wed, 30 Sep 2020 11:00:46 +0000 http://www.expertnetworkconsultant.com/?p=4022 Continue readingHow to Install VMWare Workstation 15 on Ubuntu 20.04]]> How to Install VMWare Workstation 15 on ubuntu 20.04

    Network engineers no longer deal only with physical network elements; in recent times they have had to build virtual network infrastructure, thanks to virtualisation, to complement their network architecture and operations. Imagine you need to build a small data centre running DHCP on Microsoft servers, or use a hypervisor to run a network appliance from your favourite vendor such as Cisco, F5, Check Point, Palo Alto or Fortinet. For that you need a working knowledge of virtualisation, and today our focus is purely on the world’s renowned virtualisation platform, VMware.

    In this step by step guide, I am going to show you how to install VMWare Workstation on Ubuntu. This tutorial works for Ubuntu versions 12 and up. I am running the Desktop version of Ubuntu 20.04 in this post.

    You will learn:

    • How to install VMware Workstation prerequisites
    • How to download VMware Workstation
    • How to install VMware Workstation
    • How to start VMware Workstation

    Let us begin by downloading VMWare Workstation for Linux: Download VMWare Workstation for Linux

    Figure 1.0 – Download VMWare Workstation Pro.
     how to install vmware workstation 15 on ubuntu 20.04 - download vmware workstation pro for linux

    Figure 1.1 – Download VMWare Workstation Pro for Linux.
     how to install vmware workstation 15 on ubuntu 16.04 - download vmware workstation full bundle

    Step 1:
    Always start with an update

    $ sudo apt update
    

    how to install vmware workstation 15 on ubuntu 18.04 - sudo apt-get update

    Step 2:
    Install the Essentials

    $ sudo apt install build-essential
    

    Figure 1.1 – Install Build Essentials
    how to install vmware workstation 15 on ubuntu 18 - sudo apt install build-essential

    Step 3:
    Install all the required libraries

    sudo apt install libcanberra-gtk-module
    sudo apt install libaio1 build-essential linux-headers-$(uname -r)
    
    dpkg -l | grep linux-headers
    


    Now, press Y and then press Enter to confirm the installation.


    Step 4:
    Browse to the Downloads folder where the VMWare Workstation Installer was saved.

    ubuntu-20.04-Desktop:/$ cd ~/Downloads/
    VMware-Workstation-Full-15.5.2-15785246.x86_64.bundle
    

    Locate Installer
    Figure 1.2 – Once the VMware Workstation Pro installer is downloaded, navigate to the ~/Downloads directory with the following command:

    Step 4b:

    As you can see, the VMware Workstation Pro installer file is here. Copy the filename.
    $ ls -lh
    

    Figure 1.3 – Copy VMWare Workstation Bundle filename for the remainder of the steps

    Temporarily disable host access control with the following command:

    $ xhost +

    Figure 1.4 – Disable Host Access Control.

    Step 5:
    Apply Permissions

    $ chmod +x VMware-Workstation-Full-15.5.2-15785246.x86_64.bundle
    

    Figure 1.5 – Use CHMOD +X to apply the appropriate permissions for the installer file VMware-Workstation-Full-15.5.2-15785246.x86_64.bundle

    Step 6:
    Begin Installation of VMWare Workstation Pro.
    Locate the previously downloaded VMware Workstation PRO for Linux bundle file and begin the installation. Please note that the file name might be different:

    $ sudo ./VMware-Workstation-Full-15.5.2-15785246.x86_64.bundle
    

    Figure 1.6 – Install VMWare Workstation on Ubuntu
    Begin the installation of the VMware Workstation PRO for Linux on Ubuntu 20.04
    Be patient. Wait for the installation to finish.

    Step 7:
    Launch VMware Workstation Pro.

    launch vmware workstation

    Congratulations, You have just successfully installed VMWare Workstation for Ubuntu.

    Use a Trial Evaluation or apply license to enjoy the amazing virtualisation platform.

    Install License for VMWare Workstation Pro

    Sometimes, you may encounter issues with VMWare Workstation not able to run your virtual machines and throwing up a series of errors. Below are some very popular ones you are likely to encounter.

    Error 1:
    “Could not open /dev/vmmon: No such file or directory.
    Please make sure that the kernel module `vmmon’ is loaded.”


    Error 2:
    “Failed to initialize monitor device.”


    Error 3:
    “Unable to change virtual machine power state: Transport (VMDB) error -14: Pipe connection has been broken.”


    Apply the following commands to troubleshoot these errors.

    Generate key pairs for vmmon and vmnet components

    1) sudo openssl req -new -x509 -newkey rsa:2048 -keyout VMWARE.priv -outform DER -out VMWARE.der -nodes -days 36500 -subj "/CN=VMware/"
    

    Attach the generated key to the vmmon and vmnet components

    2) sudo /usr/src/linux-headers-`uname -r`/scripts/sign-file sha256 ./VMWARE.priv ./VMWARE.der $(modinfo -n vmmon)
    3) sudo /usr/src/linux-headers-`uname -r`/scripts/sign-file sha256 ./VMWARE.priv ./VMWARE.der $(modinfo -n vmnet)
    
    4) sudo mokutil --import VMWARE.der
    

    Create and confirm a password:

    Test

    5) mokutil --test-key VMWARE.der
    

    A better fix, which has always worked for me, is to disable Secure Boot in the BIOS.

    Reboot the PC and enter the BIOS menu
    Disable Secure Boot
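    Before rebooting into the BIOS, it is worth checking whether the machine booted via UEFI at all, since Secure Boot only exists on UEFI systems; on UEFI machines, mokutil --sb-state will also report whether Secure Boot is currently enabled. A quick check from the running system:

```shell
# Secure Boot only applies to UEFI systems: /sys/firmware/efi is
# present only when the machine booted via UEFI firmware.
if [ -d /sys/firmware/efi ]; then
    efi_status="UEFI boot - check 'mokutil --sb-state' for Secure Boot"
else
    efi_status="Legacy BIOS boot - Secure Boot not applicable"
fi
echo "$efi_status"
```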

    Reboot and then run:
    sudo vmware-modconfig --console --install-all (not always necessary, but it can help)

    Now that you have a fully working VMWare Workstation, let us build a quick lab by spinning up a Virtual Machine and working on a Network Address Translation lab as per this guide here: Configure NAT on Cisco and VyOS

    ]]>