The cloud computing landscape is constantly evolving, with new technologies and tools emerging all the time. One such innovation that has caught the attention of developers and IT professionals alike is Azure Kubernetes Virtual Nodes. This powerful technology promises to revolutionize the way we think about scaling and managing our cloud applications. In this article, we’ll explore what Azure Kubernetes Virtual Nodes are, how they work, and why they have the potential to transform the cloud computing industry as we know it. So buckle up and get ready to discover a game-changing tool that can take your cloud operations to the next level!
Table of Contents:
- Understanding Virtual Nodes in Azure Kubernetes
- Setting Up and Implementing Virtual Nodes
- How Do Virtual Nodes Compare with Other Solutions for Containerized Workloads?
- Advantages of Virtual Nodes
- Conclusion
Understanding Virtual Nodes in Azure Kubernetes
Azure Kubernetes Virtual Nodes is a service offered by Microsoft Azure that allows Kubernetes cluster operators to deploy and manage containerized workloads on serverless infrastructure, without needing to provision and manage virtual machines. Virtual nodes are created by integrating a Kubernetes cluster with a serverless container platform such as Azure Container Instances (ACI), AWS Fargate, or Google Cloud Run, which enables containers to be created and destroyed on demand.
Azure Kubernetes virtual nodes are built on Virtual Kubelet, an open-source implementation of the Kubernetes kubelet that lets you run container workloads on cloud container services, such as Azure Container Instances (ACI) and AWS Fargate, without managing the underlying infrastructure. Virtual Kubelet acts as a middle layer between Kubernetes and the cloud provider's container service, allowing Kubernetes to schedule and manage pods on virtual nodes. This means you can easily scale your Kubernetes cluster by adding capacity from the cloud provider, which is especially useful for handling bursty workloads or temporary spikes in demand for your application.
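One practical consequence of this design is that the virtual node carries a taint, so ordinary pods are not scheduled onto it by accident — only pods that explicitly tolerate it land there (which is why the deployment later in this article needs a tolerations section). As a quick sketch, assuming the add-on is enabled and the virtual node has the default AKS name `virtual-node-aci-linux`, you can inspect this yourself:

```shell
# List all nodes; with the virtual node add-on enabled, a node
# named virtual-node-aci-linux (backed by Virtual Kubelet/ACI)
# appears alongside the regular VM-based agent nodes
kubectl get nodes

# Inspect the taint that keeps ordinary pods off the virtual node;
# pods must tolerate virtual-kubelet.io/provider to be scheduled there
kubectl describe node virtual-node-aci-linux | grep -i taint
```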
Setting Up and Implementing Virtual Nodes
To implement virtual nodes in Azure Kubernetes Service (AKS), follow these steps:
Step 1: Create an Azure Kubernetes Service cluster
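As a sketch, an AKS cluster with the virtual node add-on can be created with the Azure CLI roughly as follows. The cluster and resource group names match the command summary at the end of this article; the location and subnet name are placeholders, and virtual nodes additionally require the Azure CNI network plugin and a dedicated subnet for ACI in the cluster's virtual network:

```shell
# Create a resource group (location is illustrative)
az group create --name AksVirtualNodePool --location eastus

# Create an AKS cluster using the Azure CNI network plugin,
# which the virtual node add-on requires
az aks create \
  --resource-group AksVirtualNodePool \
  --name virtualnodedemo \
  --node-count 1 \
  --network-plugin azure

# Enable the virtual node add-on, pointing it at a subnet
# reserved for Azure Container Instances (subnet name is a placeholder
# and must already exist in the cluster's virtual network)
az aks enable-addons \
  --resource-group AksVirtualNodePool \
  --name virtualnodedemo \
  --addons virtual-node \
  --subnet-name myAciSubnet
```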
Step 2: Create a deployment in AKS.
Please note this part of the YAML. The nodeSelector targets the virtual kubelet, and the tolerations section allows the pods to be scheduled on ACI (Azure Container Instances). The rest of the Deployment is straightforward: it pulls the NGINX image from Docker Hub and deploys it.
```yaml
# To schedule pods on Azure Virtual Nodes
nodeSelector:
  kubernetes.io/role: agent
  beta.kubernetes.io/os: linux
  type: virtual-kubelet
tolerations:
- key: virtual-kubelet.io/provider
  operator: Exists
- key: azure.com/aci
  effect: NoSchedule
```
Here is the full deployment file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
      - name: app1-nginx
        image: nginx:latest
        ports:
        - containerPort: 80
      # To schedule pods on Azure Virtual Nodes
      nodeSelector:
        kubernetes.io/role: agent
        beta.kubernetes.io/os: linux
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule
```
Now we will deploy the container with the kubectl command and the above deployment file:
```shell
kubectl apply -f virtual-node.yaml
```
Step 3: Verify that the container is deployed.

```shell
kubectl get nodes
```

The node list clearly shows the virtual node on which the container is deployed.
Verify with the command below; it shows that the aci-connector pod, which links AKS to ACI (Azure Container Instances), is running in the kube-system namespace:

```shell
kubectl get pods -n kube-system
```
Step 4: Verify the logs generated by the POD hosted in the ACI
```shell
# Verify logs of ACI Connector Linux
kubectl logs -f $(kubectl get po -n kube-system | egrep -o 'aci-connector-linux-[A-Za-z0-9-]+') -n kube-system
```
Step 5: Verify whether the service is running or not.
```shell
kubectl get svc
```
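Note that the deployment file above does not itself define a Service, so this command will only list one if you create it separately. As a minimal sketch, a LoadBalancer Service for the NGINX deployment might look like the following (the Service name is a hypothetical choice; the selector matches the `app1-nginx` label from the deployment above):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-svc
spec:
  type: LoadBalancer
  selector:
    app: app1-nginx
  ports:
  - port: 80
    targetPort: 80
EOF
```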
Get the pod info; the NODE column clearly shows that the pods are running on the virtual node (Virtual Kubelet/Azure ACI). Note down the pod's IP address.

```shell
kubectl get pods -o wide
```
Step 6: Log in to a pod and install curl.

We create a sample pod so we can run curl against the private IP address of the NGINX container.
```shell
kubectl run -it --rm testvk --image=mcr.microsoft.com/dotnet/runtime-deps:6.0

# Run this once you are logged into the container started above
apt-get update && apt-get install -y curl
```
Step 7: Verify that we can reach the NGINX pod's IP address and browse it — this is why we installed curl.
```shell
# Access Application
curl -L http://10.239.0.5
```
We can verify that the page loads successfully.
The entire command-line workflow is provided below for easy reference:
```shell
az login

# Configure Command Line Credentials
az aks get-credentials --name virtualnodedemo --resource-group AksVirtualNodePool

# Verify Nodes
kubectl get nodes
kubectl get nodes -o wide

# Verify aci-connector-linux
kubectl get pods -n kube-system

# Verify logs of ACI Connector Linux
kubectl logs -f $(kubectl get po -n kube-system | egrep -o 'aci-connector-linux-[A-Za-z0-9-]+') -n kube-system

# Deploy
kubectl apply -f virtualnodeDeployment.yaml

# Verify pods
kubectl get pods -o wide

# Get Public IP
kubectl get svc

kubectl run -it --rm testvk --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
# Run this once you are logged into the container started above
apt-get update && apt-get install -y curl

# Access Application
curl -L http://10.239.0.5

# Delete Application
kubectl delete -f virtualnodeDeployment.yaml
kubectl delete -f helloworlddeployment.yaml
```
How Do Virtual Nodes Compare with Other Solutions for Containerized Workloads?
Here are some other solutions for running containerized workloads:
- Virtual Machines
- Bare Metal Servers
- Serverless Computing
- Function-as-a-Service
- Container-as-a-Service
1. Virtual Machines: Virtual machines provide complete virtual environments, including an operating system, libraries, and application runtime. However, virtual machines require you to provision and manage the underlying infrastructure.
On the other hand, Virtual Nodes allow cluster managers to run workloads on a serverless infrastructure without the need to manage and provision virtual machines.
2. Bare Metal Servers: They provide a low-level hardware interface that enables containerized workloads to run directly on the host. It also offers high performance and resource utilization, but it can be difficult to manage and scale, particularly in distributed environments.
In Kubernetes, virtual nodes offer the same benefits but without the need to manage and provision physical hardware.
3. Serverless Computing: Serverless computing platforms, such as AWS Lambda and Google Cloud Functions, enable developers to run code without the need to manage servers or infrastructure. Serverless platforms can be cost-effective and scalable, but they may not be suitable for all types of workloads, particularly those that require long-running processes or custom runtime environments.
Virtual nodes offer a more flexible environment for running containerized workloads, with the ability to customize the runtime environment and support long-running processes.
4. Function-as-a-Service (FaaS): Function-as-a-Service platforms enable developers to run code in response to specific events or triggers, such as an HTTP request or a database change. FaaS platforms can be cost-effective and scalable, but they may not be suitable for workloads that require long-running processes or custom runtime environments.
On the other hand, virtual nodes offer a more flexible environment for running workloads with the ability to support long-running processes and custom runtime environments.
5. Container as a Service (CaaS): Container-as-a-Service platforms, such as Docker Swarm and Google Kubernetes Engine, provide a managed environment for running containerized workloads. CaaS platforms can simplify management and scaling, but they may not offer the same level of customization or control as other solutions.
Virtual nodes offer similar benefits, adding the advantage of a serverless infrastructure and greater flexibility for customizing the runtime environment.
Advantages of Virtual Nodes
Here are some of the advantages offered by virtual nodes in Kubernetes:
- Cost-effectiveness: It allows cluster managers to run workloads on a serverless infrastructure without the need to manage and provision virtual machines, reducing infrastructure costs.
- Scalability: It can scale up or down automatically based on demand, enabling cluster managers to support a growing number of workloads without the need for manual intervention.
- Flexibility: Virtual Nodes provide a flexible environment for running containerized workloads, with the ability to customize runtime environments and support long-running processes.
- Reduced complexity: Virtual Nodes simplify management and scaling, enabling cluster managers to focus on application development rather than infrastructure management.
- Improved performance: Virtual Nodes provide faster startup times and lower overhead than traditional virtual machines, allowing workloads to start and run more quickly and efficiently.
- Improved resource utilization: Virtual Nodes can maximize resource utilization by scheduling workloads based on available capacity, reducing the risk of underutilized resources or overprovisioning.
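Because virtual nodes remove the capacity ceiling of a fixed VM pool, the automatic scaling described above pairs naturally with the Horizontal Pod Autoscaler. As a minimal sketch, using the deployment name from the example earlier in this article (the thresholds are illustrative, and CPU-based autoscaling additionally requires the containers to declare CPU resource requests, which the example deployment does not):

```shell
# Autoscale the nginx deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization; replicas that exceed the
# VM nodes' capacity can be scheduled onto the virtual node (ACI)
kubectl autoscale deployment app1-nginx-deployment \
  --min=2 --max=10 --cpu-percent=70

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa
```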
Conclusion
Virtual nodes provide additional scaling options and flexibility for running containerized workloads in an Azure Kubernetes Service cluster. With virtual nodes, you can scale your workloads up and down without needing to provision and manage additional compute resources.