My understanding of "VMware vSphere with Tanzu"
- Abhishek Shukla
- Apr 7, 2021
- 7 min read
Updated: Aug 27, 2021
Hello Everyone,
I have been reading about cloud-native applications these days, and it is interesting to learn about the different cloud-native platforms. But before that, let me define what cloud native actually is. Cloud native is an architectural philosophy for designing applications and infrastructure. Containers provide a way to package and run an application, and running such applications at scale requires a container orchestrator. Kubernetes is an open-source container orchestrator for managing containerized workloads and services that facilitates both declarative configuration and automation. It is portable, extensible, and scalable, and it has a large, rapidly growing ecosystem; Kubernetes services, support, and tools are widely available. These days, applications are constructed of multiple microservices that run across a large number of Kubernetes pods and VMs.
VMware vSphere with Tanzu is a platform that creates a Kubernetes control plane directly on VMware ESXi, making the ESXi hosts part of the Kubernetes cluster. In the Tanzu Kubernetes cluster architecture, the vSphere cluster (with ESXi hosts as worker nodes) runs a Supervisor Cluster, on top of which Guest Clusters (TKG clusters) are provisioned. The guest clusters have their own control plane VMs, worker nodes, networking, pods, and namespaces, and they are isolated from each other. Supervisor Clusters and Guest Clusters communicate via their API servers.
Components
1. Workload
In vSphere with Tanzu, a workload is a deployed application that consists of containers running inside vSphere Pods, VMs, or both. It can also be an application running inside a Tanzu Kubernetes cluster deployed by the Tanzu Kubernetes Grid service.
2. Supervisor Cluster
The Supervisor Cluster provides the management plane on which Tanzu Kubernetes clusters are built. The Tanzu Kubernetes Grid (TKG) Service is a controller manager, running as part of the Supervisor Cluster, that includes a set of controllers for provisioning Tanzu Kubernetes clusters.
3. Supervisor Namespace
When Tanzu Kubernetes clusters are provisioned, a resource pool and a VM folder are created in a Supervisor namespace. Resource quotas and a storage policy are applied to the namespace and inherited by the Tanzu Kubernetes clusters deployed in it. The Tanzu Kubernetes cluster control plane and worker node VMs are placed within that resource pool and VM folder.
4. Tanzu Kubernetes Cluster
The Tanzu Kubernetes cluster is a distribution of the open-source Kubernetes container platform that is built, signed, and supported by VMware. Tanzu Kubernetes clusters are built on top of the Supervisor Cluster and are defined in a Supervisor namespace using a custom resource. They use VMware's open-source Photon OS and are integrated with the underlying vSphere infrastructure, including storage, networking, and authentication.
5. vSphere Pod
A vSphere Pod is a VM with a small footprint that runs one or more containers, similar to a Kubernetes Pod. Each pod is sized for its workload and has explicit resource reservations for that workload: it is allocated the exact amount of storage, memory, and CPU required for the workload to run.
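For illustration, when the Supervisor Cluster uses NSX-T networking, an ordinary Kubernetes Pod manifest applied to a Supervisor namespace runs as a vSphere Pod. A minimal sketch (the namespace, pod name, and image below are placeholders, not values from this setup):
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # placeholder name
  namespace: test-ns-1    # a Supervisor namespace
spec:
  containers:
  - name: web
    image: nginx:1.21     # any container image
    resources:
      requests:           # the vSphere Pod is sized from these requests
        cpu: 500m
        memory: 128Mi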
Prerequisites
To run Kubernetes workloads natively on vSphere, Workload Management must be enabled. Enabling it creates the Supervisor Cluster where vSphere Pods run and from which Tanzu Kubernetes clusters (guest clusters) are provisioned. There are a few prerequisites for compute, networking, and storage.
1. For vSphere Cluster
A vSphere cluster is a collection of ESXi hosts managed by vCenter Server. To enable Workload Management, at least three ESXi hosts are required; if you are using vSAN, a minimum of four ESXi hosts is required.
The vSphere cluster must be configured with High Availability (HA) enabled.
The vSphere cluster must be configured with Distributed Resource Scheduler (DRS) enabled and set to fully automated mode.
The cluster must use shared storage for vSphere HA, DRS, and persistent storage volumes.
2. Networking Stack
To enable Workload Management, networking must be configured for the Supervisor Cluster. A Supervisor Cluster can use either the vSphere networking stack or VMware NSX-T™ Data Center to provide connectivity to Kubernetes control plane VMs, services, and workloads. When a Supervisor Cluster is configured with the vSphere networking stack, all hosts in the cluster are connected to a vSphere Distributed Switch (vDS) that provides connectivity to Kubernetes workloads and control plane VMs; such a Supervisor Cluster requires a third-party load balancer to provide connectivity to DevOps users and external services. A Supervisor Cluster that is configured with VMware NSX-T™ Data Center uses the software-based networks of that solution, as well as an NSX Edge load balancer, to provide connectivity to external services and DevOps users.
To use NSX-T Data Center networking for the Supervisor Cluster, review the system requirements and topologies: https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-B1388E77-2EEC-41E2-8681-5AE549D50C77.html
The vCenter Server and ESXi hosts that are part of the Workload Management cluster need to be prepared for NSX-T. Refer to the VMware documentation to install and configure NSX-T Data Center for vSphere with Tanzu: https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-8D0E905F-9ABB-4CFB-A206-C027F847FAAC.html
3. Storage policy
Storage policies are created for the datastore placement of Kubernetes control plane VMs, containers, and images. Storage policies are associated with different storage classes. Before enabling Workload Management, create a storage policy for the placement of the Kubernetes control plane VMs.
Make sure the datastore is shared between all ESXi hosts in the cluster.
VM storage policies must be configured and kept up to date.
Storage policy creation with vSphere: https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-544286A2-A403-4CA5-9C73-8EFF261545E7.html#GUID-544286A2-A403-4CA5-9C73-8EFF261545E7
4. Content Library
The content library contains the distributions of Tanzu Kubernetes releases in the form of OVA templates. You can create a Local Content Library, where images are uploaded manually, or a Subscribed Content Library, which pulls the latest released images automatically.
Configuration
To configure and provision a Tanzu Kubernetes cluster, let's first create the content library. It must be created in the vCenter Server that manages the vSphere cluster where the Supervisor Cluster runs.
1. Create Content Library
The content library provides the distributions of Tanzu Kubernetes releases in the form of OVA templates.
Login to the vCenter Server with administrator credentials.
Select Content Libraries under the Inventories tab.
Click on + Create and enter the required details.
Verify the identity of the subscription host and click YES to proceed.
Select the storage location for the library contents and click NEXT.
Review the content library settings and click FINISH.
The library is created and is available under the Content Libraries section.
Click on the newly created library and observe the details and the OVAs available.
Note: You can create a Subscribed Content Library to automatically pull the latest released images, or you can create a Local Content Library and upload the images manually. Subscription URL: https://wp-content.vmware.com/v2/latest/lib.json
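If you prefer the command line, the same subscribed library can be created with the open-source govc CLI. This is a sketch under the assumption that govc is installed and GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD point at your vCenter Server; the library name and datastore below are placeholders:
# Create a subscribed content library backed by a datastore (names are placeholders)
govc library.create -sub "https://wp-content.vmware.com/v2/latest/lib.json" -ds my-datastore tkg-content-library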
2. Enable Workload Management
Enabling Workload Management on a vSphere cluster creates the Supervisor Cluster. Workload Management enables deploying and managing Kubernetes workloads in vSphere, letting you leverage both Kubernetes and vSphere functionality. Once the vSphere cluster is configured for Workload Management, namespaces can be created that provide compute, networking, and storage resources for Kubernetes applications.
Network Support: You can select between two networking stacks when configuring Workload Management: NSX-T and vCenter Server networking. You can check the checklist for each by clicking Menu > Workload Management > Network Support.
HA and DRS Support: HA and DRS must be enabled on the vSphere cluster where you set up Workload Management, with DRS in fully automated mode.
Storage Policy: Storage policies must be created that determine the datastore placement of the Kubernetes control plane VMs, containers, and images.
Load Balancer: A Supervisor Cluster that is configured with VMware NSX-T™ Data Center uses the software-based networks of that solution, as well as an NSX Edge load balancer, to provide connectivity to external services. If the vCenter Server network is used, a load balancer must be configured to support network connectivity to workloads from client networks and to load-balance traffic between Tanzu Kubernetes clusters.
Tanzu Kubernetes Grid: A content library must be created on the vCenter Server system. The VM image used for creating the nodes of Tanzu Kubernetes clusters is pulled from that library, which contains the latest Kubernetes and OS distributions. (https://wp-content.vmware.com/v2/latest/lib.json)
Steps:
Login to the vCenter server with administrator credentials.
Select Workload Management.
Click on GET STARTED.
vCenter Server and Network: Select a vCenter Server, then select a networking stack option and click NEXT.
Select a Cluster: Select a compatible cluster from the cluster list and click NEXT.
Control Plane Size: Allocate capacity for the Kubernetes control plane VMs. The amount of resources that you allocate to the control plane VMs determines the amount of Kubernetes workloads the cluster can support. Select a resource allocation size and click NEXT.
Storage: Select the storage policy to be used for datastore placement of the Kubernetes control plane VMs and containers. This policy is associated with a datastore in the vSphere environment.
Management Network: Workload Management consists of three Kubernetes control plane VMs and the Spherelet process on each host, which allows the host to be joined to a Kubernetes cluster. The control plane VMs are connected to a management network that carries traffic to vCenter Server.
Workload Network configuration: Configure the NSX-T-capable vDS switch, NSX-T Edge cluster, Pod CIDR, Service CIDR, Ingress CIDR, and Egress CIDR for the TKG guest cluster VMs (example values are shown after the note below).
TKG configuration: Add the content library to give the workloads access to the VM images.
Review and Confirm: Review all the details before confirming the setup for Workload Management on the cluster.
Click FINISH.
The cluster is available under Menu > Workload Management > Clusters.
Note: As mentioned above, Workload Management consists of three Kubernetes control plane VMs, which allow the ESXi hosts (Kubernetes nodes) to be joined to the Kubernetes cluster. Once the workload cluster is created, you will observe that three SupervisorControlPlaneVMs have been created. These are the control plane VMs; they interact with the vSphere infrastructure to provide the services and capabilities of vSphere with Tanzu.
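As an illustration, a Workload Network configuration could look like the following. These are example values only (assumptions, not defaults or recommendations), and the ranges must not overlap with each other or with existing networks:
Pod CIDR:     10.244.0.0/21     (example value)
Service CIDR: 10.96.0.0/24      (example value)
Ingress CIDR: 192.168.100.0/24  (example value)
Egress CIDR:  192.168.200.0/24  (example value)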
3. Create Supervisor Namespaces
Once the Supervisor Cluster is deployed, configured, and licensed, Supervisor namespaces can be created on the Supervisor Cluster to run Kubernetes applications.
Steps:
Login to the vCenter server with administrator credentials.
Select Workload Management.
Select the Supervisor Cluster created earlier.
Click on Namespaces.
Click on NEW NAMESPACE.
Select the cluster where you would like to create the namespace.
Provide a name and click on CREATE.
The namespace has been created and is available under Menu > Workload Management > Namespaces.
To access the namespace, you must have the Kubernetes CLI tool installed as a plugin. You can get the CLI tool by clicking on Copy Link or Open.
The resource limits and object limits information is available through vCenter Server under the Configure section.
Storage Policies, ConfigMaps, Secrets, and Persistent Volume Claims are available under the Storage section.
Network Policies, Services, Ingress, and Endpoint information is available under the Network section.
Download the CLI plugin for your operating system.
You can access the namespaces and create guest clusters using the CLI tool.
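For example, from the namespace context you can request a persistent volume backed by the namespace's storage policy, which is exposed in Kubernetes as a storage class. A minimal sketch (the claim name and size are placeholders; the namespace and storage class names reuse the example later in this article):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                         # placeholder name
  namespace: test-ns-1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vwk-storage-policy   # the storage policy exposed as a storage class
  resources:
    requests:
      storage: 2Gi                       # placeholder size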
4. Create Tanzu Kubernetes Cluster
A Tanzu Kubernetes cluster is created by invoking the Tanzu Kubernetes Grid Service declarative API. Once the cluster is created, you can manage it and deploy workloads to it using the kubectl command.
Steps:
Download and install the Kubernetes CLI tool for vSphere as mentioned in the previous section.
Log in to the namespace context using the command below:
kubectl-vsphere.exe login --insecure-skip-tls-verify --vsphere-username <USERNAME> --server=<ip-address>
Verify the control plane nodes, storage classes, and available VM images:
kubectl get nodes
kubectl get sc
kubectl get virtualmachineimages
Switch context to the Supervisor namespace where you want to provision the Tanzu Kubernetes cluster:
kubectl config get-contexts
kubectl config use-context <SUPERVISOR-NAMESPACE>
Construct the YAML file for provisioning the Tanzu Kubernetes cluster and save it as <cluster-name>.yaml. The storageClass is populated with the previously configured storage policy and is backed by a datastore. For example:
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: test-ns-1
spec:
  distribution:
    version: v1.18.5
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: vwk-storage-policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: vwk-storage-policy
Provision the cluster by running the kubectl apply command:
kubectl apply -f <cluster-name>.yaml
Verify the provisioned cluster using the commands below:
kubectl get tanzukubernetesclusters
kubectl describe tanzukubernetescluster CLUSTER-NAME
In the YAML file above, the control plane count is 1 and the worker node count is 3. This can be verified in vCenter under Namespaces.
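Once the cluster shows as running, you can log in to it and deploy workloads directly. A sketch using the same CLI plugin, with the cluster and namespace names from the example above:
kubectl-vsphere.exe login --insecure-skip-tls-verify --vsphere-username <USERNAME> --server=<ip-address> --tanzu-kubernetes-cluster-namespace test-ns-1 --tanzu-kubernetes-cluster-name tkg-cluster-01
kubectl config use-context tkg-cluster-01
kubectl get nodes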
That is all for now. I hope this brief article gives you a fair insight into VMware vSphere with Tanzu. Thanks for reading. See you soon. Till then, keep reading!