Abhishek Shukla

VMware NSX-V vs NSX-T

Updated: Jan 11, 2021

Virtualization as a technology has done wonders and continues to amaze users. It has changed the way data centers are built today. Virtualization made it possible for a single physical server to run multiple operating systems (OS) instead of one, by introducing a virtualization layer called the hypervisor. With this evolution, the traditional data center got a new name, the Software-Defined Data Center (SDDC), and the related components became software-defined too, e.g. SDS (Software-Defined Storage) and SDN (Software-Defined Networking).

SDN is an architecture that makes traditional networking flexible and virtualized for virtual platforms, leveraging OSI layer concepts. SDN improves network control with the ability to change rapidly, and brings scalability and agility. It is indeed a dynamic technology for the modern world of virtualization.

In this article, I will discuss VMware's SDN offering, called NSX. VMware is one of the biggest names in the virtualization space and has contributed a wide range of products. NSX is one of them and keeps evolving with the changing standards of the modern SDDC era.

What is NSX?

As per VMware's definition of NSX,

"VMware NSX® Data Center is a complete Layer 2–7 network virtualization and security platform that enables the virtual cloud network, a software-defined approach to networking that extends across data centers, clouds and application frameworks. "

NSX is the successor of vCNS (VMware vCloud Networking and Security) and Nicira NVP.

(VMware later acquired Nicira and stepped into software-defined networking and network function virtualization.)

NSX replicates the network functions of physical devices. It recreates routers, switches, firewalls and load balancers in software so that the respective protocols run the same way as they did on physical devices. Traditionally, we could see the ports on switches and routers; now it is all virtual. I will talk more about this in the components section of this article.



Microsegmentation

As a simple definition, micro-segmentation breaks a network segment into smaller chunks of segments.

As an explanation, let's take the traditional approach, where access control lists (ACLs) were written at the switch level to decide the traffic flow, with a firewall in between for additional control. With that approach, giving or controlling access between networks meant deploying a physical router or an edge device, which added complexity to the existing network and was not as fast as a virtual environment expects. NSX made it simple, fast, convenient and, most importantly, easy to manage. NSX micro-segmentation is a concept that uses the distributed firewall (DFW), which is built into the ESXi kernel, and the rules are pushed down to the VMs.

Network interactions for IP addresses, MAC addresses, VMs and their applications are secured by security policies, which are all defined in the DFW. The policies can be configured using objects, e.g. Active Directory groups. Each VM can have its own individual policy, group policy, default policy and even a global policy.

The DFW allows segmentation of the data center at the VM level. Policies can be based on IP, MAC, port groups, port numbers, names, VM-to-VM, switch-to-switch and so on. It is designed for east-west traffic.
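To make the idea concrete, here is a minimal conceptual sketch in Python of how a DFW-style rule set is evaluated (match source, destination and service, first match wins). The rule fields and the evaluate_flow helper are illustrative only, not the actual NSX rule schema or API.

```python
# Conceptual sketch only: mirrors the idea of a distributed firewall rule table,
# not the real NSX rule schema or API.

DFW_RULES = [
    {"name": "allow-web-to-app", "src": "Web-Tier", "dst": "App-Tier",
     "service": "TCP/8443", "action": "ALLOW"},
    {"name": "block-web-to-web", "src": "Web-Tier", "dst": "Web-Tier",
     "service": "ANY", "action": "DROP"},
    {"name": "default-deny", "src": "ANY", "dst": "ANY",
     "service": "ANY", "action": "DROP"},
]

def evaluate_flow(src_group: str, dst_group: str, service: str) -> str:
    """Return the action of the first rule that matches the flow (first-match wins)."""
    for rule in DFW_RULES:
        if rule["src"] in (src_group, "ANY") \
           and rule["dst"] in (dst_group, "ANY") \
           and rule["service"] in (service, "ANY"):
            return rule["action"]
    return "DROP"  # implicit default deny

print(evaluate_flow("Web-Tier", "App-Tier", "TCP/8443"))  # ALLOW
print(evaluate_flow("Web-Tier", "Web-Tier", "TCP/22"))    # DROP
```

Because this evaluation happens in the ESXi kernel on every host, the policy travels with the VM rather than living on an external device.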

The Edge firewall is another component of micro-segmentation that addresses the security requirements for north-south traffic, e.g. DMZs based on IP/VLAN, tenant-to-tenant isolation, NAT and user-based SSL VPNs.

If a VM migrates to another ESXi host, from one network to another, the access rules and security policies do not change with the new location, since they are applied at the VM level. As an example, consider a few VMs on different tiers, i.e. Web and App. Each VM has its own access and firewall policy set at the VM level and does not need to consult external routers for access decisions. This is a classic example of the improved flexibility and automated security you get with NSX. NSX is especially useful for large virtual environments and cloud providers.

There are two editions of NSX: NSX for vSphere (NSX-V) and NSX Transformers (NSX-T).


1. NSX-V

NSX-V is integrated with VMware vSphere and relies on VMware vCenter for its management. It is specific to vSphere environments (based on ESXi). NSX-V uses the VXLAN encapsulation protocol (MAC over IP), where L2 frames are encapsulated in L3 packets, and it requires an MTU of 1600 bytes or more on the underlying network.
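As a quick back-of-the-envelope check of that MTU requirement, the sketch below adds up the VXLAN encapsulation overhead. The exact numbers assume IPv4 and no outer VLAN tag.

```python
# VXLAN overhead per encapsulated frame (IPv4, no outer VLAN tag assumed).
outer_ethernet = 14   # outer MAC header
outer_ip       = 20   # outer IPv4 header
outer_udp      = 8    # outer UDP header (dst port 4789)
vxlan_header   = 8    # VXLAN header carrying the 24-bit VNI

overhead = outer_ethernet + outer_ip + outer_udp + vxlan_header
print(overhead)          # 50 bytes
print(1500 + overhead)   # 1550 -> hence the recommendation of an MTU of 1600 or more
```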


2. NSX-T

NSX-T is the next innovation after NSX-V. It was designed for heterogeneous platforms and multi-hypervisor environments. Unlike NSX-V, NSX-T brings the network virtualization stack to KVM, container technologies such as Docker and Kubernetes, OpenStack and even AWS native workloads. Best of all, it no longer depends on vCenter and can be deployed without it. NSX-T uses GENEVE* encapsulation instead of VXLAN. GENEVE builds on the VXLAN idea but adds a flexible, extensible header and needs an MTU of 1700 bytes or more; the extensible options allow additional context to be carried with the packet for processing such as data tracking, encryption and security at the data transport layer.

*GENEVE is a protocol co-developed by VMware in collaboration with Intel, Red Hat and Microsoft, and it builds on concepts from VXLAN and NVGRE.
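Extending the same arithmetic as the VXLAN sketch above: GENEVE's base header is also 8 bytes, but it can carry variable-length option TLVs, which is why the guidance is a larger MTU. The option size below is an illustrative assumption, not a fixed value.

```python
# GENEVE overhead: same outer headers, but the GENEVE header can grow with option TLVs.
outer_headers  = 14 + 20 + 8   # outer Ethernet + IPv4 + UDP (dst port 6081)
geneve_base    = 8             # fixed part of the GENEVE header
geneve_options = 32            # variable-length metadata TLVs (illustrative assumption)

total = 1500 + outer_headers + geneve_base + geneve_options
print(total)   # 1582 with these options; an MTU of at least 1700 leaves headroom
```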

Components of NSX:

NSX Manager, NSX Controllers and NSX Edge are the three main components of NSX.


NSX Manager is the main component of the NSX management plane; the entire management of virtual networking is its responsibility. NSX Manager is deployed as a VM from an OVA template. NSX Manager for NSX-V can work with only one vCenter Server, whereas NSX Manager for NSX-T is deployed as an ESXi or KVM VM and can work with multiple vCenter Servers at once.

The operating system of the NSX Manager for NSX-V is based on Photon OS (similar to the vCenter Server Appliance), whereas the NSX-T Manager runs on Ubuntu.
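Because NSX Manager is the management-plane entry point, most day-to-day information can be pulled from its REST API. The sketch below is a minimal example assuming the NSX-T Manager endpoint /api/v1/cluster/status with basic authentication; the manager FQDN and credentials are made up, and endpoint names and response fields can differ between releases, so check the API guide for your version.

```python
# Minimal sketch: query the NSX-T Manager REST API for cluster status.
# Endpoint path, credentials and response fields are assumptions to verify
# against the API guide for your NSX version.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")            # hypothetical credentials

resp = requests.get(f"{NSX_MANAGER}/api/v1/cluster/status",
                    auth=AUTH, verify=False, timeout=30)
resp.raise_for_status()
status = resp.json()
print(status.get("mgmt_cluster_status", {}))    # overall management cluster health
```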

NSX Controllers: The NSX Controller is a distributed control system used for overlay tunneling and for controlling virtual networking. NSX Controllers are deployed as a cluster of three or more nodes, each running as a VM on ESXi or KVM hypervisors. The main job of the NSX Controller is to control all logical switching within the network; it holds information about VMs, hosts, logical switches and VXLANs.

NSX Edge is a gateway service that connects the virtual network to the physical world. NSX Edge can be installed as a distributed logical router (DLR) or as an Edge Services Gateway, and it provides routing capabilities for east-west and north-south traffic. Dynamic routing, firewalling, Network Address Translation (NAT), Dynamic Host Configuration Protocol (DHCP), Virtual Private Network (VPN), load balancing and high availability are all handled by NSX Edge.

Architecture of NSX

The NSX architecture defines and reproduces L2-L7 networking functions and services, i.e. switching, routing, firewalling and load balancing, on a virtual platform. The architecture has a built-in separation of the management, control and data planes, while still using traditional protocols.

NSX Manager manages and serves as the entry point for network virtualization. It provides the network and security interface that enables administrators to configure and control NSX functionality, and it handles controller deployment and host preparation. Next, I will explain how switching, routing and data flow happen in an NSX environment.

Logical Switching

Before diving into switching, I would like to discuss Transport Nodes (TN) and virtual switches. These are considered NSX's data-forwarding components.

A Transport Node is simply an NSX-compatible device that participates in traffic transmission and overlay networking. A node must contain a host switch to serve as a transport node; e.g. the VMware vSphere Distributed Switch (VDS) is required for NSX-V, and the standard switch (vSS) is not compatible for this purpose. For NSX-T, you need to deploy the N-VDS*.

*N-VDS is a software component of NSX-T on a TN that forwards traffic and owns at least one physical NIC (pNIC). The N-VDS instances on different transport nodes are independent of each other, but they can be grouped by assigning the same name for centralized management.

Transport Zone

A transport zone is a grouping of ESXi hosts that can communicate with each other across the physical network. This communication happens with the help of VTEPs. A VTEP (VXLAN Tunnel EndPoint) is an interface on the ESXi host, and each host has at least one VTEP. Transport zones exist in both NSX-V and NSX-T.

For NSX-V, a transport zone defines the span of VXLAN communication only. For NSX-T, because of GENEVE encapsulation, there are two types of transport zones: Overlay and VLAN.
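To visualise how a transport zone and VTEPs relate, here is a small conceptual model in Python. The host names, VTEP addresses and zone names are made up; the point is simply that two hosts can tunnel traffic to each other only if they share a transport zone.

```python
# Conceptual model: a transport zone is a span of hosts whose VTEPs may tunnel
# to each other. Host names, VTEP IPs and zone names are illustrative only.

TRANSPORT_ZONES = {
    "tz-overlay-prod": {"esxi-01": "10.10.10.11",   # host -> VTEP IP
                        "esxi-02": "10.10.10.12",
                        "esxi-03": "10.10.10.13"},
    "tz-vlan-dmz":     {"esxi-04": "10.10.20.14"},
}

def tunnel_endpoints(zone: str, src_host: str, dst_host: str):
    """Return the (source VTEP, destination VTEP) pair if both hosts are in the zone."""
    members = TRANSPORT_ZONES.get(zone, {})
    if src_host in members and dst_host in members:
        return members[src_host], members[dst_host]
    raise ValueError(f"{src_host} and {dst_host} do not share transport zone {zone}")

print(tunnel_endpoints("tz-overlay-prod", "esxi-01", "esxi-03"))
# ('10.10.10.11', '10.10.10.13')
```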

How switching actually happens in a virtual environment:

When two VMs residing on different hosts communicate directly, the unicast traffic is exchanged in encapsulated form between the two VTEPs, without flooding. However, if a VM does not know the MAC address of the destination network interface, the switch must flood the frame out of all its ports. This means the same traffic must be sent to all VMs connected to the same logical switch, and if those VMs reside on different hosts, the traffic must be replicated to those hosts. Broadcast, unknown unicast and multicast traffic is collectively known as BUM traffic.

For BUM replication, NSX-V supports Unicast mode, Multicast mode and Hybrid mode.

Logically, when VM1 sends an ARP request to learn the MAC address of VM2, the ARP request is intercepted by the logical switch. If the switch already has the ARP entry for the target network interface of VM2, the switch sends the ARP response to VM1. Otherwise, the switch sends the ARP request to an NSX Controller. If the NSX Controller has the VM's IP-to-MAC binding, it replies with that binding and the logical switch then sends the ARP response to VM1. If there is no ARP entry on the NSX Controller either, the ARP request is re-broadcast on the logical switch.
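The ARP suppression flow above can be summarised in a few lines of Python. The table names and lookup order follow the description in the paragraph; they are a simplification for illustration, not NSX's actual data structures.

```python
# Simplified model of ARP suppression on a logical switch: check the local ARP
# cache, then ask the controller, and only re-broadcast as a last resort.

LOCAL_ARP_CACHE = {"172.16.10.12": "00:50:56:aa:bb:02"}       # learned on this host
CONTROLLER_ARP_TABLE = {"172.16.10.13": "00:50:56:aa:bb:03"}  # cluster-wide view

def resolve_arp(target_ip: str) -> str:
    """Resolve an ARP request the way the text describes: cache, controller, flood."""
    if target_ip in LOCAL_ARP_CACHE:
        return f"reply from logical switch: {LOCAL_ARP_CACHE[target_ip]}"
    if target_ip in CONTROLLER_ARP_TABLE:
        mac = CONTROLLER_ARP_TABLE[target_ip]
        LOCAL_ARP_CACHE[target_ip] = mac          # cache the controller's answer
        return f"reply via NSX Controller: {mac}"
    return "no binding known: re-broadcast ARP request on the logical switch"

print(resolve_arp("172.16.10.12"))
print(resolve_arp("172.16.10.99"))
```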

Elaborated steps (Multicast mode example):

  1. VM1 generates a BUM frame.

  2. ESXi-1 VXLAN-encapsulates the original frame (a byte-level sketch of this encapsulation follows the list).

  3. The destination IP address in the outer IP header is set to 239.1.1.1 (multicast IP address) and the multicast packet is sent into the physical network.

  4. The L2 switch receiving the multicast frame performs replication. If IGMP snooping is configured on the switch, it can replicate the frame only to the relevant interfaces connecting to ESXi-2 and the L3 router. If IGMP snooping is not enabled or supported, the L2 switch treats the frame as an L2 broadcast and replicates it to all interfaces belonging to the same VLAN as the port on which the packet was received.

  5. The L3 router performs L3 multicast replication and sends the packet into the transport subnet B.

  6. The L2 switch behaves similarly to what was discussed in step 4 and replicates the frame.

  7. ESXi-2 and ESXi-3 decapsulate the received VXLAN packets, exposing the original Ethernet frames, which are then delivered to VM2 and VM3.
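Steps 2 and 3 can be made concrete at the byte level. The sketch below wraps an original frame in outer Ethernet/IPv4/UDP/VXLAN headers and addresses the outer packet to the 239.1.1.1 multicast group; the MAC and IP values are illustrative, and IP/UDP checksums are left at zero to keep it short.

```python
# Byte-level sketch of steps 2-3: wrap the original frame in outer
# Ethernet/IPv4/UDP/VXLAN headers addressed to the 239.1.1.1 multicast group.
# Addresses are illustrative and checksums are left at zero for brevity.
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int,
                      src_mac: bytes, dst_mac: bytes,
                      src_ip: bytes, dst_ip: bytes) -> bytes:
    vxlan_hdr = struct.pack("!II", 0x08000000, vni << 8)       # I-flag set, 24-bit VNI
    udp_len = 8 + len(vxlan_hdr) + len(inner_frame)
    udp_hdr = struct.pack("!HHHH", 49152, 4789, udp_len, 0)    # dst port 4789 = VXLAN
    ip_hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + udp_len,
                         0, 0, 64, 17, 0, src_ip, dst_ip)      # proto 17 = UDP
    eth_hdr = dst_mac + src_mac + struct.pack("!H", 0x0800)    # EtherType IPv4
    return eth_hdr + ip_hdr + udp_hdr + vxlan_hdr + inner_frame

outer = vxlan_encapsulate(
    inner_frame=b"\x00" * 60,                                  # stand-in for VM1's frame
    vni=5001,
    src_mac=bytes.fromhex("005056000001"),                     # ESXi-1 VTEP MAC
    dst_mac=bytes.fromhex("01005e010101"),                     # multicast MAC for 239.1.1.1
    src_ip=bytes([10, 10, 10, 10]),                            # ESXi-1 VTEP IP
    dst_ip=bytes([239, 1, 1, 1]))                              # multicast group
print(len(outer) - 60)   # 50 bytes of VXLAN encapsulation overhead
```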

Logical Routing

Routing is required whenever network traffic flows from one network to another, i.e. inter-network traffic; e.g. if VM1 needs to talk to VM2 on another network, routing is a must. There are two types of traffic in the routing model.

  1. East-west traffic: EW traffic refers to data transferred over the network within the data center.

  2. North-south traffic: NS traffic refers to data flowing between the data center and locations outside it (external networks).

DLR: The distributed logical router is a virtual router that can use static routes as well as dynamic routing protocols, i.e. BGP, OSPF or IS-IS.

How routing happens:

NSX uses the DLR, and each interface that belongs to the DLR is called a LIF (logical interface). A routing kernel module runs on each hypervisor to perform routing between LIFs.

Let's consider that VM1 is in one segment and VM2 is in another. In order to communicate with each other, each VM should be connected to a distributed logical router (DLR), which is in turn connected to external networks via edge gateways (NSX Edge) running the required routing protocols. The NSX Edge gateway connects isolated, stub networks to shared (uplink) networks by providing common gateway services such as DHCP, VPN, NAT, dynamic routing and load balancing. Common NSX Edge deployments include the DMZ, VPN extranets and multi-tenant cloud environments, where the NSX Edge creates virtual boundaries for each tenant.

If traffic from VM1 needs to reach a network outside the NSX domain, it must pass through the NSX Edge gateway; traffic between VM1's segment and VM2's segment, on the other hand, is routed by the DLR in the hypervisor kernel, as the following walk-through shows.



VM1 sends a packet to VM2, which is connected to a different VXLAN segment. The packet is sent to VM1's default gateway interface, located on the local DLR (172.16.10.1). A routing lookup is performed on the local DLR, which determines that the destination subnet is directly connected to DLR LIF2. A lookup is then performed in the LIF2 ARP table to determine the MAC address associated with VM2's IP address. If the ARP information is not available, the DLR generates an ARP request on VXLAN 5002 to determine the required mapping. An L2 lookup is performed in the local MAC table to determine how to reach VM2. The original packet is VXLAN-encapsulated and sent to the VTEP of ESXi-2 (20.20.20.20). ESXi-2 decapsulates the packet and performs an L2 lookup in the local MAC table associated with the VXLAN 5002 segment. The packet is finally delivered to the destination VM2.
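The walk-through above can be condensed into a small routing sketch. The LIF, ARP and MAC-to-VTEP tables reuse the figures from the example (172.16.10.1 on LIF1, VXLAN 5002 behind LIF2, VTEP 20.20.20.20); LIF2's subnet and VM2's addresses are assumed for illustration, and the whole thing is a simplification of the DLR kernel module, not its real implementation.

```python
# Condensed model of the DLR walk-through: routing lookup on the local DLR,
# ARP lookup on the egress LIF, then MAC-to-VTEP lookup for VXLAN forwarding.
# LIF2's subnet (172.16.20.0/24) and VM2's addresses are assumptions.
import ipaddress

LIF_TABLE = {                       # directly connected LIFs on the DLR
    "LIF1": {"subnet": ipaddress.ip_network("172.16.10.0/24"), "vxlan": 5001},
    "LIF2": {"subnet": ipaddress.ip_network("172.16.20.0/24"), "vxlan": 5002},
}
ARP_TABLE = {"172.16.20.11": "00:50:56:aa:bb:22"}       # VM2 IP -> VM2 MAC
MAC_TO_VTEP = {"00:50:56:aa:bb:22": "20.20.20.20"}      # VM2 MAC -> ESXi-2 VTEP

def route_packet(dst_ip: str):
    """Return (egress LIF, VXLAN segment, next hop) for a destination IP."""
    addr = ipaddress.ip_address(dst_ip)
    for lif, info in LIF_TABLE.items():
        if addr in info["subnet"]:                       # routing lookup: connected route
            mac = ARP_TABLE.get(dst_ip)                  # ARP lookup on the egress LIF
            if mac is None:
                return lif, info["vxlan"], "ARP request needed on this VXLAN"
            return lif, info["vxlan"], MAC_TO_VTEP[mac]  # L2 lookup -> remote VTEP
    return None, None, "no connected route: send to the DLR's next hop (NSX Edge)"

print(route_packet("172.16.20.11"))   # ('LIF2', 5002, '20.20.20.20')
```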

Comparison in conclusion





"An investment in knowledge
always pays the best interest"
                      -             
Benjamin Franklin
