Installing NSX-T 2.1 Management and Control Plane

I am sure many out there (including myself) are eagerly awaiting the GA release of VMware PKS. In late December 2017 VMware released NSX-T 2.1, which will provide the network virtualization in PKS. NSX-T is the multi-cloud and multi-hypervisor counterpart to NSX for vSphere. NSX-T can coexist with NSX for vSphere because NSX-T has a separate management plane which does not interact with vCenter, enabling you to build NSX-T compute clusters under the same vCenter as clusters managed by NSX for vSphere. This post will walk through the initial installation of the Management and Control Planes and will hopefully be the first in a series about designing and configuring NSX-T for container workloads.

Before you begin, some important points for NSX-T:

  1. Read the VMware NSX-T 2.1 Installation Guide and plan the deployment
  2. I highly recommend that you check out Hany Michael's Kubernetes in the Enterprise: The Design Guide; this is an absolutely awesome write-up on deploying Kubernetes on vRA leveraging NSX-T, full of design guidance for NSX-T
  3. IPv6 is not supported
  4. When deploying on the ESXi hypervisor, NSX-T does not support the Host Profiles and Auto Deploy features
  5. If performing an installation on nested ESXi, please note that the NSX Edge VM supports only the VMXNET 3 vNIC
  6. If an encryption rule is applied on a hypervisor, the virtual tunnel endpoint (VTEP) interface minimum MTU size must be 1700; an MTU size of 2000 or greater is preferred.
  7. You must enable SSH access on all hypervisors that will be running NSX-T.
  8. An NSX Manager must have a static IP address. You cannot change the IP address after installation.
  9. NSX Manager VM Configuration: the OVA contains three deployment options: Small, Medium and Large; Small is for Lab/PoC and Medium and Large are for Production. I have not found any official sizing guidance for "Medium" vs "Large"; the only difference is the allocated resources, which can be changed as required, so start with a Medium deployment, monitor load on the NSX Manager and increase as required.
  10. Control Plane clusters can contain only one or three members; no other configuration is possible (e.g. you cannot have a 5-node Controller Cluster)
  11. The OVF Template deployment does not work from the vSphere Client (H5) and fails with "Invalid value 'true/false' specified for property nsx_IsSSHEnabled". I hit this issue deploying on vCenter 6.5 (Build 7312210); it appears to be an issue with the case of the Boolean value that the H5 client passes. Solution: the deployment must be made using the vSphere Web Client (Flash) or the OVF Tool (see the ovftool sketch below)
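
If you need to fall back to the OVF Tool, the deployment looks roughly like the sketch below. The nsx_* property names (including their case) and the deployment option IDs are how I recall them from the 2.1 OVA, so treat them as assumptions: probe the OVA first with ovftool to list the exact names, and substitute your own VM name, datastore, network, addresses and target cluster.

    # Probe the OVA to list its deployment options and OVF properties
    ovftool nsx-unified-appliance-2.1.0.0.0.7395503.ova

    # Example deployment (all values below are placeholders; passwords are prompted for
    # or can be supplied with the additional --prop: arguments listed by the probe above)
    ovftool --acceptAllEulas --noSSLVerify --powerOn \
      --name=nsx-manager-01 --datastore=DS01 --diskMode=thin \
      --network="VM Network" --deploymentOption=medium \
      --prop:nsx_hostname=nsx-manager-01 \
      --prop:nsx_ip_0=192.168.1.10 --prop:nsx_netmask_0=255.255.255.0 \
      --prop:nsx_gateway_0=192.168.1.1 \
      --prop:nsx_role=nsx-manager \
      --prop:nsx_isSSHEnabled=True \
      nsx-unified-appliance-2.1.0.0.0.7395503.ova \
      "vi://administrator@vsphere.local@vcenter.lab.local/Datacenter/host/Cluster"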

Step 1. Download the NSX Manager and NSX Controller OVAs from My VMware
Step 2. Deploy the NSX Manager VM

  1. Open the vSphere Web Client and deploy the OVF Template (nsx-unified-appliance-2.1.0.0.0.7395503.ova)
  2. Provide the standard details (VM Name, Location, Storage, Networking)
  3. At the Select Configuration screen select Medium and click Next
  4. At the Customize template screen enter valid values for the appliance, ensuring the Role Name is set to nsx-manager, then click Finish and Power On the appliance

NOTE: It is recommended that SSH be enabled on the appliance to ease configuration and troubleshooting, if your security policy allows it; it can be disabled once the configuration is complete.

  5. After the installation completes, browse to https://<address of appliance>/, logon using the admin credentials provided during template customization and agree to the EULA.
  6. Select System > Configuration > License and install your NSX License

Step 3. Configuring Automatic Backups

The following outlines the process for setting up an automated backup of the NSX-T Manager. In addition to backing up the virtual machines/appliances, it is recommended that application-level backups are taken at a regular interval, as they consume very little space but can enable rollback in the event of an unauthorized or accidental misconfiguration. NSX-T provides an automated backup method to an SFTP server on a configurable schedule to allow restore points to be created. Before we begin, a couple of points:

  1. SFTP is the only configurable protocol in the GUI
  2. The target SFTP server must be configured to serve an ECDSA Key in order for the supported SSH thumbprint validation to work (requires a SHA-256 fingerprint generated from the ECDSA Key)
  3. The schedule allows you to set how often backups are taken but not when; i.e. you can't set it to run at 1:00pm, only to run every 5 minutes, for example

1. Logon to your SSH/SFTP server and determine the thumbprint for the server using ssh-keygen -lf <keyfilename>
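
For example, on a Linux SFTP server running OpenSSH (the host key path below is the OpenSSH default; adjust for your distribution):

    # Print the SHA-256 fingerprint of the server's ECDSA host key
    ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub -E sha256

    # If the server does not yet have an ECDSA host key, generate one (no passphrase) and restart sshd
    ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -N ""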

2. Logon to the NSX-T Manager and select System > Utilities > Backup from the menu and click Edit

3. Check Enabled next to Automatic Backup and enter the details for the SFTP server and set the Passphrase to encrypt the backups; select the schedule tab and adjust the frequency as required and click Save

4. A backup will complete after the settings are saved; review the Output and ensure that the backups completed successfully.

Now when you or someone else breaks your Lab/Production you might have a way back to a known good state quickly.

Step 4. Install the NSX Controller Plane

The NSX Controller is an advanced distributed state management system that provides control plane functions for NSX-T logical switching and routing. It provides the control plane that distributes network information to the hypervisors, removes the VXLAN dependency on multicast routing/PIM in the physical network and provides suppression of ARP broadcast traffic in VXLAN networks. The Control Plane can be installed as a single VM or as a cluster of three Controller appliances.

NOTE: The Controller appliances are deployed as 4 vCPU, 16GB RAM (fully reserved) by default; for lab deployments, if you don't have the resources, 1 vCPU/4GB RAM should be sufficient to get you going.

1. Deploy the nsx-controller-2.1.0.0.0.7395493.ova OVA

2. If you are deploying a three-node Controller Cluster, repeat this for another two appliances

3. Logon to the NSX Manager via SSH/CLI and execute get certificate api thumbprint

4. Now logon to the Controller nodes from the CLI/SSH console and execute the following to join each node to the NSX-T Manager: join management-plane <NSX Manager address> username admin thumbprint <thumbprint>
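
As a quick sketch of that exchange (10.0.0.10 is a placeholder for your NSX Manager address; you will be prompted for the Manager admin password):

    # On the NSX Manager (SSH/console): capture the API certificate thumbprint
    get certificate api thumbprint

    # On each Controller node: join it to the management plane using the thumbprint from above
    join management-plane 10.0.0.10 username admin thumbprint <thumbprint>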

Verify that the Controller Cluster nodes are Up and showing Manager connectivity from the NSX-T Manager (System > Overview)

5. Logon to your first Controller Appliance via SSH/CLI and set the cluster shared secret by executing set control-cluster security-model shared-secret and then initialize the cluster by executing initialize control-cluster 

You can verify the Cluster by executing get control-cluster status verbose and ensure that the node is reporting as master, is in majority, has a status of active, and the Zookeeper Server IP is reachable, ok.
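
Putting step 5 together, the commands on the first controller are (the shared secret is a value you choose and supply when setting the security model):

    # On the first Controller appliance
    set control-cluster security-model shared-secret    # set the cluster shared secret
    initialize control-cluster

    # Verify: the node should report as master, in majority, status active,
    # and the Zookeeper Server IP as reachable, ok
    get control-cluster status verbose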

6 (Optional – only if you are deploying a multi-node cluster). Logon to the secondary Controller Appliances via SSH/CLI and set the cluster shared secret by executing set control-cluster security-model shared-secret and record the output of get control-cluster certificate thumbprint

Next, logon to the master NSX Controller configured in Step 5 and execute join control-cluster <Cluster Node X IP> thumbprint <Cluster Node X Thumbprint> for each of the controllers

 Verify that the control nodes are all showing as active by executing get control-cluster status  

Finally, logon to each of the non-primary Controller appliances via SSH/CLI and execute activate control-cluster (ensure that each controller has been activated successfully before moving to the next)
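
Pulling step 6 together, with 10.0.0.21 and 10.0.0.22 as placeholder addresses for the two secondary controllers, the flow looks roughly like this:

    # On each secondary Controller (use the same shared secret as the first node)
    set control-cluster security-model shared-secret
    get control-cluster certificate thumbprint    # record this value for each node

    # On the master Controller (the one initialised in step 5), once per secondary
    join control-cluster 10.0.0.21 thumbprint <thumbprint-of-10.0.0.21>
    join control-cluster 10.0.0.22 thumbprint <thumbprint-of-10.0.0.22>

    # Back on each secondary, activate it (wait for each activation to complete before the next)
    activate control-cluster

    # Finally, from any node
    get control-cluster status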

7. Verify that the cluster is online by executing get control-cluster status verbose on any node or from the NSX-T Manager (System > Overview)

8. Set up anti-affinity rules to ensure that the Control Plane cluster nodes do not run on the same host
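
This can be done from the vSphere Web Client (Cluster > Configure > VM/Host Rules > Add, rule type "Separate Virtual Machines"), or scripted. As a rough sketch using govc, the flags are from memory and the controller VM names are placeholders, so confirm with govc cluster.rule.create -h before relying on it:

    # Create a VM anti-affinity ("separate virtual machines") rule for the three controllers;
    # assumes GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD are already set in the environment
    govc cluster.rule.create -cluster NSX -name nsx-controller-anti-affinity \
      -enable -anti-affinity nsx-controller-01 nsx-controller-02 nsx-controller-03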

Next I will look at configuring an NSX-T Fabric and configuring Edge Clusters and Hosts. Happy New Year.

NSX – BEWARE ! VTEP Port can NOT be used by ANY virtual machine on an ESXi host running NSX

So I would call this a bug/design issue, however VMware have only just noted it in a KB. BEWARE of the use of the VXLAN Tunnel End Point port (UDP 8472 or 4789 by default) by ANY virtual machine that is hosted on an NSX-prepared cluster (regardless of whether it is on a VXLAN segment or a standard Port Group), as the traffic will be dropped by the host with a VLAN mismatch. This affects all outbound traffic (i.e. connections from machines inside ESXi with a destination port that matches the VTEP port, e.g. UDP 4789).
VMware today updated KB2079386 to state "VXLAN port 8472 is reserved or restricted for VMware use, any virtual machine cannot use this port for other purpose or for any other application." This was the result of a very long-running support call involving a VM on a VLAN-backed Port Group which was having its traffic on UDP 8472 silently dropped without explanation. The KB is not quite accurate; it should read "the VTEP port is reserved or restricted for VMware use, any virtual machine cannot use this port for other purpose or for any other application", because the hypervisor will drop outbound packets with the destination set to the VTEP port regardless of whether it is 8472, 9871 or anything else.

Why is this an issue?

The VXLAN standard (initially described in RFC 7348) has been implemented by a number of vendors for things other than NSX; one such example is physical Sophos wireless access points, which use the VXLAN standard for "Wireless Zones" and communicate with the Sophos UTM (which can be a virtual machine) on UDP port 8472. If the UTM is running on a host that has NSX deployed it simply won't work, even if it is running on a Port Group that has nothing to do with NSX.

There are surely other products using this port, which begs the question: as a cloud provider, or even as an internal IT group, how do you explain to customers that they can't host that product on ESXi if NSX is deployed, even when the traffic is not on the VLAN carrying the VXLAN encapsulation?!? The feedback from VMware support regarding this issue has been that these are reserved ports and should not be used…

What's going on?


As a proof of concept I ran up the following lab:

  • Host is ESXi 6.5a running NSX 6.3
  • My VTEP Port is set to 4789 (UDP)
  • NSX is enabled on cluster “NSX”
  • I have a VM "VLANTST-INSIDE" running on host labesx1 (which is part of the NSX cluster) on dvSwitch Port Group "dv-VLAN-101", which is a VLAN-backed (non-VXLAN) Port Group
  • I have a VM “VLANTEST” running outside of ESXi on the same VLAN

With a UDP Server running on the test machine inside ESXi on UDP 4789 the external machine can connect without dramas:

When the roles are reversed the behaviour changes; with a UDP Server running on the machine external to ESXi on UDP 4789, the initial connection can be seen but no traffic is observed:
 
When attempting the same test on any other port, there are no issues.
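
If you want to reproduce the test yourself, netcat is the simplest tool (exact flags vary between netcat implementations; the addresses and port below are placeholders matching my lab):

    # On the listening machine, start a UDP server on the VTEP port
    nc -u -l 4789

    # On the sending machine, push some test traffic at it
    echo "test" | nc -u <listener-ip> 4789

    # Repeat with a non-VTEP port (e.g. 4790): that traffic arrives,
    # while traffic sent from inside the NSX-prepared host to UDP 4789 does not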

So if we run pktcap-uw --capture Drop on the ESXi host labesx1.pigeonnuggets.com, we can see that the packets are being dropped by the Hypervisor with the Drop Reason 'VlanTag Mismatch'.

It appears that the network stack is inspecting packets destined for the VTEP UDP port and filtering them if they are not on the VLAN which is carrying VXLAN, regardless of whether the payload is actually VXLAN; if the destination port number is the VTEP port, the packet is treated as VXLAN and will be dropped.

What are the options?

So the only option I have found to resolve this is to change your VTEP port, which is not ideal, but there are not really many options at this time. So if a product is conflicting, logon to vCenter and select Networking & Security > Installation > Logical Network Preparation > VXLAN Transport > Change

This is a non-disruptive change and won’t affect your VXLAN payloads. Hopefully this will be fixed at some point….

vCloud Director Edge Gateway – High Availability

Ok so this is just a quick write-up explaining at a high level the process of enabling the High Availability feature of an Edge Gateway in vCloud Director and some things that you should know if deploying them. vCloud Edges are fundamentally the same as NSX Edges, however they are controlled by vCloud Director and are nowhere near as feature-rich as full NSX Edges (although they are catching up). They do however have the High Availability flag exposed, allowing for device redundancy that is pretty essential for these devices. When enabled, if a fault occurs and the Edge crashes or becomes unavailable, a redundant device will seamlessly take over after 15 seconds.

How do I enable it?

To enable High Availability on an Edge Gateway, open the Properties of the Edge Gateway (Administration > Org VDC > Edge Gateways) and select Enable High Availability

How does it work?

Edge Gateway peers exchange heartbeat messages with each other using one of the internal interfaces; this is important, as at least one internal interface/network must be configured (discussed later). vCloud Director does not expose the HA configuration parameters, so the NSX 6.3.0 default dead time of 15 seconds applies, which means that in the event of a failure it takes 15 seconds for the secondary to take over.

When they are deployed, the process happening behind the scenes is:

  1. vCloud Director makes an API call to NSX to enable High Availability on the Edge
  2. NSX deploys a second Edge under the System vDC Resource Pool (vse-EdgeGatewayName-1), which will initially be named based on the Edge ID in NSX
  3. The Edge is set up and Powered On in vCenter
  4. Finally the Edge is renamed in vSphere and there will be two Edges in the System vDC in vCenter, labelled "-0" and "-1"

Once HA has been enabled, it is important to note that this does not mean it is actually doing anything; there may be two VMs deployed, but that by itself means nothing. The HA Status of the Edge Gateway has three possible values:

  • Disabled
  • Not Active – This means that the High Availability checkbox has been checked but HA is disabled (discussed later)
  • Up – When HA is actually configured correctly  

Until an Org VDC Network is added, the secondary node just sits there unconfigured, consuming CPU cycles.

After an Org VDC Network that is set as "Create a routed network through an existing Edge Gateway" is added, the status will change from Not Active to Up.

At this point HA will be operating.

How do I verify that it is running?

Logon to the Edge Gateway console and from the CLI execute show service highavailability; this will show the status of the node (Standby for the non-active node and Active for the current master) as well as the status of the cluster and the configuration.
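
For example (run from the console of either Edge appliance; the commentary reflects the behaviour described above and below):

    # On the Edge Gateway console
    show service highavailability
    # The current master should report itself as Active and its peer as Standby;
    # if the service reports Disabled, see the note below about Org VDC networks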

When a failover occurs, the surviving node will take over once the dead time expires.

When no vApp Networks are present, show service highavailability will show Disabled; while it is disabled, if the Edge dies the surviving Edge does not update its configuration and just sits there doing nothing.

High Availability does nothing unless a vApp Network is connected; why does this matter?

Ok so this seems fairly logical, right? If you just have External Networks attached to the Edge and no vApp Org Networks then you don't need High Availability… but there is a use case for an Edge Gateway with only External Networks, and the way it is displayed an admin might think that HA is working even though it is not! The reason it doesn't operate is that the heartbeating is done via the internal NICs, and if there isn't one then obviously it can't operate.

In 99% of use cases you will have a vApp Network connected to an Edge Gateway; however, Edge Gateways have a bunch of awesome network features that can be leveraged without a connection to an Org VDC network.

One such use case (which is how this post came about) is a customer consuming IaaS from vCloud who has a requirement for some physical servers on VLAN-backed physical networks to be plumbed into vCloud behind a firewall. An Edge is a great fit for this, as the customer can manage the firewall rules for the service and two Org Networks can be bound with the Edge acting as a firewall. There are other ways to do this, but an Edge is a cheap and easy way to achieve it.

So if you use Edges in this manner and require HA, create a dummy vApp Org Network (e.g. just a dummy network labelled HA-Heartbeat) and attach it to the Edge.

Summary

  • Edge Gateway HA only operates if a VDC Org Network is attached to the Edge
  • Deadtime/failover in the event of a failure is by default 15 seconds in NSX 6.2.4/NSX 6.3.0
  • If you do need it, it's pretty low maintenance; set and forget
  • Don't enable it if you don't need it; it consumes CPU and Memory