vCloud Director 9.1 – Tenant Portal displays “No Datacenters are Available” after upgrade

vCloud Director 9.1 continues to strengthen the functionality of the tenant portal and introduces new features, including upload of OVA/media without the Client Integration Plug-in. However, immediately after the upgrade I ran into an issue with the Tenant Portal displaying “No Datacenters are available”.

If you are using certificates signed by an internal CA, you may run into this issue when you upgrade. It is caused by changes to certificate enforcement in the API/Tenant Portal. The fix is straightforward: configure the public key certificate chain in Base64 format against the API/Tenant Portal Public Address settings if you haven’t already done so (like me).


Step 1. Using a browser, navigate to your vCloud Director instance and view the public key for the SSL certificate assigned to the installation (in Chrome: Developer Tools > Security > View Certificate), then select Details > Copy to File.

Step 2. Click Next, select Base-64 encoded X.509 (.CER), click Next again and save the certificate file.

Step 3. Select the Certification Path tab and repeat this process for each certificate in the path.

Step 4. Open the .cer files generated above in Notepad and paste all of the certificates into a single certificate chain.

Step 5. Log on to vCloud Director as a System Administrator (via the Flex UI), select Administration > Public Addresses and, under the API section (and the Tenant Portal section if relevant), paste the certificate chain created in Step 4 into the Certificate Chain field and click Apply

Step 6. Now refresh the Tenant Portal and everything should be in order.
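The concatenation in Step 4 can also be done on the command line. A minimal sketch; the file names (server.cer, intermediate.cer, root.cer) are placeholders for whatever you exported in Steps 1-3:

```shell
# Combine the Base64 (.cer) files exported in Steps 1-3 into one chain file,
# server certificate first, then intermediates, then the root.
cat server.cer intermediate.cer root.cer > chain.pem

# Sanity check: each certificate should appear as its own PEM block.
grep -c 'BEGIN CERTIFICATE' chain.pem
```

The contents of chain.pem are what gets pasted into the Certificate Chain field in Step 5.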

Thanks to jm13 on the VMTN forums and Daniel Paluszek from VMware for the information on this issue and its resolution. Hopefully VMware will publish a KB article soon.

vSAN Health PRTG Sensor

I just wanted to create a quick post about a PRTG Network Monitor sensor I created to provide some monitoring for vSAN deployed in my lab. The script is not extensively tested as I am only using it in the lab, but hopefully you will get some value from it. The sensor invokes a vSAN health check against a vSAN cluster and reports any component failures or warnings.

The sensor is available from my GitHub and uses a module from Roman Gelman (@rgelman75), VSAN.psm1, which is available from here.

Sensor Setup

  1. Download PRTG-vSphere-vSANHealth.ps1 from here and place it in C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML\ on your Probe
  2. Download the prtg.customlookups.vsan.status.ovl from here (Custom Lookup for PRTG) and place this file in C:\Program Files (x86)\PRTG Network Monitor\lookups\custom\ on your Central Probe
  3. Download VSAN.psm1 from here and place this in C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML\ on your Probe
  4. Logon to PRTG and select Setup > Administrative Tools > Load Lookup and File Lists

  5. Next, against your vCenter device, create a new sensor of type EXE/Script Advanced
  6. Enter the following parameters and then click Continue to finish the setup:

    • Sensor Name: The name of the sensor (e.g. VSAN Cluster Status – Development)
    • EXE/Script: PRTG-vSphere-vSANHealth.ps1
    • Parameters: -vCenterServer “<FQDN of your vCenter>” -vSANCluster “<Name of the VSAN Cluster>”
    • Security Context: Use Windows Credentials of parent device
    • Timeout (Sec): 300
After configuration it should start reporting the status and you can set up notifications, monitoring frequency and channel limits as desired. A few notes:

  • The User executing the scripts (the Windows account specified in PRTG) must have sufficient rights to the vCenter Server
  • PRTG runs all PowerShell scripts under the 32-bit image; this will cause PowerCLI not to load unless the modules are installed under the x86 Modules folder. The easiest way to fix this is to run: Save-Module -Name VMware.PowerCLI -Path $env:SystemRoot\SysWOW64\WindowsPowerShell\v1.0\Modules\
  • If you encounter a number of virtual machines named “vsan-healthcheck-disposable….” in your environment, this is due to the timeout value in the VSAN.psm1 module (statically set to 2 seconds) not being high enough; find any references to VsanQueryVcClusterHealthSummary and replace the integer 2 with something higher (e.g. 30 seconds)
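For anyone adapting the sensor, an EXE/Script Advanced sensor simply writes an XML payload to stdout, which PRTG parses into channels. A minimal sketch of the shape; the lookup ID matches the file installed in step 2, while the channel name, value and message text are illustrative:

```shell
# Minimal PRTG EXE/Script Advanced output. <ValueLookup> maps the numeric
# value to a status via the custom lookup (prtg.customlookups.vsan.status);
# the channel name and values here are illustrative only.
cat <<'EOF'
<prtg>
  <result>
    <channel>vSAN Health</channel>
    <value>0</value>
    <ValueLookup>prtg.customlookups.vsan.status</ValueLookup>
  </result>
  <text>All vSAN health checks passed</text>
</prtg>
EOF
```

PRTG raises an error state based on the lookup mapping, so warnings/failures only need to change the integer in `<value>`.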
The Code


Installing NSX-T 2.1 Management and Control Plane

I am sure many out there (including myself) are eagerly awaiting the GA release of VMware PKS. In late December 2017 VMware released NSX-T 2.1, which will provide the network virtualization in PKS. NSX-T is the multi-cloud and multi-hypervisor version of NSX for vSphere. NSX-T can coexist with NSX for vSphere: NSX-T has a separate management plane which does not interact with vCenter, which enables you to build NSX-T compute clusters under the same vCenter as clusters managed by NSX for vSphere. This post will walk through the initial installation of the Management and Control Planes and hopefully will be the first in a series about designing and configuring NSX-T for container workloads.

Before you begin some important points for NSX-T:

  1. Read the VMware NSX-T 2.1 Installation Guide and plan the deployment
  2. I highly recommend that you check out Hany Michael’s Kubernetes in the Enterprise: The Design Guide; this is an absolutely awesome write-up on deploying Kubernetes on vRA leveraging NSX-T, full of design guidance for NSX-T
  3. IPv6 is not supported
  4. When deploying on the ESXi hypervisor, NSX-T does not support the Host Profiles and Auto Deploy features
  5. If performing an installation on nested ESXi please note that VMXNET 3 vNIC is only supported for VM NSX Edge
  6. If an encryption rule is applied on a hypervisor, the virtual tunnel endpoint (VTEP) interface minimum MTU size must be 1700. MTU size 2000 or later is preferred.
  7. You must enable SSH access on all hypervisors that will be running NSX-T.
  8. An NSX Manager must have a static IP address. You cannot change the IP address after installation.
  9. NSX Manager VM configuration: the OVA contains three deployment options: Small, Medium and Large; Small is for Lab/PoC, Medium and Large for production. I have not found any official guidance on sizing “Medium” vs “Large”; the only difference is resources, which can be changed as required. Start with a Medium deployment, monitor load on the NSX Manager and increase as required.
  10. Control Plane clusters can contain only one or three members; no other configuration is possible (e.g. you cannot have a 5-node Controller Cluster)
  11. The OVF Template deployment does not work from the vSphere Client (H5) and fails with “Invalid value ‘true/false’ specified for property nsx_IsSSHEnabled”. I had this issue deploying on vCenter 6.5 (Build 7312210); it appears to be an issue with the case of the Boolean value that the H5 client passes. Solution: the deployment must be made using the vSphere Web Client (Flash) or the OVF Tool
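For the OVF Tool workaround in point 11, the invocation looks something like the following sketch. Everything here is a placeholder (datastore, network, names, target path), and you should verify the OVA's property names yourself before deploying; the command is echoed so it can be reviewed, drop the `echo` to run it for real:

```shell
# Sketch of an ovftool deployment of the NSX Manager OVA. All values are
# placeholders; list the real property names first by running ovftool
# against the OVA with no target. Echoed for review; remove 'echo' to execute.
echo ovftool --acceptAllEulas --datastore=DS01 --network=Management \
  --name=nsx-manager-01 --deploymentOption=medium \
  --prop:nsx_role=nsx-manager \
  --prop:nsx_hostname=nsx-manager-01.lab.local \
  --prop:nsx_isSSHEnabled=True \
  nsx-unified-appliance.ova \
  'vi://administrator@vsphere.local@vcenter.lab.local/Datacenter/host/Cluster'
```

Because ovftool passes the Boolean property values through as typed, this avoids the case-translation issue seen with the H5 client.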

Step 1. Download the NSX Manager and NSX Controller OVAs from My VMware
Step 2. Deploy the NSX Manager VM

  1. Open the vSphere Web Client and deploy the OVF Template (nsx-unified-appliance- 
  2. Provide the standard metrics (VM Name, Location, Storage, Networking)
  3. At the Select Configuration screen select Medium and click Next
  4. Enter valid values for the appliance at the Customize template screen, ensuring the Role Name is set to nsx-manager, click Finish and power on the appliance

NOTE: It is recommended that SSH be enabled on the appliance, if your security policy allows it, to ease configuration and troubleshooting. It can be disabled once the configuration is complete

  5. After the installation completes, browse to https://<address of appliance>/, log on using the admin credentials provided during template customization and agree to the EULA.
  6. Select System > Configuration > License and install your NSX License

Step 3. Configuring Automatic Backups

The following outlines the process for setting up an automated backup of the NSX-T Manager. In addition to backing up the virtual machines/appliances, it is recommended that application backups are taken at a regular interval; they consume very little space but enable rollback in the event of an unauthorized or accidental misconfiguration. NSX-T provides an automated backup method to an SFTP server on a configurable schedule to allow restore points to be created. Before we begin, a couple of points:

  1. SFTP is the only configurable protocol in the GUI
  2. The target SFTP server must be configured to serve an ECDSA Key in order for the supported SSH thumbprint validation to work (requires a SHA-256 fingerprint generated from the ECDSA Key)
  3. The schedule allows you to set how often backups are taken but not when; i.e. you can’t set it to run at 1:00pm, only to run every 5 minutes

1. Log on to your SSH/SFTP server and determine the thumbprint for the server using ssh-keygen -lf <keyfilename>
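If your SFTP server is not already serving an ECDSA host key (point 2 above), you can generate one and take the SHA-256 fingerprint that NSX-T validates against. A sketch assuming OpenSSH, with the file path as a placeholder:

```shell
# Generate an ECDSA host key with no passphrase, then print its SHA-256
# fingerprint. Install it as a HostKey in sshd_config (typically
# /etc/ssh/ssh_host_ecdsa_key) and restart sshd; the path is a placeholder.
ssh-keygen -t ecdsa -f ssh_host_ecdsa_key -N ''
ssh-keygen -lf ssh_host_ecdsa_key.pub -E sha256
```

The SHA256:... string from the second command is the thumbprint to supply in the backup configuration.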

2. Log on to the NSX-T Manager, select System > Utilities > Backup from the menu and click Edit

3. Check Enabled next to Automatic Backup and enter the details for the SFTP server and set the Passphrase to encrypt the backups; select the schedule tab and adjust the frequency as required and click Save

4. A backup will complete after the settings are saved; review the Output and ensure that the backups completed successfully.

Now when you or someone else breaks your Lab/Production you might have a way back to a known good state quickly.

Step 4. Install the NSX Control Plane

The NSX Controller is an advanced distributed state management system that provides control plane functions for NSX-T logical switching and routing. It distributes network information to the hypervisors, removes the VXLAN dependency on multicast routing/PIM in the physical network and suppresses ARP broadcast traffic in VXLAN networks. The Control Plane can be installed as a single VM or as a cluster of three Controller appliances.

NOTE: The Controller appliances are deployed as 4 vCPU/16GB RAM (fully reserved) by default; for lab deployments, if you don’t have the resources, 1 vCPU/4GB memory should be sufficient to get you going.

1. Deploy the nsx-controller- OVA

2. If you are deploying a three node Control Cluster repeat this again for another two appliances

3. Logon to the NSX Manager via SSH/CLI and execute get certificate api thumbprint

4. Now log on to the Controller nodes from the CLI/SSH console and execute the following to join each node to the NSX-T Manager: join management-plane NSX-Manager username admin thumbprint <thumbprint>

Verify that the Controller Cluster nodes are Up and showing Manager connectivity from the NSX-T Manager (System > Overview)

5. Logon to your first Controller Appliance via SSH/CLI and set the cluster shared secret by executing set control-cluster security-model shared-secret and then initialize the cluster by executing initialize control-cluster 

You can verify the Cluster by executing get control-cluster status verbose and ensure that the node is reporting as master, is in majority, has a status of active, and the Zookeeper Server IP is reachable, ok.

6 (Optional – only if you are deploying a multi-node cluster). Log on to the secondary Controller appliances via SSH/CLI, set the cluster shared secret by executing set control-cluster security-model shared-secret and record the output of get control-cluster certificate thumbprint

Next, log on to the master NSX Controller configured in Step 5 and execute join control-cluster <Cluster Node X IP> thumbprint <Cluster Node X Thumbprint> for each of the controllers

Verify that the control nodes are all showing as active by executing get control-cluster status

Finally, log on to each of the non-primary Controller appliances via SSH/CLI and execute activate control-cluster (ensure that each controller has been activated successfully before moving to the next)

7. Verify that the cluster is online by executing get control-cluster status verbose on any node or from the NSX-T Manager (System > Overview)

8. Set up anti-affinity rules to ensure that the Control Plane cluster nodes do not run on the same hosts
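The anti-affinity rule in step 8 can also be scripted. A sketch using govc (an assumption on my part; you could equally use the Web Client or PowerCLI) with placeholder VM and rule names; it expects the GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables to point at your vCenter, and is echoed here for review:

```shell
# Create a DRS anti-affinity rule keeping the three Controller VMs on
# separate hosts. VM names and the rule name are placeholders; govc reads
# the vCenter endpoint and credentials from GOVC_* environment variables.
# Echoed for review; remove 'echo' to apply.
echo govc cluster.rule.create -name nsx-controllers-anti-affinity -enable \
  -anti-affinity nsx-controller-01 nsx-controller-02 nsx-controller-03
```

DRS will then keep the controllers apart, so a single host failure can only take out one cluster member.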

Next I will look at configuring an NSX-T Fabric and configuring Edge Clusters and Hosts. Happy New Year.

Configuring Storage IO Control IOPS Capacity for vCloud Director Org VDC Storage Profiles

Recently I began a small project to expose and control Storage IO Control (SIOC) in vCloud Director 8.20+. In order to leverage the capabilities there are a few things that need to be configured and considered. Before you begin, you need to determine the capabilities (IOPS) of each of your datastores; this is set as a custom attribute on each datastore and is exposed to vCloud Director as the “Available IOPS” for the datastore. A few things to note before you begin:

  1. You cannot enable IOPS support on a VMware Virtual SAN datastore
  2. You cannot enable IOPS support if the Storage Profile contains datastores that are part of a Storage DRS cluster; if any datastore in the Storage Profile is in an SDRS cluster, you can’t leverage SIOC in vCloud Director
  3. Each datastore can have a value set between 200 and 4000 (IOPS)
  4. You need to have vSphere Administrator rights on the vCenters hosting the datastores to complete the below
  5. The tagged datastores must be added to a SIOC enabled Storage Policy which is mapped to vCloud as a Provider VDC Storage Profile
  6. The Organisational VDC Storage Profile can then have SIOC capabilities set against it using the REST API (or PowerShell using my vCloud Storage Profile module)

Step 1. Set the iopsCapacity Custom Attribute

In order to expose SIOC in vSphere to vCloud Director, custom attributes have to be added to the datastores using the vSphere Managed Object Browser (MOB) as outlined in VMware KB 2148300; however, it’s much easier to do this through the vSphere Client or vSphere Web Client.

  1. Log on to the vSphere H5 Client (https://<vCenter>/ui), select Tags & Custom Attributes from the menu, select Custom Attributes and click Add
  2. Enter the attribute iopsCapacity, select the type Datastore and click Add
  3. Next, select Storage from the main menu, select each datastore for which you wish SIOC capabilities to be exposed in vCloud and, from the Actions menu, select Tags & Custom Attributes > Edit Custom Attributes
  4. Set the value for iopsCapacity and click OK
  5. Next, tag the datastores with a relevant tag and create a new Storage Profile with the VMware Storage IO Control provider for the SIOC-enabled datastores

Step 2. Configure Storage Profiles in vCloud Director

  1. After this has been set on all the relevant datastores, log on to vCloud Director and select vCenters > Refresh Storage Policies
  2. Add the Storage Profile to the relevant Organizations (Organizations > Organisational VDCs > Storage Policies)
  3. Review the Provider VDC and confirm that the IopsCapacity value shows a non-zero value when using the Get-ProviderVdcStorageProfile cmdlet (open PowerShell, connect to vCloud Director and import the module Module-vCloud-SIOC.psm1 available from here)
  4. Set the Storage IO Control settings using the Set-OrgVdcStorageProfile cmdlet

$objOrgVDCStorageProfile = Get-OrgVdcStorageProfile -OrgName "PigeonNuggets" | ? {$_.Name -eq "SIOC"}
$objOrgVDCStorageProfile | Set-OrgVdcStorageProfile -SIOCEnabled $true -DiskIopsMax 1000 -DiskIopsDefault 100 -DiskIopsPerGBMax 100

The Org VDC Storage Profile is now configured for SIOC, which is implemented in vSphere. SIOC as implemented in vCloud Director needs further work (manually tagging the datastores with capabilities and API-only exposure is a bit rough); however, the capabilities are beginning to be exposed, and further configuration can be made on individual virtual disks via the API (hopefully I will get to this in the near future). Hopefully this is of some value to you. #LongLiveVCD

SIOC and Provider/Organization VDC Storage Profile Management in vCloud Director with PowerShell

It has been a long time since my last post due to some major life events; however, thanks to some annoying jet lag, I have managed to get some work done on a project I have been working on slowly over the past couple of months: developing PowerShell cmdlets to expose and add support for updating VDC and Provider Storage Profiles/Policies in vCloud Director 8.20/9.0.

The rationale for creating these cmdlets was twofold:

  1. There is currently no way to set the Storage I/O control parameters in vCloud Director outside of the API
  2. The Org VDC/Provider Storage Profiles are not readily exposed in PowerCLI which makes them a bit difficult to work with (need to combine API calls and vCloud Views)

Why would you want to use these cmdlets? Two main use cases that I have:

  1. For orchestrating dynamic updates to Org VDC Storage Profile limits. For example, if you want to prevent organisations from consuming all of your backend storage in a short period of time (and have limits set), but don’t want to manually update the limits or have clients calling to ask why they can’t create a new VM or expand a disk, these cmdlets can be used to adjust the Org VDC limits based on the available storage in the backend Provider Storage Profile as space is consumed/reclaimed
  2. If you wish to implement SIOC in an Organization VDC Storage Policy, limit the IOPS available globally to that Storage Policy, etc.; or if there is a “peak”/“off-peak” arrangement with a customer whereby their Storage Policies adjust based on time of day (e.g. a Test tier is throttled during 9am-9pm), these might assist

The code is available on GitHub here or below. The documentation in the PowerShell (Get-Help <cmdlet> -Full) is more complete; however, below is a quick summary of the main user functions and how to use them:
  • Get-OrgVdcStorageProfile : Returns the Storage Policies/Profiles which are defined on the target Organisation Virtual Datacenter object.
  • Set-OrgVdcStorageProfile : Sets the properties of a provided Org VDC Storage Policies/Profiles.
  • Get-ProviderVdcStorageProfile : Returns the Provider VDC Storage Profile objects for the target organisation.
  • Set-ProviderVdcStorageProfile : Allows the settings to be adjusted on a Provider VDC Storage Profile.

These cmdlets are a bit rough and there is more work to do when time permits, but they have been tested on PowerCLI 6.5.1 and vCloud Director 8.20.1 and 9.0; I hope you get some value from them. #LongLiveVCD


PowerShell Cmdlets for Managing VMware Photon Platform 1.2.1

VMware announced the End of Availability (EOA) of Photon Platform on 6th October 2017, with the PKS service due to launch soon. This is a product I started looking at in June when developing a strategy for offering Kubernetes-as-a-Service to multi-tenant customers, and assessing whether it was viable to develop into a Service Provider offering. I was not happy with the management tools available, but there was a lot of potential for this to be a good fit.

I figured the best way to understand the product was to add some value by developing PowerShell cmdlets against the REST API to manage the platform, which quickly turned into a major exercise in itself; the scope got away from me, developing this part-time after work and other projects. With the product going end of life I have decided to publish what I have as-is (very incomplete, partially tested, needs work) as a code sample; if anyone is using Photon Platform at the moment this may be useful or at least of some benefit :) This was really useful for me to get better at coding against a REST-based API using Swagger and some reverse engineering, so although this was not how I wanted this project to end it has been valuable, and I hope it is to someone else :)

There is a bunch of functions to explore (Get-Command -Module Module-Vmware-PhotonPlatform and Get-Help <cmdlet>) which should be (mostly) documented; I have made an effort, but beware the code quality :)

Available on Github here.

Basic functionality that generally has been tested and works:

  • Host Management
  • Availability Zone Management
  • Cloud Image Management
  • Subnet Management
  • Quota Management
  • Tenant/Project Management