VMware NSX-T 3.0 – What’s New

VMware just launched NSX-T 3.0, so let me give you an overview of some of the most exciting new features.

NSX-T 3.0 adds new features and capabilities in the areas of intrinsic security, modern application networking, and streamlined operations. I’ve picked out a few of the more notable ones below.

NSX Distributed IDS: An advanced threat detection engine purpose-built to detect lateral threat movement in east-west traffic. This will be available as an add-on subscription to customers with Advanced or Enterprise Plus licensing.

Federation: Centralized policy configuration and enforcement across multiple locations from a single pane of glass, enabling network-wide consistent policy and operational simplicity. This is by far the most eagerly awaited feature of this release.

I have spoken with several customers over the last six months who have been waiting for this particular feature, as it means NSX-T has now reached, and surpassed, feature parity with NSX-V. VMware will continue to develop Federation over the course of this year, so be sure to check the release notes carefully for what is and isn’t currently supported.

NSX-T for vSphere with Kubernetes (Project Pacific): NSX is designed in as the default pod networking solution for vSphere with Kubernetes and provides a rich set of networking capabilities including distributed switching and routing, distributed firewalling, and load balancing.

VRF Lite: Complete data plane isolation among tenants with a separate routing table, NAT, and Edge firewall support in each VRF on the NSX Tier-0 gateway.
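
To make that isolation concrete, here is a minimal Python sketch of per-VRF routing tables (the VRF names, prefixes, and uplink labels are illustrative, not NSX API objects): two tenants can use overlapping address space because each lookup only ever consults its own VRF’s table.

```python
import ipaddress

# Each VRF on the Tier-0 has its own routing table, so two tenants
# can use overlapping prefixes without colliding (names illustrative).
vrf_tables = {
    "tenant-a": {ipaddress.ip_network("10.0.0.0/24"): "tenant-a-uplink"},
    "tenant-b": {ipaddress.ip_network("10.0.0.0/24"): "tenant-b-uplink"},
}

def lookup(vrf, dst):
    """Longest-prefix match, consulting only the given VRF's table."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in vrf_tables[vrf] if addr in net]
    if not matches:
        return None
    return vrf_tables[vrf][max(matches, key=lambda net: net.prefixlen)]

# The same destination address resolves independently per tenant.
print(lookup("tenant-a", "10.0.0.5"))  # tenant-a-uplink
print(lookup("tenant-b", "10.0.0.5"))  # tenant-b-uplink
```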

L3 EVPN: Seamlessly connects telco Virtual Network Functions (VNFs) to the overlay network. The NSX Edge implements a standards-based BGP control plane to advertise IP prefixes into the telco core, running MP-BGP sessions with the telco Provider Edge/DC gateways.

NSX-T Support on VDS 7.0: NSX-T can now leverage the native VDS built into vSphere 7.0, and it is recommended that new deployments of NSX-T use this rather than the N-VDS. If you are an existing NSX-T customer already running on the N-VDS, the recommendation is to stay on it for the moment. However, you will eventually need to plan a migration away from the N-VDS, so consider the following when planning it:

  • VDS is configured through vCenter. N-VDS is vCenter independent. With NSX-T support on VDS and the eventual deprecation of N-VDS, NSX-T will be closely tied to vCenter and vCenter will be required to enable NSX.
  • The N-VDS is able to support ESXi host-specific configurations. The VDS uses cluster-based configuration and does not support ESXi host-specific configuration.
  • This release does not have full feature parity between N-VDS and VDS.
  • The backing type for VM and VMkernel interface APIs differs between VDS and N-VDS.

Security and Firewalling: It’s now possible to leverage Federation to apply a consistent security policy across multiple sites (note that VMC support will come in a future release). NSX-T 3.0 introduces the concept of a Global Manager, which can sync security policies across multiple sites, providing a single-pane-of-glass view.

VMware Cloud on AWS – Networking Connectivity – Default Route Options?

I’ll cover a recent design decision we had to make on whether or not to inject a default route from the on-premises network into VMConAWS.

At first glance this may not sound like a big deal; however, depending on the customer’s topology and footprint (i.e. whether they have existing on-premises locations), it could be something you need to consider carefully.

For example, egress costs for internet connectivity directly out of AWS are charged at a higher rate than egress across a Direct Connect connection. Therefore, if your customer already has an on-premises presence and infrastructure in place, it may well be more cost-effective to route traffic back on-premises and egress from there instead.
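
As a rough back-of-the-envelope illustration (the per-GB rates below are placeholders; actual AWS pricing varies by region, tier, and Direct Connect commitment, so check the current rate cards):

```python
# Illustrative per-GB egress rates only; real AWS pricing varies by
# region and Direct Connect commitment, so check the current rate cards.
igw_rate_per_gb = 0.09  # internet egress straight out of AWS via the IGW
dx_rate_per_gb = 0.02   # egress across a Direct Connect connection

monthly_egress_gb = 50_000

igw_cost = monthly_egress_gb * igw_rate_per_gb
dx_cost = monthly_egress_gb * dx_rate_per_gb

print(f"IGW egress:          ${igw_cost:,.2f}/month")
print(f"Direct Connect path: ${dx_cost:,.2f}/month")
print(f"Potential saving:    ${igw_cost - dx_cost:,.2f}/month")
```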

The internet breakout within VMConAWS via the IGW is also unfiltered and uninspected; only the NSX L4 firewalls protect that traffic. If you did want to egress directly out of AWS, you would need to stand up a transit VPC and deploy your own Layer 7 next-generation firewalls to inspect the traffic for you.

There is also an AWS limitation: AWS can only receive 100 routes from your network via BGP. Depending on the scale and topology of your network, summarising it may be very difficult and may well push you over the 100-route limit. If you do exceed the limit, the BGP session is terminated and connectivity over the Direct Connect is lost, so it is something you need to design against.
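
A quick way to sanity-check whether your advertised prefixes will fit is to collapse them with Python’s standard ipaddress module. This is only a rough sketch (the prefixes are illustrative, and what your routers actually advertise depends on their configuration):

```python
import ipaddress

# Prefixes you intend to advertise from on-premises into AWS (illustrative).
advertised = [
    "10.1.0.0/24", "10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24",
    "10.2.0.0/16", "192.168.10.0/24",
]

networks = [ipaddress.ip_network(p) for p in advertised]
summarised = list(ipaddress.collapse_addresses(networks))

print(f"Before summarisation: {len(networks)} routes")
print(f"After summarisation:  {len(summarised)} routes")
# The four contiguous /24s collapse into a single 10.1.0.0/22.

AWS_BGP_ROUTE_LIMIT = 100
if len(summarised) > AWS_BGP_ROUTE_LIMIT:
    print("Over the limit; the BGP session would be torn down.")
```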

This particular customer had already invested in standing up an on-premises SDDC as well as the VMConAWS SDDCs, which meant from a connectivity perspective they already had L7 firewalls and security appliances in place for internet connectivity. In the immediate term, therefore, we would route traffic across the Direct Connect and out of the on-premises internet breakout. This can always be revisited in the future as the VMC instance outgrows the on-premises deployments.

The diagram above shows how this looks from a traffic-flow perspective, with internet traffic originating in VMC being routed over the Direct Connect and out of the on-premises egress point.

However, making that design decision raised a question around how it might affect VPN connectivity, which is especially important if you want to use a route-based IPsec VPN as a backup connectivity method. You have to understand that internet connectivity is terminated on the IGW (Internet Gateway) inside the shadow VPC and not directly on the Tier-0 (that looks to be a future release item). Therefore, if we receive a default route from on-premises, we would not be able to route out to the IGW from the Tier-0 for VPN connectivity, as all traffic would be sent down the Direct Connect.

VMware has a solution for this little issue: inject a /32 route into the Tier-0 for the destination VTI (Virtual Tunnel Interface). As an example, to reach the destination VTI1, which is on-premises, traffic goes via the Tier-0 -> IGW rather than over the Direct Connect, where the default route would otherwise push it.
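
The reason the /32 wins comes down to ordinary longest-prefix matching, which a minimal sketch makes clear (the addresses and next-hop labels are illustrative):

```python
import ipaddress

# Tier-0 routing table after the /32 host route is injected (illustrative).
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "direct-connect",  # default learned from on-prem
    ipaddress.ip_network("203.0.113.10/32"): "igw",       # injected host route to the remote VTI peer
}

def next_hop(dst):
    """Return the next hop for the most specific matching route."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in routes if addr in net), key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("203.0.113.10"))  # igw            (the /32 wins)
print(next_hop("198.51.100.7"))  # direct-connect (everything else follows the default)
```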