VMware Network Automation combines the modern microservices architecture of vRealize with VMware NSX network virtualization to enable rapid application rollout. The solution automates VMware NSX via VMware vRealize Automation to deliver complete workload lifecycle automation through networking, compute, and security services that make it simple to template, provision, and update complete environments. That, in turn, enables businesses to accelerate application delivery and drive overall agility.

The latest iterations of the vRealize Automation native integration with NSX-T include multiple new capabilities, such as support for NSX-T Federation, distributed firewall configurations from NSX-T, a shared gateway across on-demand networks, and many others.

In this post, we will provide an overview of the feature-set available with this native integration. The post doesn't aim to be exhaustive, so don't hesitate to look at the vRealize Automation documentation for more details.

Setting up the Environment

The native integration allows for consumption of NSX-T constructs from vRealize Automation after a simple configuration.

The goal is for the cloud admin to be able to offer users a self-service catalog, through Service Broker, that enables the deployment of complex topologies with consistent governance policies across the cloud - all while abstracting the underlying infrastructure and its complexity from end users.

The first step is to configure this abstraction via cloud accounts and network profiles.

Cloud Account

It takes only a few clicks to add an NSX-T Cloud account in vRealize Automation and define its properties.

The cloud account defines:

  • Whether the NSX Manager is local or global (as vRealize Automation supports federation)
  • vCenters that are associated with this NSX-T (multiple vCenters can be mapped to the same NSX-T)
  • Which NSX-T API to use (our recommendation is to use the latest policy API)

As with other objects in vRealize Automation, we can associate capability tags (such as "london", "nsxt-london", "paris", or "nsxt-paris") that can be tied to consumption later.

As soon as this configuration is done, vRealize Automation data-collects the configuration from the NSX-T managers and allows the use of existing networks, security groups, Tier-0 gateways, and so on.
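
These capability tags can later be matched from network profiles and cloud templates through constraints. As a minimal, hypothetical sketch (the resource name and tag value are illustrative; the constraint tag simply has to match a capability tag defined during infrastructure setup), a template resource can request placement on the London NSX-T instance like this:

    resources:
      app_network:
        type: Cloud.NSX.Network
        properties:
          networkType: existing
          constraints:
            # Hard constraint: only placements carrying this capability tag are eligible
            - tag: 'nsxt-london'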

Network Profiles

vRealize Automation provides the agility and flexibility to apply network configurations, while also controlling how those resources are used.

That's where network profiles come into play, allowing you to define which resources (networks, Tier-0 gateways, IPAM, etc.) are made available. Profiles define IP allocation, how existing resources can be used (existing networks, existing load balancers, and so on), and how on-demand resources are created (where they are connected, which IP addresses they receive, etc.).

For on-demand networks, this approach takes advantage of the NSX-T two-tier model for routing:

  • Tier-0 gateways or Tier-0 VRFs, pre-configured to define connectivity to the outside world
  • Tier-1 gateways, which will be commissioned and decommissioned via automation to provide networking and security services (NAT, load balancing, and so on)

Instead of deploying and deleting edge appliances at each deployment, the NSX-T architecture instantiates Tier-1 gateways on the edge cluster (made of either VMs or bare-metal servers) specified in the profile - thus minimizing complexity and allowing for faster and more efficient provisioning.

This structure enables admins to create environments on demand and have them land with the correct connectivity and IP scheme, providing both agility and governance for networking.

Defining Deployments in the Cloud Template

The next step is to define the template for the virtualized application, including networking and security components. vRealize Automation provides standardized, repeatable deployment processes to deliver flexible, self-service, and consistent operations that reduce human labor, human error, and associated costs.

The cloud template exposes numerous networking and security configurations from NSX-T, either as cloud-agnostic objects or as NSX-specific objects. NSX-specific objects provide additional parameters that are not exposed via the cloud-agnostic ones.

Constructing the desired model is simplified by the ability to drag and drop components and link them together graphically. The canvas then automatically generates the corresponding infrastructure-as-code, which defines the template and allows for easy export and versioning.

Here we can see an example:
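
As an indicative, simplified sketch (resource names, image, and flavor mappings below are hypothetical and depend on your environment), the infrastructure-as-code behind the canvas looks roughly like this:

    formatVersion: 1
    inputs: {}
    resources:
      web_net:
        type: Cloud.NSX.Network
        properties:
          # On-demand network with outbound-only connectivity
          networkType: outbound
          constraints:
            - tag: 'nsxt-london'
      web_vm:
        type: Cloud.vSphere.Machine
        properties:
          image: ubuntu-20.04      # image mapping defined during infrastructure setup
          flavor: small            # flavor mapping defined during infrastructure setup
          networks:
            - network: '${resource.web_net.id}'

The following sections walk through the main building blocks of such a template.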

Networking

On the cloud template, the network resource and its properties define the networks deployed from this template. Deployed workloads can either leverage already existing networks or create new ones; networks created at deployment time are called on-demand networks.

For NSX-T, the network types available are the following:

  • Existing - A network already pre-created in NSX-T and used by vRealize Automation to connect VMs
  • Private - An on-demand network created by vRealize Automation without inbound and outbound connectivity
  • Outbound - An on-demand network created by vRealize Automation providing only outbound connectivity
  • Routed - An on-demand network created by vRealize Automation providing inbound and outbound connectivity
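
In the template YAML, the type is selected through the networkType property of the network resource. A minimal sketch, with illustrative names and tags:

    resources:
      backend_net:
        type: Cloud.NSX.Network
        properties:
          networkType: private       # no connectivity outside the deployment
      frontend_net:
        type: Cloud.NSX.Network
        properties:
          networkType: routed        # inbound and outbound connectivity
          constraints:
            - tag: 'nsxt-london'     # matched against capability tags on the network profile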

Each time a deployment is performed from this template, it is translated into NSX-T as follows.

An on-demand network with outside connectivity creates a Tier-1 gateway connected to the Tier-0 gateway. The Tier-0, edge cluster, and IP addresses used are the ones defined in the network profile.

The benefit here is the ability to define complete, isolated environments without the risk of human error.

NAT Resource

A NAT resource manages inbound access configuration for machines running on the outbound network. Those VMs are deployed behind a Tier-1 gateway performing NAT, so any access from the outside must be specifically defined through this resource, which allows for port forwarding.

Here's an example of how you can access VMs on this outbound network using the NAT resource.
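
As a rough, hedged sketch only (the exact resource type name and rule fields vary by vRealize Automation version and should be verified against the cloud template schema; names below are illustrative), a port-forwarding rule on the outbound network from the earlier sketch could be expressed along these lines:

    resources:
      app_nat:
        type: Cloud.NSX.Nat          # NAT resource attached to the outbound network (type name approximate)
        properties:
          outboundNetwork: '${resource.web_net.id}'
          natRules:
            # Illustrative DNAT rule: expose SSH on the VM through port 2222 on the Tier-1
            - protocol: TCP
              ports: 2222            # port exposed externally
              translatedPorts: 22    # port on the target machine
              targetLink: '${resource.web_vm.id}'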

Deployments resulting from this blueprint will include the Tier-1s created for the outbound networks, the DNAT rules that enable access, and the SNAT rules allowing outside connectivity.

Gateway Resource

The gateway resource enables admins to granularly design communication within the deployment when on-demand networks are used. The cloud admin can define which networks communicate internally and which ones are isolated. This determines whether a Tier-1 is shared between networks or whether different Tier-1s are created for isolation.

Here's an example focusing on a deployment with two networks and a gateway:

The gateway is connected to both networks, specifying that a single Tier-1 router must be created in NSX-T for connectivity.
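
In YAML terms, and building on the two networks sketched earlier, this looks roughly like the following (resource names are illustrative): the gateway resource simply lists the on-demand networks that should share a Tier-1.

    resources:
      app_gateway:
        type: Cloud.NSX.Gateway
        properties:
          networks:
            # Networks attached to the same gateway resource land on a single Tier-1
            - '${resource.frontend_net.id}'
            - '${resource.backend_net.id}'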

Security

The integration allows for the consumption of the NSX-T distributed firewall by associating security groups with VMs. In our application blueprint, we have associated three groups with the two VMs present:

Two of them are existing groups, discovered from NSX-T, which will be applied to the VMs.

This allows us to ensure deployed VMs have the right security posture according to the rules defined by the security admin in NSX-T.

In addition, vRealize Automation allows admins to create new security groups and define rules within the cloud template, so that rules specific to the application being deployed are part of its generic description. Here's an example with a rule forbidding outbound SSH:
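
A hedged sketch of both patterns - an existing group matched by tag and an on-demand group with the outbound SSH deny rule (group names, tags, and rule fields are illustrative and should be checked against the schema of your vRealize Automation version):

    resources:
      existing_sg:
        type: Cloud.SecurityGroup
        properties:
          securityGroupType: existing   # discovered from NSX-T and matched via a tag
          constraints:
            - tag: 'web-tier'
      app_sg:
        type: Cloud.SecurityGroup
        properties:
          securityGroupType: new        # created on demand for this deployment
          rules:
            - name: deny-ssh-outbound
              direction: outbound
              protocol: TCP
              ports: 22
              access: Deny
              destination: any

On the canvas, these security group resources are then linked to the machines they should protect, and that association is reflected in the generated code.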

These rules are created in the NSX-T distributed firewall and take advantage of firewall categories. vRealize Automation places its rules under the Application category, which ensures that any rules provisioned by the security admin in NSX-T in earlier categories are applied first.

In addition, it's also possible to have vRealize Automation simply tag the VM in the template and carry the tag over to the VM representation in NSX-T where certain security policies can be applied.
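
For the tag-based approach, a minimal sketch (the tag key and value are hypothetical, and this assumes the machine's tags property is the mechanism used; whether NSX-T applies a policy depends on the group membership criteria the security admin has defined):

    resources:
      web_vm:
        type: Cloud.vSphere.Machine
        properties:
          image: ubuntu-20.04
          flavor: small
          tags:
            # Carried over to the VM's representation, where it can drive NSX-T group membership
            - key: nsx-tier
              value: web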

Load Balancing

vRealize Automation features a native integration with the NSX-T load balancer, which allows admins to either create a new load balancer in the blueprint or add a virtual IP to an existing load balancer defined in the network profile.

A load balancer is a major component of a modern application. The integration allows us to define, from a single place, the provisioning of the load balancer and its VIP with the associated pool, the protocol being load balanced, health checks, and so on.
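
A simplified sketch of an on-demand load balancer with a single VIP route and health check, reusing the network and machine from the earlier sketch (values and member references are illustrative; see the deep dive referenced below for the full set of options):

    resources:
      app_lb:
        type: Cloud.NSX.LoadBalancer
        properties:
          network: '${resource.web_net.id}'   # network hosting the VIP
          instances:
            - '${resource.web_vm.id}'         # pool members
          routes:
            - protocol: HTTP
              port: 80                        # VIP port
              instanceProtocol: HTTP
              instancePort: 8080              # member port
              healthCheckConfiguration:
                protocol: HTTP
                port: 8080
                urlPath: /health
                intervalSeconds: 10
                timeoutSeconds: 5
                healthyThreshold: 2
                unhealthyThreshold: 3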

This is described in great detail in vRA Cloud Assembly Load Balancer with NSX-T Deep Dive.

Also, a vRealize Orchestrator (vRO) plugin is available so you can leverage NSX Advanced Load Balancer (formerly Avi).

Multi-Site Management with Federation

Federation

One of the major additions to the vRealize Automation integration is the support for NSX-T Federation.

NSX-T Federation offers a single point of management for multiple sites and the ability to stretch networks and security groups across those sites. This provides operational simplicity and consistent policy configuration enforcement across sites, simplifying disaster recovery and multi-site management.

By integrating with NSX-T Federation, vRealize Automation discovers global segments and security groups that span multiple sites, enabling the cloud admin to offer network automation consistently across those sites.

The global manager is configured as a cloud account that links together the local manager accounts, so that vRealize Automation understands both local and stretched objects.

A cloud administrator can now scale network automation across multiple sites and accelerate the deployment of more resilient, better secured applications by providing cloud templates leveraging both global and local objects.

More details on this can be found in Scaling Network Automation with NSX-T Federation and vRealize Automation 8.5.

Conclusion

vRealize Automation and NSX-T work together to accelerate delivery, reduce operational costs, and answer the needs of modern applications. They do this by combining the tenancy, templating, and lifecycle management of vRealize Automation with the agility of NSX-T networking and security.

More details can be found in the vRealize Automation product documentation.
