When deploying Avi Load Balancer into existing environments, it is often necessary to migrate application workloads from legacy load balancers to NSX Advanced Load Balancer. NSX Advanced Load Balancer recognizes the stringent requirements of customers and the need to maintain uptime during a live migration. This section provides insight into the process of migrating from F5’s BIG-IP LTM to NSX Advanced Load Balancer.

Avi Load Balancer creates an intuitive, highly automated load balancing fabric that reduces operational complexity and the time needed to deploy, learn, and manage the system.

NSX Advanced Load Balancer provides the breadth of functionality common to competing application delivery controllers and load balancers, and it extends the load balancer’s capabilities and value by incorporating extensive analytics data, a centralized control plane, and a modern distributed data plane architecture.

Concepts

While F5 and NSX Advanced Load Balancer provide many similar high-level features, there are important differences in architecture, in the operation of various features, and even in the names of features and concepts.

| Concept | F5 Term | NSX Advanced Load Balancer Term |
| --- | --- | --- |
| Application proxy | Virtual server | Virtual Service. For more information, see the Virtual Service topic in the VMware NSX Advanced Load Balancer Configuration Guide. |
| Group of servers | Pool | Pool. For more information, see the Server Pools topic in the VMware NSX Advanced Load Balancer Configuration Guide. |
| Data plane scripting | iRules™ | DataScript. For more information, see the DataScripts Guide Overview topic in the VMware NSX Advanced Load Balancer DataScript Guide. |
| API | iControl™ | REST API |
| Load balancer | BIG-IP™ LTM™ + GTM™ | Service Engine. For more information, see the Service Engine topic in the VMware NSX Advanced Load Balancer Installation Guide. |
| Connection aggregation | OneConnect™ | Multiplexing. For more information, see the Connection Multiplexing topic in the VMware NSX Advanced Load Balancer Configuration Guide. |
| Central config manager | Enterprise Manager™ / BIG-IQ™ | Controller. For more information, see the Control Plane topic in the VMware NSX Advanced Load Balancer Installation Guide. |
| Orchestrator | None | Controller. For more information, see the Control Plane topic in the VMware NSX Advanced Load Balancer Installation Guide. |

Control Plane

Architecturally, NSX Advanced Load Balancer is managed by a Controller or a redundant cluster of Controllers. Rather than logging into and managing each pair of load balancer appliances, the NSX Advanced Load Balancer fabric is managed through the Controller cluster. A single Controller cluster may manage hundreds of Service Engines, even if they are deployed in different clouds such as VMware or OpenStack. You may also choose to deploy more than one Controller cluster, though this is usually done for geographically separate data centers. One cluster can manage both test and production environments separated into different tenants, or each environment can have its own cluster.
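For example, the Controller's REST API can be queried to list every Service Engine the cluster manages, regardless of which cloud each one runs in. The following is a minimal sketch in Python using the requests library; the Controller address, credentials, and API version are placeholders, and it assumes basic authentication is permitted for API access on the Controller.

```python
# Minimal sketch: list the Service Engines managed by a Controller cluster.
# The Controller address and credentials are placeholders, and the example
# assumes basic authentication is permitted for API access.
import requests

CONTROLLER = "https://controller.example.com"
AUTH = ("admin", "password")
HEADERS = {"X-Avi-Version": "22.1.3"}  # pin the API version in use

resp = requests.get(
    f"{CONTROLLER}/api/serviceengine",
    auth=AUTH,
    headers=HEADERS,
    verify=False,  # lab Controllers commonly use self-signed certificates
)
resp.raise_for_status()

# Each Service Engine record carries a name and a reference to its cloud.
for se in resp.json().get("results", []):
    print(se["name"], se.get("cloud_ref", ""))
```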

Data Plane

NSX Advanced Load Balancer load balancers, called Service Engines (SEs), may be deployed similarly to BIG-IP in active/standby pairs (using NSX Advanced Load Balancer's Legacy HA mode), or, preferably, in elastic HA mode with fully active groups. There are a number of configuration options to carve out separate groups through tenants, VRFs, clouds, and SE groups. Each application may have its own load balancer, or all applications may share a group of Service Engines. When migrating from BIG-IP, the appliance-pair versus fabric choice is one of the first architectural questions to answer in order to determine how best to consolidate and minimize unused load balancer capacity.
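The HA behavior is a property of the Service Engine group. As a minimal sketch under the same placeholder assumptions as above (Controller address, credentials, API version, basic authentication), the example below patches a hypothetical Service Engine group's ha_mode field; the enum names follow the serviceenginegroup object model, and the PATCH body uses the Avi API's replace convention.

```python
# Minimal sketch: change the HA mode of a Service Engine group by patching
# its ha_mode field.  The Controller address, credentials, and group UUID
# are placeholders; HA_MODE_SHARED_PAIR (elastic active/active),
# HA_MODE_SHARED (elastic N+M) and HA_MODE_LEGACY_ACT_STDBY
# (legacy active/standby) are the serviceenginegroup enum values.
import requests

CONTROLLER = "https://controller.example.com"
AUTH = ("admin", "password")
HEADERS = {"X-Avi-Version": "22.1.3", "Content-Type": "application/json"}

SE_GROUP_UUID = "serviceenginegroup-00000000-0000-0000-0000-000000000000"

resp = requests.patch(
    f"{CONTROLLER}/api/serviceenginegroup/{SE_GROUP_UUID}",
    auth=AUTH,
    headers=HEADERS,
    json={"replace": {"ha_mode": "HA_MODE_LEGACY_ACT_STDBY"}},
    verify=False,  # lab Controllers commonly use self-signed certificates
)
resp.raise_for_status()
print(resp.json().get("ha_mode"))
```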

Migration

Migration from existing, live environments is a delicate process. NSX Advanced Load Balancer provides migration services that combine migration tools with engineers who have decades of experience in load balancing.

Automated

BIG-IP LTM configurations can be automatically imported into NSX Advanced Load Balancer using the Avi Conversion Tool. The tool imports the configured objects and rules, along with keys and certificates, and simplifies the transition to Avi Load Balancer. This eliminates the potential for errors when converting BIG-IP configuration files that are often tens of thousands of lines long. The Avi Conversion Tool provides a complete output, showing every configuration setting that has been converted.
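As an illustration, the converter is typically run as a command-line tool against an exported BIG-IP configuration file. The invocation below is a hypothetical sketch: the package name (avimigrationtools), script name (f5_converter.py), file names, and flags vary by release, so consult the tool's --help output for the exact options available in your version.

```sh
# Hypothetical invocation; package, script, and flag names vary by release.
pip install avimigrationtools

# -f: exported BIG-IP LTM configuration file (placeholder name)
# -o: output directory for the converted Avi JSON configuration
# --controller_version: target Avi Controller release (placeholder value)
f5_converter.py -f bigip.conf -o /tmp/avi-output --controller_version 22.1.3
```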

Conversion of objects not automatically covered by the Conversion Tool

Some functionality from BIG-IP LTM may not yet be available through the automatic conversion process. For instance, although coverage is constantly improving, some F5 iRules may not yet be converted automatically. NSX Advanced Load Balancer’s experience is that about 75% of all iRules can be converted to native point-and-click features. iRules that cannot be implemented as native features are rewritten in NSX Advanced Load Balancer’s DataScript format, which is similar in logic and function but based on the more modern Lua language.
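To give a sense of the target format, the fragment below is an illustrative DataScript sketch, not output of the conversion tool. It performs the kind of simple request-path redirect a small iRule often carries, and it assumes the script is attached to the HTTP request event of a virtual service; avi.http.get_path() and avi.http.redirect() are DataScript API functions.

```lua
-- Illustrative DataScript sketch (hypothetical path and target URL):
-- redirect requests for a retired path, the sort of logic commonly
-- implemented as a small iRule on BIG-IP.
if avi.http.get_path() == "/legacy-app" then
    avi.http.redirect("https://www.example.com/new-app")
end
```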

Only the LTM configuration is migrated; other modules must still be migrated manually. If a feature or functionality cannot be converted directly, a VMware engineer may work with the customer to provide a workaround or determine the best course of action.

Cutover

Once the configuration is migrated to the NSX Advanced Load Balancer deployment, it is time to begin testing. All virtual services are migrated and left in a disabled state so they do not cause an ARP conflict. There are various methods for testing the configuration prior to cutting over. Most often, the virtual service is given a new IP address and marked as enabled.

The virtual service is deployed onto a Service Engine and is available for testing. This can involve accessing the virtual service directly by its IP address, using an alternate name in DNS, or altering a client hosts file. In this test scenario, SNAT is typically recommended so that no changes are required on the servers, yet traffic returns symmetrically through the NSX Advanced Load Balancer Service Engines rather than through the server’s default gateway.

Once the virtual service is ready to go live, it is disabled on the BIG-IP and enabled on NSX Advanced Load Balancer. Alternatively, if additional IP addresses are available, NSX Advanced Load Balancer can be configured to use a unique IP for the virtual service, and the cutover is performed by changing the IP address advertised through DNS. Traffic will gracefully bleed off the BIG-IP as new connections are processed through NSX Advanced Load Balancer, with no disruption to live traffic. This process can be repeated as necessary for all applications.
  • Virtual Service Status

    • The migration tool imports virtual services in a traffic-disabled state. Based on requirements, the virtual service state can be modified by patching the JSON configuration, as shown in the sketch after this list.

  • Policies

    • Avi Load Balancer supports reusing policies across multiple virtual services; this is enabled using the reuse_http_policy option.

    • If the reuse_http_policy option is not used, the migration tool duplicates the policies and applies an individual copy to each appropriate virtual service.

  • Application Profile

    • Avi Load Balancer’s default behavior is to share an application profile across multiple virtual services. This is the preferred approach, and any change to an application profile affects all of the attached virtual services. Use the --distinct_app_profile option during migration to create a unique application profile for each virtual service.
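Because the converter leaves virtual services traffic-disabled, the cutover step of enabling traffic can be performed from the UI or by patching the virtual service over the REST API. The sketch below follows the same placeholder assumptions used earlier (Controller address, credentials, API version, basic authentication) and uses a hypothetical virtual service UUID; enabled and traffic_enabled are fields of the virtualservice object.

```python
# Minimal sketch: enable a migrated virtual service and allow it to accept
# traffic by patching its JSON configuration.  The Controller address,
# credentials, and virtual service UUID are placeholders.
import requests

CONTROLLER = "https://controller.example.com"
AUTH = ("admin", "password")
HEADERS = {"X-Avi-Version": "22.1.3", "Content-Type": "application/json"}

VS_UUID = "virtualservice-00000000-0000-0000-0000-000000000000"

resp = requests.patch(
    f"{CONTROLLER}/api/virtualservice/{VS_UUID}",
    auth=AUTH,
    headers=HEADERS,
    json={"replace": {"enabled": True, "traffic_enabled": True}},
    verify=False,  # lab Controllers commonly use self-signed certificates
)
resp.raise_for_status()
```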