This section explains SE datapath isolation and the datapath heartbeat and IPC encapsulation configuration knobs.

This feature creates two independent, mutually exclusive CPU sets: one for datapath (se_dp) functions and one for control plane SE functions. Carving out a separate control plane CPU set reduces the number of se_dp instances. The number of se_dp instances deployed depends on the number of available host CPUs in auto mode, or on the configured number of non_dp CPUs in custom mode.

This feature is supported only on instances with 8 or more host CPUs.

Note:

This mode of operation can be enabled for latency- and jitter-sensitive applications.

For Linux Server Cloud only, the following prerequisites must be met to use this feature:

  1. The cpuset package cpuset-py3 must be installed on the host, and the cset binary must be present at /usr/bin/cset (a symbolic link may need to be created).

  2. The taskset utility must be present on the host.

  3. The future Python package, which is required by the cset module, must be installed through pip3.

For full access environments, the requisite packages will be installed as part of the Service Engine installation.
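A minimal Python sketch of a host-side check for these prerequisites follows. The paths and package names are the ones listed above; the script itself is only illustrative and not part of the product.

```python
#!/usr/bin/env python3
"""Illustrative check for the Linux Server Cloud prerequisites listed above."""
import importlib.util
import os
import shutil

checks = {
    # cpuset-py3 must provide cset at this location (a symlink may be needed)
    "cset at /usr/bin/cset": os.path.exists("/usr/bin/cset"),
    # the taskset utility must be present on the host
    "taskset utility": shutil.which("taskset") is not None,
    # the 'future' package (installed through pip3) is required by the cset module
    "python 'future' package": importlib.util.find_spec("future") is not None,
}

for name, ok in checks.items():
    print(f"{'OK     ' if ok else 'MISSING'}  {name}")
```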

You can enable this feature through the following SE Group knobs:

  • se_dp_isolation (Boolean) — Deactivated by default. Enabling this knob creates two CPU sets on the SE. Toggling it requires an SE reboot.

  • se_dp_isolation_num_non_dp_cpus (Integer) — Allows 1 to 8 CPUs to be reserved as non_dp CPUs. Configuring 0 enables auto distribution, which is the default. Modifying this knob requires an SE reboot.
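Outside the UI, these knobs can also be set on the SE group object through the REST API. The following Python sketch assumes the serviceenginegroup endpoint, basic authentication, and an SE group named Default-Group; treat these details as assumptions to be verified against your controller and API version, not as a definitive reference.

```python
# Illustrative only: enable SE datapath isolation on an SE group via the REST API.
# The controller address, credentials, group name, and auth scheme are placeholders.
import requests

CONTROLLER = "https://controller.example.com"   # hypothetical controller address
AUTH = ("admin", "password")                    # replace with real credentials

# Fetch the SE group object by name.
resp = requests.get(
    f"{CONTROLLER}/api/serviceenginegroup?name=Default-Group",
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
seg = resp.json()["results"][0]

# Enable isolation and reserve 2 CPUs for non-datapath functions.
# Toggling either knob requires an SE reboot to take effect (see above).
seg["se_dp_isolation"] = True
seg["se_dp_isolation_num_non_dp_cpus"] = 2

# Write the modified object back.
resp = requests.put(seg["url"], json=seg, auth=AUTH, verify=False)
resp.raise_for_status()
```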

The following table shows the CPU distribution in auto mode:

Number of Total CPUs    Number of non_dp CPUs
1-7                     0
8-15                    1
16-23                   2
24-31                   3
32-1024                 4

Examples:
  1. Isolation mode in an instance with 16 host CPUs in auto mode will result in 14 CPUs for datapath instances and 2 CPUs for control plane applications.

  2. Isolation mode in an instance with 16 host CPUs in custom mode, with se_dp_isolation_num_non_dp_cpus configured to 4, will result in 12 CPUs for datapath instances and 4 CPUs for control plane applications.
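The distribution in the table and the two examples above reduce to a small calculation. The following Python sketch reproduces it; the helper name and structure are illustrative and not taken from the product code.

```python
def non_dp_cpus(total_cpus: int, configured_non_dp: int = 0) -> int:
    """CPUs reserved for control plane (non_dp) functions.

    configured_non_dp == 0 selects auto mode, which follows the table above;
    a value of 1-8 is the custom (explicitly configured) setting.
    """
    if configured_non_dp:
        return configured_non_dp           # custom mode
    if total_cpus < 8:
        return 0                           # feature requires 8 or more CPUs
    return min(total_cpus // 8, 4)         # 8-15 -> 1, 16-23 -> 2, ..., capped at 4


for total, custom in [(16, 0), (16, 4)]:
    non_dp = non_dp_cpus(total, custom)
    print(f"{total} CPUs: {total - non_dp} datapath, {non_dp} non_dp")
# 16 CPUs: 14 datapath, 2 non_dp   (auto mode, example 1)
# 16 CPUs: 12 datapath, 4 non_dp   (custom mode, example 2)
```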

This feature is generally available (GA), and the following caveat applies:

  • The maximum value of se_dp_isolation_num_non_dp_cpus is 8 and must be set explicitly; in auto mode, the maximum remains 4.

Datapath Heartbeat and IPC Encap Configuration

The following datapath heartbeat and IPC encapsulation configuration knobs belong to the SE group:

  • dp_hb_frequency

  • dp_hb_timeout_count

  • dp_aggressive_hb_frequency

  • dp_aggressive_hb_timeout_count

  • se_ip_encap_ipc

  • se_l3_encap_ipc
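These knobs live on the same SE group object, so they can be applied with the same fetch-modify-write pattern shown earlier for the isolation knobs. The fragment below is only a sketch: the values are placeholders, and the valid ranges and units for each knob should be taken from the product documentation, not from this example.

```python
# Placeholder values only; consult the product documentation for the valid
# ranges and units of each knob before applying them.
heartbeat_and_ipc_knobs = {
    "dp_hb_frequency": 100,
    "dp_hb_timeout_count": 10,
    "dp_aggressive_hb_frequency": 100,
    "dp_aggressive_hb_timeout_count": 10,
    "se_ip_encap_ipc": 0,
    "se_l3_encap_ipc": 0,
}

# 'seg' stands for an SE group object fetched as in the earlier isolation
# sketch; merge the knobs in, then write the object back to the controller.
seg = {}
seg.update(heartbeat_and_ipc_knobs)
```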

License

  • License Tier — Specifies the license tier to be used by new SE groups. By default, this field inherits the value from the system configuration.

  • License Type — If no license type is specified, Avi Load Balancer applies default license enforcement for the cloud type. The default mappings are max SEs for a container cloud, cores for OpenStack and VMware, and sockets for Linux.

  • Instance Flavor — Instance type is an AWS term. In a cloud deployment, this parameter identifies one of a set of AWS EC2 instance types. Flavor is the analogous OpenStack term. Other clouds (especially public clouds) may have their own terminology for essentially the same thing.

SE Group Advanced Tab

The Advanced tab in the Service Engine Group supports the configuration of optional functionality for SE groups. This tab only exists for clouds configured with write access mode. The appearance of some fields is contingent upon selections made.

Service Engine Name Prefix: Enter the prefix to use when naming the SEs within the SE group. This name is visible both within Avi Load Balancer and as the name of the virtual machine within the virtualization orchestrator.

Note:
  • From Avi Load Balancer version 22.1.x, use the Object Name Prefix in the cloud settings (similar to Service Engine Name Prefix); this field supports the "-" character.

  • In Avi Load Balancer versions 20.1.x and 21.1.x, Service Engine Name Prefix does not support the "-" character.

Service Engine Folder — SE virtual machines for this SE group will be grouped under this folder name within the virtualization orchestrator.

Delete Unused Service Engines After — Enter the number of minutes to wait before the Controller deletes an unused SE. Traffic patterns can change quickly, and a virtual service may therefore need to scale across more SEs with little notice. Setting this field to a high value ensures that Avi Load Balancer keeps unused SEs available in the event of a sudden spike in traffic. A shorter value means the Controller may need to create a new SE to handle a burst of traffic, which can take a couple of minutes.

Host & Data Store Scope

  • Host Scope Service Engine: SEs are deployed on any host that most closely matches the resources and reachability criteria for placement. This setting directs their placement as follows:

    • Any: The default setting allows SEs to be deployed to any host that best fits the deployment criteria.

    • Cluster: Excludes SEs from deploying within specified clusters of hosts. Checking the Include check box reverses the logic, ensuring SEs only deploy within specified clusters.

    • Host: Excludes SEs from deploying on specified hosts. The Include check box reverses the logic, ensuring SEs are deployed only on specified hosts.

  • Data Store Scope for Service Engine Virtual Machine: Sets the storage location for SEs to store the OVA (vmdk) file for VMware deployments.

    • Any: Avi Load Balancer will determine the best option for data storage.

    • Local: The SE will only use storage on the physical host.

    • Shared: Avi Load Balancer will prefer using the shared storage location. When this option is selected, specific data stores may be identified for exclusion or inclusion.

Advanced HA & Placement

  • Buffer Service Engines: This is excess capacity provisioned for HA failover. In elastic HA N+M mode, this capacity is expressed as M, an integer number of buffer service engines, which represent spare capacity dedicated to SE HA. Internally, M translates into a count of potential virtual service placements: Avi Load Balancer multiplies M by the maximum number of virtual services per SE. For example, if 2 buffer SEs are requested (M=2) and the maximum number of virtual services per SE is 5, the count is 10. If the maximum number of SEs per group has not been reached, Avi Load Balancer spins up additional SEs to maintain the ability to perform 10 placements. As illustrated in the accompanying figure, six virtual services have already been placed and the current spare capacity is 14 placements, more than enough to perform 10 placements. When SE2 fills up, spare capacity will be exactly sufficient; an 11th placement on SE3 would reduce the count to 9 and require SE5 to be spun up. (See the calculation sketch following this list.)

  • Scale Per Virtual Service: A pair of integers determines the minimum and maximum number of active SEs onto which a single virtual service may be placed. With native SE scaling, the greatest value that can be entered as a maximum is 4; with BGP-based SE scaling, the limit is much higher, governed by the ECMP support on the upstream router.

  • Service Engine Failure Detection: This option governs how long Avi Load Balancer takes to conclude that an SE has failed and takeover must take place. Standard is approximately 9 seconds; aggressive is approximately 1.5 seconds.

  • Auto-Rebalance: If this option is selected, virtual services are automatically migrated (scaled in or out) when CPU loads on SEs fall below the minimum threshold or exceed the maximum threshold. If this option is off, the result is limited to an alert. The frequency with which Avi Load Balancer evaluates the need to rebalance can be set to some number of seconds.

  • Affinity: Selecting this option causes Avi Load Balancer to allocate all cores for SE VMs on the same socket of a multi-socket CPU. The option is applicable only in vCenter environments. Appropriate physical resources must be present on the ESX host; if they are not, SE creation will fail and manual intervention will be required.

    Note:

    The vCenter drop-down menu populates only shared datastores. Non-shared datastores (that is, each ESX host's own local datastore) are filtered out of the list because, by default, when an ESX host is chosen for SE VM creation, the local datastore of that host is picked.

  • Dedicated dispatcher CPU: Selecting this option dedicates the core that handles packet receive/transmit from/to the data network to just the dispatching function. This option makes most sense in a group whose SEs have three or more vCPUs.

  • Override Management Network: If the SEs require a different network for management than the Controller, that network is specified here. The SEs will use their management route to establish communications with the Controllers.

    For more information, see Deploy SEs in Different Datacenter from Controllers in the VMware NSX Advanced Load Balancer Administration Guide.

    Note:

    This option is available only if the SE group's overridden management network is DHCP-defined. An administrator's attempt to override with a statically-defined management network (Infrastructure > Cloud > Network) will not work because a default gateway cannot be specified in the statically-defined subnet.
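Returning to the Buffer Service Engines setting described at the top of this list, the worked numbers there amount to one multiplication and a comparison. The Python sketch below restates that calculation; the function and variable names are illustrative only.

```python
def required_placement_capacity(buffer_ses: int, max_vs_per_se: int) -> int:
    """Spare placement capacity to maintain: M buffer SEs x max virtual services per SE."""
    return buffer_ses * max_vs_per_se


# The example from the text: M = 2 buffer SEs, max_VS_per_SE = 5 -> 10 placements.
required = required_placement_capacity(buffer_ses=2, max_vs_per_se=5)

# In the illustration, current spare capacity is 14 placements, so no new SE is
# needed; an 11th placement dropping spare capacity to 9 would trigger one.
current_spare = 14
print(f"required={required}, spare={current_spare}, "
      f"spin up another SE: {current_spare < required}")
```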

Security

HSM Group: Hardware security modules can be configured under Templates > Security > HSM Groups. An HSM is an external security appliance used for secure storage of SSL certificates and keys. An HSM group dictates how SEs reach and authenticate with the HSM.

For more information, see Physical Security for SSL Keys.

Log Collection & Streaming Settings

  • Significant Log Throttle: This limits the number of significant log entries generated per second per core on an SE. Set this parameter to zero to deactivate throttling of significant logs.

  • UDF Log Throttle: This limits the number of user-defined (UDF) log entries generated per second per core on an SE. UDF log entries are generated due to the configured client log filters or the rules with logging enabled. Default is 100 log entries per second. Set this parameter to zero to deactivate throttling of the UDF log.

  • Non-Significant Log Throttle: This limits the number of non-significant log entries generated per second per core on an SE. Default is 100 log entries per second. Set this parameter to zero to deactivate throttling of the non-significant log.

  • Number of Streaming Threads: Number of threads to use for log streaming, ranging from 1-100.

Other Settings

By default, the Avi Load Balancer Controller creates and manages a single security group (SG) for an SE. This SG manages the ingress/egress rules for the SE's control-plane and data-plane traffic. In certain customer environments, custom SGs may also need to be associated with the SE management and/or data-plane vNICs.

  • For more information about SGs in OpenStack and AWS clouds, see:

  • Custom Security Groups in OpenStack in the Avi Load Balancer Installation Guide.

  • Security Groups in the Avi Load Balancer Installation Guide. Supported only for AWS clouds: when this option is enabled, Avi Load Balancer creates and manages security groups along with the custom security groups provided by the user; if it is disabled, only the custom SGs provided by the user are used.

  • Management vNIC Custom Security Groups: Custom security groups to be associated with management vNICs for SE instances in OpenStack and AWS clouds.

  • Data vNIC Custom Security Groups: Custom security groups to be associated with data vNICs for SE instances in OpenStack and AWS clouds.

  • Add Custom Tag: Custom tags are supported for Azure and AWS clouds and are useful in grouping and managing resources. Click the Add Custom Tag hyperlink to configure this option. The CLI is described in the topic Adding Custom Tags Using CLI for Azure and AWS in the Avi Load Balancer Installation Guide.

    • Azure tags enable key:value pairs to be created and assigned to resources in Azure. For more information on Azure tags, refer to Azure Tags.

    • AWS tags help manage instances, images, and other Amazon EC2 resources; you can optionally assign your own metadata to each resource in the form of tags. For more information on AWS tags, see Configuring a Tag for Auto-created SEs in AWS in the Avi Load Balancer Installation Guide.
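As a concrete illustration of such key:value pairs, the snippet below shows how custom tags might be represented. The field names (tag_key, tag_val) and payload shape are assumptions to be verified against your controller's API schema, not a documented format.

```python
# Hypothetical representation of custom tags as key:value pairs; verify the
# actual field names and payload shape against your controller's API schema.
custom_tags = [
    {"tag_key": "environment", "tag_val": "production"},
    {"tag_key": "owner", "tag_val": "network-team"},
]

for tag in custom_tags:
    print(f"{tag['tag_key']}: {tag['tag_val']}")
```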

VIP Autoscale

  • Display FIP Subnets Only: Only display FIP subnets in the drop-down menu.

  • VIP Autoscale Subnet: UUID of the subnet for the new IP address allocation.