This topic explains how SE-to-Controller communication can be established when Service Engines (SEs) are instantiated on a network that is isolated from the network of the Controller nodes.

The process of connecting starts with the first communication that a freshly instantiated SE sends to its parent Controller. A few examples of this type of deployment are:

  1. The Controller cluster is protected behind a firewall, while its SEs are on the public Internet.

  2. In a public-private cloud deployment, Controllers reside in the public cloud (for instance, AWS), while SEs reside in the customer’s private cloud.

Implementation

In addition to the management addresses that the Controllers in the cluster use to reach one another, you can specify for each Controller a second management IP address or a DNS-resolvable FQDN that SEs on the isolated network can reach. It is this second IP address or FQDN that the Controller embeds in the SE image used to spawn SEs. Avi Load Balancer has added the public_ip_or_name parameter to support this capability.
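
For reference, the cluster object that carries this parameter can also be inspected over the REST API. The following is a minimal sketch, assuming the /api/cluster endpoint and HTTP basic authentication; the credentials and addresses are placeholders.

# Inspect the current cluster object; each entry under "nodes" can carry
# a public_ip_or_name field alongside its private management "ip".
# (Placeholder credentials; -k skips certificate verification for brevity.)
curl -s -k -u admin:password https://10.10.30.102/api/cluster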

Setting the parameter through the Avi Load Balancer CLI

In the initial release, the parameter is accessible only through the REST API and the Avi Load Balancer CLI. The following CLI example uses a single-node cluster.

[admin:my-controller-aws]: > configure cluster
Updating an existing object. Currently, the object is:
+---------------+----------------------------------------------+
| Field         | Value                                        |
+---------------+----------------------------------------------+
| uuid          | cluster-223cc977-f0de-4c5e-9612-7b0254b3057d |
| name          | cluster-0-1                                  |
| nodes[1]      |                                              |
|   name        | 10.10.30.102                                 |
|   ip          | 10.10.30.102                                 |
|   vm_uuid     | 005056b02776                                 |
|   vm_mor      | vm-222393                                    |
|   vm_hostname | node1.controller.local                       |
+---------------+----------------------------------------------+
[admin:my-controller-aws]: cluster> nodes index 1
[admin:my-controller-aws]: cluster:nodes> public_ip_or_name 1.1.1.1
[admin:my-controller-aws]: cluster:nodes> save
[admin:my-controller-aws]: cluster> save
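
The same change can be made through the REST API mentioned above. The following is a minimal sketch, assuming the /api/cluster endpoint and HTTP basic authentication; in practice, you would fetch the full cluster object, add public_ip_or_name to every node entry, and write the object back.

# Fetch the full cluster object to a file (placeholder credentials;
# -k skips certificate verification for brevity).
curl -s -k -u admin:password https://10.10.30.102/api/cluster > cluster.json
# ... edit cluster.json so each node entry carries "public_ip_or_name": "1.1.1.1" ...
curl -s -k -u admin:password -X PUT -H "Content-Type: application/json" \
     -d @cluster.json https://10.10.30.102/api/cluster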

Explanation

  • From their network, the SEs cannot address (route to) the Controller at 10.10.30.102.

  • Administrative staff are aware that a NAT-enabled firewall is in place and programmed to translate 1.1.1.1 to 10.10.30.102.

  • The string parameter public_ip_or_name in the object definition of the first (and only) node of the cluster is set to 1.1.1.1. Controller cluster-0-1 therefore knows that it must embed 1.1.1.1 (not 10.10.30.102) into the SE image it creates for spawning SEs.

  • When an SE comes up for the first time, it therefore addresses its parent Controller at IP address 1.1.1.1.

  • The firewall’s NAT rule forwards this initial communication to IP address 10.10.30.102; the translation is completely transparent to the SE. A sketch of such a rule appears after this list.

  • Subsequent Controller-SE communications proceed as normal, as if the Controller and SEs were on the same network.
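
To make the address translation in this walkthrough concrete, the following is a minimal sketch of a destination-NAT rule as it might appear on a Linux-based firewall using iptables. It is purely illustrative: the firewall in a real deployment may be a hardware appliance or a cloud NAT gateway, and the interface name and port are assumptions.

# Translate traffic arriving for 1.1.1.1 to the Controller's private
# address 10.10.30.102. "eth0" is a placeholder interface; the SE-to-Controller
# secure channel is assumed here to use TCP 8443.
iptables -t nat -A PREROUTING -i eth0 -d 1.1.1.1 -p tcp --dport 8443 \
         -j DNAT --to-destination 10.10.30.102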

Important Notes

  • The public_ip_or_name field must be configured either on all the nodes in the cluster or on none of them; it cannot be set on only a subset of nodes. A quick check for this requirement appears after this list.

  • When this configuration is enabled, SEs from all clouds always use the public_ip_or_name value to reach the Controller. It is not currently possible for SEs from one cloud to use the private network while SEs from another cloud use the NATed network.

  • It is recommended to enable this feature while configuring the cluster, before any SEs are created, and not to modify the setting while SEs exist.
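
As a quick sanity check for the all-or-none requirement above, the cluster object can be examined to confirm that every node carries the field. The following is a minimal sketch, assuming the /api/cluster endpoint, HTTP basic authentication, and the jq utility; credentials and addresses are placeholders.

# Print public_ip_or_name for each cluster node; every line should show a
# value (all set), or every line should show UNSET (none set).
curl -s -k -u admin:password https://10.10.30.102/api/cluster \
  | jq -r '.nodes[] | "\(.name): \(.public_ip_or_name // "UNSET")"'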