NSX-T Data Center resources that you need to configure include an overlay transport zone, a tier-0 logical router, a logical switch to connect the node VMs, IP blocks for Kubernetes nodes, and an IP pool for SNAT.

Important: If you are running with NSX-T Data Center 2.4 or later, you must configure NSX-T resources using the Advanced Networking & Security tab.

In the NCP configuration file ncp.ini, the NSX-T Data Center resources are specified using their UUIDs or names.

Overlay Transport Zone

Log in to NSX Manager and navigate to System > Fabric > Transport Zones. Find the overlay transport zone that is used for container networking or create a new one.

Specify an overlay transport zone for a cluster by setting the overlay_tz option in the [nsx_v3] section of ncp.ini. This step is optional. If you do not set overlay_tz, NCP will automatically retrieve the overlay transport zone ID from the tier-0 router.
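
For example, the corresponding ncp.ini entry might look like the following sketch, where the value is a placeholder for the UUID or name of your own transport zone:
    [nsx_v3]
    overlay_tz = <UUID or name of the overlay transport zone>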

Tier-0 Logical Router

Log in to NSX Manager and navigate to Advanced Networking & Security > Networking > Routers. Find the router that is used for container networking or create a new one.

Specify a tier-0 logical router for a cluster by setting the tier0_router option in the [nsx_v3] section of ncp.ini.

Note: The router must be created in active-standby mode.
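
For example, assuming a tier-0 router named t0-k8s (a placeholder name), the ncp.ini entry might look like this:
    [nsx_v3]
    tier0_router = t0-k8s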

Logical Switch

The vNICs used by the node for data traffic must be connected to an overlay logical switch. It is not mandatory for the node's management interface to be connected to NSX-T Data Center, although doing so makes setup easier. You can create a logical switch by logging in to NSX Manager and navigating to Advanced Networking & Security > Networking > Switching > Switches. On the switch, create logical ports and attach the node vNICs to them. The logical ports must have the following tags:
  • tag: <cluster_name>, scope: ncp/cluster
  • tag: <node_name>, scope: ncp/node_name
The <cluster_name> value must match the value of the cluster option in the [coe] section in ncp.ini.
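
For example, if the logical ports are tagged with k8s-cluster1 in the ncp/cluster scope (k8s-cluster1 is a placeholder cluster name), the matching ncp.ini entry would be:
    [coe]
    cluster = k8s-cluster1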

IP Blocks for Kubernetes Pods

Log in to NSX Manager and navigate to Advanced Networking & Security > Networking > IPAM to create one or more IP blocks. Specify the IP block in CIDR format.

Specify IP blocks for Kubernetes pods by setting the container_ip_blocks option in the [nsx_v3] section of ncp.ini.
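
For example, the ncp.ini entry might look like the following sketch; the values are placeholders, and listing multiple blocks as a comma-separated value is an assumption here:
    [nsx_v3]
    container_ip_blocks = <UUID or name of IP block 1>,<UUID or name of IP block 2>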

You can also create IP blocks specifically for no-SNAT namespaces (for Kubernetes) or clusters (for PCF).

Specify no-SNAT IP blocks by setting the no_snat_ip_blocks option in the [nsx_v3] section of ncp.ini.

If you create no-SNAT IP blocks while NCP is running, you must restart NCP. Otherwise, NCP will keep using the shared IP blocks until they are exhausted.

Note: When you create an IP block, the prefix length must not be larger than the value of the subnet_prefix parameter in NCP's configuration file ncp.ini. For more information, see Configmap for ncp.ini in ncp-rc.yml.
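
For example, a sketch of the related ncp.ini entries; the values are placeholders, the /24 pod subnet size is only an illustration, and placing subnet_prefix in the [nsx_v3] section follows the other options shown here and is an assumption:
    [nsx_v3]
    no_snat_ip_blocks = <UUID or name of the no-SNAT IP block>
    # Pod subnets carved from the IP blocks use this prefix length.
    subnet_prefix = 24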

IP Pool for SNAT

An IP pool in NSX Manager is used to allocate IP addresses for translating pod IPs through SNAT rules and for exposing Ingress controllers through SNAT/DNAT rules, similar to OpenStack floating IPs. These IP addresses are also referred to as external IPs.

Multiple Kubernetes clusters use the same external IP pool. Each NCP instance uses a subset of this pool for the Kubernetes cluster that it manages. By default, external subnets allocated from this pool use the same subnet prefix as pod subnets. To use a different subnet size, update the external_subnet_prefix option in the [nsx_v3] section of ncp.ini.

You can specify IP pools for SNAT by setting the external_ip_pools option in the [nsx_v3] section of ncp.ini.
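
For example, a sketch of the relevant ncp.ini entries; the pool placeholder and the example prefix length of 26 are illustrative only:
    [nsx_v3]
    external_ip_pools = <UUID or name of the external IP pool>
    # Optional: use a different prefix length for external (SNAT) subnets than for pod subnets.
    external_subnet_prefix = 26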

You can change to a different IP pool by changing the configuration file and restarting NCP.

Restricting an SNAT IP Pool to Specific Kubernetes Namespaces or PCF Orgs

You can specify which Kubernetes namespace or PCF org can be allocated IPs from the SNAT IP pool by adding the following tags to the IP pool:
  • For a Kubernetes namespace: scope: ncp/owner, tag: ns:<namespace_UUID>
  • For a PCF org: scope: ncp/owner, tag: org:<org_UUID>
You can get the namespace or org UUID with one of the following commands:
kubectl get ns -o yaml
cf org <org_name> --guid
Note the following:
  • Each tag should specify one UUID. You can create multiple tags for the same pool.
  • If you change the tags after some namespaces or orgs have been allocated IPs based on the old tags, those IPs will not be reclaimed until the SNAT configurations of the Kubernetes services or PCF apps change or NCP restarts.
  • The namespace and PCF org owner tags are optional. Without these tags, any namespace or PCF org can have IPs allocated from the SNAT IP pool.

Configuring an SNAT IP Pool for a Service

You can configure an SNAT IP pool for a service by adding an annotation to the service. For example,
    apiVersion: v1
    kind: Service
    metadata:
      name: svc-example
      annotations:
        ncp/snat_pool: <external IP pool ID or name>
    spec:
      selector:
        app: example
    ...

The IP pool specified by ncp/snat_pool must have the tag {"ncp/owner": cluster:<cluster>}.

NCP will configure the SNAT rule for this service. The rule's source IP is the set of backend pods, and the translated IP is the SNAT IP allocated from the specified external IP pool. If an error occurs when NCP configures the SNAT rule, the service will be annotated with ncp/error.snat: <error>. The possible errors are:
  • IP_POOL_NOT_FOUND - The SNAT IP pool is not found in NSX Manager.
  • IP_POOL_EXHAUSTED - The SNAT IP pool is exhausted.
  • IP_POOL_NOT_UNIQUE - The pool specified by ncp/snat_pool refers to multiple pools in NSX Manager.
  • SNAT_POOL_ACCESS_DENY - The pool's owner tag does not match the namespace of the service that is sending the allocation request.
  • SNAT_RULE_OVERLAPPED - A new SNAT rule is created, but the SNAT service's pod also belongs to another SNAT service, that is, there are multiple SNAT rules for the same pod.
  • POOL_ACCESS_DENIED - The IP pool specified by ncp/snat_pool does not have the tag {"ncp/owner": cluster:<cluster>}, or the pool's owner tag does not match the namespace of the service that is sending the allocation request.
Note the following:
  • The pool specified by ncp/snat_pool should already exist in NSX-T Data Center before the service is configured.
  • In NSX-T Data Center, the priority of the SNAT rule for the service is higher than that for the project.
  • If a pod is configured with multiple SNAT rules, only one will work.
  • You can change to a different IP pool by changing the annotation and restarting NCP.

Configuring an SNAT IP Pool for a Namespace

You can configure an SNAT IP pool for a namespace by adding an annotation to the namespace. For example,
    apiVersion: v1
    kind: Namespace
    metadata:
      name: ns-sample
      annotations:
        ncp/snat_pool: <external IP pool ID or name>
    ...
NCP will configure the SNAT rule for this namespace. The rule's source IP is the set of backend pods, and the translated IP is the SNAT IP allocated from the specified external IP pool. If an error occurs when NCP configures the SNAT rule, the namespace will be annotated with ncp/error.snat: <error>. The possible errors are:
  • IP_POOL_NOT_FOUND - The SNAT IP pool is not found in NSX Manager.
  • IP_POOL_EXHAUSTED - The SNAT IP pool is exhausted.
  • IP_POOL_NOT_UNIQUE - The pool specified by ncp/snat_pool refers to multiple pools in NSX Manager.
  • POOL_ACCESS_DENIED - The IP pool specified by ncp/snat_pool does not have the tag {"ncp/owner": cluster:<cluster>}, or the pool's owner tag does not match the namespace that is sending the allocation request.
Note the following:
  • You can specify only one SNAT IP pool in the annotation.
  • The SNAT IP pool does not need to be configured in ncp.ini.
  • The IP pool specified by ncp/snat_pool must have the tag {"ncp/owner": cluster:<cluster>}.
  • The IP pool specified by ncp/snat_pool can also have a namespace tag {"ncp/owner": ns:<namespace_UUID>}.
  • If the ncp/snat_pool annotation is missing, the namespace will use the SNAT IP pool for the cluster.
  • You can change to a different IP pool by changing the annotation and restarting NCP.

Configuring an SNAT Pool for a PAS App

By default, NCP configures an SNAT IP for a PAS (Pivotal Application Service) org. You can configure an SNAT IP for an app by creating the app with a manifest.yml that contains the SNAT IP pool information. For example,
    applications:
      - name: frontend
        memory: 32M
        disk_quota: 32M
        buildpack: go_buildpack
        env:
          GOPACKAGENAME: example-apps/cats-and-dogs/frontend
          NCP_SNAT_POOL: <external IP pool ID or name>
    ...
NCP will configure the SNAT rule for this app. The rule's source IP is the set of the app instances' IPs, and the translated IP is the SNAT IP allocated from the external IP pool. Note the following:
  • The pool specified by NCP_SNAT_POOL should already exist in NSX-T Data Center before the app is pushed.
  • The priority of SNAT rule for an app is higher than that for an org.
  • An app can be configured with only one SNAT IP.
  • You can change to a different IP pool by changing the configuration and restarting NCP.

Configuring SNAT for PCF version 3

With PCF version 3, you can configure SNAT in one of two ways:

  • Configure NCP_SNAT_POOL in manifest.yml when creating the app.
    For example, the app is called bread and the manifest.yml has the following information:
    applications:
    - name: bread
      stack: cflinuxfs2
      random-route: true
      env:
        NCP_SNAT_POOL: AppSnatExternalIppool
      processes:
      - type: web
        disk_quota: 1024M
        instances: 2
        memory: 512M
        health-check-type: port
      - type: worker
        disk_quota: 1024M
        health-check-type: process
        instances: 2
        memory: 256M
        timeout: 15
    Run the following commands:
    cf v3-push bread
    cf v3-apply-manifest -f manifest.yml
    cf v3-apps
    cf v3-restart bread
  • Configure NCP_SNAT_POOL using the cf v3-set-env command.
    Run the following commands (assuming the app is called app3):
    cf v3-set-env app3 NCP_SNAT_POOL AppSnatExternalIppool
    (optional) cf v3-stage app3 --package-guid <package-guid> (You can get package-guid with "cf v3-packages app3".)
    cf v3-restart app3

(Optional) (For Kubernetes only) Firewall Marker Sections

To allow the administrator to create firewall rules and not have them interfere with NCP-created firewall sections based on network policies, log in to NSX Manager, navigate to Security > Distributed Firewall > General and create two firewall sections.

Specify marker firewall sections by setting the bottom_firewall_section_marker and top_firewall_section_marker options in the [nsx_v3] section of ncp.ini.

The bottom firewall section must be below the top firewall section. With these firewall sections created, all firewall sections created by NCP for isolation will be created above the bottom firewall section, and all firewall sections created by NCP for policy will be created below the top firewall section. If these marker sections are not created, all isolation rules will be created at the bottom, and all policy sections will be created at the top. Multiple marker firewall sections with the same value per cluster are not supported and will cause an error.
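
For example, a sketch of the corresponding ncp.ini entries, with placeholder values for the two marker sections:
    [nsx_v3]
    top_firewall_section_marker = <UUID or name of the top marker section>
    bottom_firewall_section_marker = <UUID or name of the bottom marker section>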