Restricting an SNAT IP Pool to Specific Kubernetes Namespaces or TAS Orgs
To restrict an SNAT IP pool to specific namespaces or orgs, add one of the following tags to the pool:
- For a Kubernetes namespace:
scope: ncp/owner, tag: ns:<namespace_UUID>
- For a TAS org:
scope: ncp/owner, tag: org:<org_UUID>
To find the UUID of a namespace or org, run one of the following commands:
kubectl get ns -o yaml
cf org <org_name> --guid
- Each tag should specify one UUID. You can create multiple tags for the same pool.
- If you change the tags after some namespaces or orgs have been allocated IPs based on the old tags, those IPs will not be reclaimed until the SNAT configurations of the Kubernetes services or TAS apps change or NCP restarts.
- The namespace and TAS org owner tags are optional. Without these tags, any namespace or TAS org can have IPs allocated from the SNAT IP pool.
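To illustrate, the following sketch adds a cluster owner tag and a namespace owner tag to an existing pool through the NSX Policy API; you can add the same tags in the NSX Manager UI instead. The manager address, credentials, pool ID, cluster name, and namespace UUID are placeholders, and the request body replaces the pool's existing tags, so include any tags you want to keep.
# Sketch: restrict an existing IP pool to one namespace of one cluster.
curl -k -u 'admin:<password>' -X PATCH \
  "https://<nsx-manager>/policy/api/v1/infra/ip-pools/<pool-id>" \
  -H 'Content-Type: application/json' \
  -d '{
        "tags": [
          {"scope": "ncp/owner", "tag": "cluster:<cluster_name>"},
          {"scope": "ncp/owner", "tag": "ns:<namespace_UUID>"}
        ]
      }'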
Configuring an SNAT IP Pool for a Service
To configure an SNAT IP pool for a service, add the ncp/snat_pool annotation to the service. For example:
apiVersion: v1
kind: Service
metadata:
  name: svc-example
  annotations:
    ncp/snat_pool: <external IP pool ID or name>
spec:
  selector:
    app: example
  ...
The IP pool specified by ncp/snat_pool must have the tag scope: ncp/owner, tag: cluster:<cluster_name>.
If an error occurs, NCP logs one of the following error codes:
- IP_POOL_NOT_FOUND - The SNAT IP pool is not found in NSX Manager.
- IP_POOL_EXHAUSTED - The SNAT IP pool is exhausted.
- IP_POOL_NOT_UNIQUE - The pool specified by ncp/snat_pool refers to multiple pools in NSX Manager.
- SNAT_RULE_OVERLAPPED - A new SNAT rule is created, but the SNAT service's pod also belongs to another SNAT service, that is, there are multiple SNAT rules for the same pod.
- POOL_ACCESS_DENIED - The IP pool specified by ncp/snat_pool does not have the tag scope: ncp/owner, tag: cluster:<cluster_name>, or the pool's owner tag does not match the namespace of the service that is sending the allocation request. After you fix the error, you must restart NCP, or remove the ncp/snat_pool annotation and add it again.
- The pool specified by ncp/snat_pool should already exist in NSX before the service is configured.
- In NSX, the priority of the SNAT rule for the service is higher than that for the project.
- If a pod is configured with multiple SNAT rules, only one will work.
- You can change to a different IP pool by changing the annotation and restarting NCP.
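For example, assuming NCP runs as the nsx-ncp deployment in the nsx-system namespace (adjust both names to your installation), a pool change might look like this sketch:
# Sketch: point the service at a different SNAT IP pool, then restart NCP.
kubectl annotate service svc-example ncp/snat_pool=<other external IP pool ID or name> --overwrite
kubectl -n nsx-system rollout restart deployment nsx-ncp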
Configuring an SNAT IP Pool for a Namespace
To configure an SNAT IP pool for a namespace, add the ncp/snat_pool annotation to the namespace. For example:
apiVersion: v1
kind: Namespace
metadata:
  name: ns-sample
  annotations:
    ncp/snat_pool: <external IP pool ID or name>
  ...
If an error occurs, NCP logs one of the following error codes:
- IP_POOL_NOT_FOUND - The SNAT IP pool is not found in NSX Manager.
- IP_POOL_EXHAUSTED - The SNAT IP pool is exhausted.
- IP_POOL_NOT_UNIQUE - The pool specified by ncp/snat_pool refers to multiple pools in NSX Manager.
- POOL_ACCESS_DENIED - The IP pool specified by ncp/snat_pool does not have the tag scope: ncp/owner, tag: cluster:<cluster_name>, or the pool's owner tag does not match the namespace that is sending the allocation request. After you fix the error, you must restart NCP, or remove the ncp/snat_pool annotation and add it again.
- You can specify only one SNAT IP pool in the annotation.
- The SNAT IP pool does not need to be configured in ncp.ini.
- The IP pool specified by ncp/snat_pool must have the tag scope: ncp/owner, tag: cluster:<cluster_name>.
- The IP pool specified by ncp/snat_pool can also have a namespace tag scope: ncp/owner, tag: ns:<namespace_UUID>.
- If the ncp/snat_pool annotation is missing, the namespace will use the SNAT IP pool for the cluster.
- You can change to a different IP pool by changing the annotation and restarting NCP.
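For example, reusing the ns-sample namespace from the example above (the pool name is a placeholder), followed by an NCP restart:
# Sketch: move the namespace to a different SNAT IP pool.
kubectl annotate namespace ns-sample ncp/snat_pool=<other external IP pool ID or name> --overwrite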
Configuring an SNAT IP Address for a Service
To configure a specific SNAT IP address for a service, add the ncp/static_snat_ip annotation to the service. For example:
apiVersion: v1
kind: Service
metadata:
  name: svc-example
  annotations:
    ncp/static_snat_ip: "1.2.3.4"
  ...
The following codes indicate the result of the allocation:
- IP_ALLOCATED_SUCCESSFULLY - The IP address was allocated successfully.
- IP_ALREADY_ALLOCATED - The IP address has already been allocated.
- IP_NOT_IN_POOL - The IP address is not in the SNAT IP Pool.
- IP_POOL_EXHAUSTED - The SNAT IP Pool is exhausted.
- SNAT_PROCESS_FAILED - An unknown error occurred.
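As a sketch for finding these codes, assuming NCP runs as the nsx-ncp deployment in the nsx-system namespace (adjust to your installation):
# Sketch: search the NCP log for the SNAT status codes listed above.
kubectl -n nsx-system logs deployment/nsx-ncp | grep -E 'IP_ALLOCATED_SUCCESSFULLY|IP_ALREADY_ALLOCATED|IP_NOT_IN_POOL|IP_POOL_EXHAUSTED|SNAT_PROCESS_FAILED'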
Configuring an SNAT IP Address for a Namespace
To configure a specific SNAT IP address for a namespace, add the ncp/static_snat_ip annotation to the namespace. For example:
apiVersion: v1
kind: Namespace
metadata:
  name: ns-sample
  annotations:
    ncp/static_snat_ip: "1.2.3.4"
  ...
The following codes indicate the result of the allocation:
- IP_ALLOCATED_SUCCESSFULLY - The IP address was allocated successfully.
- IP_ALREADY_ALLOCATED - The IP address has already been allocated.
- IP_NOT_IN_POOL - The IP address is not in the SNAT IP Pool.
- IP_NOT_REALIZED - An error occurred in NSX.
- IP_POOL_EXHAUSTED - The SNAT IP Pool is exhausted.
- SNAT_PROCESS_FAILED - An unknown error occurred.
Configuring an SNAT Pool for a TAS App
To configure an SNAT IP pool for a TAS app, set the NCP_SNAT_POOL environment variable in the app's manifest. For example:
applications:
- name: frontend
  memory: 32M
  disk_quota: 32M
  buildpack: go_buildpack
  env:
    GOPACKAGENAME: example-apps/cats-and-dogs/frontend
    NCP_SNAT_POOL: <external IP pool ID or name>
  ...
- The pool specified by NCP_SNAT_POOL should already exist in NSX before the app is pushed.
- The priority of the SNAT rule for an app is higher than that for an org.
- An app can be configured with only one SNAT IP.
- You can change to a different IP pool by changing the configuration and pushing the app again.
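For example, one way to do this without editing manifest.yml is to change the environment variable and restage; the app and pool names here are placeholders:
# Sketch: switch the app to a different SNAT IP pool and restage it.
cf set-env frontend NCP_SNAT_POOL <other external IP pool ID or name>
cf restage frontend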
Configuring SNAT for TAS version 3
With TAS version 3, you can configure SNAT in one of two ways:
- Configure NCP_SNAT_POOL in manifest.yml when creating the app.
For example, suppose the app is called bread and manifest.yml contains the following:
applications:
- name: bread
  stack: cflinuxfs2
  random-route: true
  env:
    NCP_SNAT_POOL: AppSnatExternalIppool
  processes:
  - type: web
    disk_quota: 1024M
    instances: 2
    memory: 512M
    health-check-type: port
  - type: worker
    disk_quota: 1024M
    health-check-type: process
    instances: 2
    memory: 256M
    timeout: 15
Run the following commands:
cf v3-push bread
cf v3-apply-manifest -f manifest.yml
cf v3-apps
cf v3-restart bread
- Configure NCP_SNAT_POOL using the cf v3-set-env command.
Run the following commands (assuming the app is called app3):
cf v3-set-env app3 NCP_SNAT_POOL AppSnatExternalIppool
(optional) cf v3-stage app3 --package-guid <package-guid> (You can get the package GUID with "cf v3-packages app3".)
cf v3-restart app3
Configuring an SNAT IP Pool or IP Address for a TAS Org
To configure an SNAT IP pool or IP address for a TAS org, add one or both of the following annotations to the org:
- ncp_snat_pool - The pool must exist and have the tag scope: ncp/owner, tag: cluster:<cluster_name>.
- ncp_snat_ip - A specific address in an IP pool.
- If both ncp_snat_pool and ncp_snat_ip are specified, the SNAT IP address must be in the specified SNAT IP pool.
- If only ncp_snat_ip is specified, the SNAT IP address must be in the external IP pool specified in ncp.ini.
- If only ncp_snat_pool is specified, the SNAT IP address will be allocated from the specified pool.
For example, the following command sets both annotations on an org:
cf curl /v3/organizations/<org-guid> -X PATCH -d '{"metadata": {"annotations": {"ncp_snat_pool": "ann-ip-pool", "ncp_snat_ip": "1.2.3.4"}}}'
To get the org's GUID, run:
cf org <org-name> --guid
To remove an annotation, set its value to null. For example:
cf curl /v3/organizations/<org-guid> -X PATCH -d '{"metadata": {"annotations": {"ncp_snat_ip": null}}}'
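To confirm which annotations are currently set, you can read the org back; this sketch simply filters the JSON response for the metadata block:
# Sketch: inspect the org's annotations.
cf curl /v3/organizations/<org-guid> | grep -A 6 '"annotations"'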
You can go to the NSX Manager UI to check if the SNAT rule is successfully created. To check for errors, look at the NCP logs.
If you see the POOL_ACCESS_DENIED error in the NCP log, it means that the IP pool specified by ncp_snat_pool does not have the tag scope: ncp/owner, tag: cluster:<cluster_name>, or the pool's owner tag does not match the org that is sending the allocation request. After you fix the error, you must restart NCP, or remove the ncp_snat_pool annotation and add it again.
SNAT, Container Networks and Gateway Firewall
The following information is applicable to NCP 4.1.2.2 and later.
If SNAT is enabled for Container networks, NCP creates SNAT rules on the top-tier router (also called foundation tier-0 in TAS) in both Manager and Policy modes. In TAS, this is controlled by the Enable SNAT for Container Networks configuration option in the VMware NSX-T tile. In Kubernetes, it is controlled by the configuration option ncp.coe.enable_snat. In TKGI, it is always enabled.
If Gateway Firewall rules are configured on the top tier-0 router, SNAT traffic on the top tier-0 router can be impacted depending on the value of the firewall_match property of the SNAT rule. A key difference between the SNAT rules in Manager and Policy mode is the default value of the property firewall_match in a NAT rule. In Policy mode, the default value is MATCH_INTERNAL_ADDRESS. In Manager mode it is BYPASS. If NCP uses any value other than BYPASS for this option, the Gateway Firewall rules defined on the top tier-0 router will be enforced on NCP-created SNAT rules as well. This implies that the traffic might be dropped if there is no rule to explicitly allow egress traffic from the container range.
The firewall_match property can have the following values:
- BYPASS - No change compared to Manager mode. Gateway Firewall is not enforced for traffic that goes through SNAT.
- MATCH_INTERNAL_ADDRESS - Default setting. Gateway Firewall is enforced and will match traffic on source addresses before SNAT. You must ensure that rules are in place to allow traffic coming from the container range (see the sketch after this list).
- MATCH_EXTERNAL_ADDRESS - Gateway Firewall is enforced and will match traffic on source addresses after SNAT. You must ensure that rules are in place to allow traffic coming from the SNAT range, that is, the external IP pools configured for NCP.
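The following is a rough sketch only, not taken from the product documentation: with MATCH_INTERNAL_ADDRESS, an allow rule for the container CIDR can be created on the top tier-0 gateway, either in the NSX Manager UI or through the NSX Policy API as shown here. The manager address, credentials, policy and rule IDs, container CIDR, and tier-0 path are all placeholders.
# Sketch: allow egress from the container CIDR on the tier-0 Gateway Firewall so that
# SNAT traffic matched on its internal (pre-SNAT) source address is not dropped.
curl -k -u 'admin:<password>' -X PATCH \
  "https://<nsx-manager>/policy/api/v1/infra/domains/default/gateway-policies/allow-container-egress" \
  -H 'Content-Type: application/json' \
  -d '{
        "category": "LocalGatewayRules",
        "rules": [
          {
            "id": "allow-container-cidr",
            "display_name": "allow-container-cidr",
            "source_groups": ["<container-cidr>"],
            "destination_groups": ["ANY"],
            "services": ["ANY"],
            "scope": ["/infra/tier-0s/<tier0-id>"],
            "action": "ALLOW"
          }
        ]
      }'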
Note that NCP does not update the existing SNAT rules if the value of this configuration option is updated. However, it will update this property if the SNAT rule is updated due to any other reason.
- In TKGI, this property can be configured via the cluster network profile.
- This property is more relevant to dual-tier topologies. For single-tier topologies you should not create Gateway Firewall rules on cluster tier-1 gateways (even though TKGI does not explicitly state that this is not supported).
- In TKGI, the logic is the same with some small differences:
- The "Top Tier" router for a cluster will be a tier-1 router for single-tier topologies, and a tier-0 for dual-tier topologies.
- No SNAT rule is created if the namespace is annotated with ncp/no_snat: true.
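For example, a namespace can be opted out of SNAT like this (the namespace name is a placeholder):
# Sketch: tell NCP not to create an SNAT rule for this namespace.
kubectl annotate namespace ns-sample ncp/no_snat=true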