This topic describes how to install and configure Pivotal Container Service (PKS) on vSphere with NSX-T integration.

Deployment Architecture

The instructions in this topic deploy NSX-T with PKS using the Network Address Translation (NAT) topology. The following figure shows the NAT deployment architecture:

NSX & PKS Overview


This topology has the following characteristics:

  • The BOSH Director, Ops Manager, and the PKS service instance are all located on a logical switch NAT'd behind a T1 logical router.
  • All Kubernetes cluster nodes are located on a logical switch NAT'd behind a T1 logical router. This will require NAT rules to allow access to Kubernetes APIs.

Before You Install

Before performing the procedures in this topic, note the following:

Note: When using NSX-T 2.1, creating namespaces with names longer than 40 characters may result in truncated or hashed names.

Step 1: Pre-allocate Network Subnets

Determine and pre-allocate the following network CIDRs in the IPv4 address space according to the instructions in the NSX-T documentation. Ensure that the CIDRs are routable in your environment.

  • VTEP CIDR(s): One or more of these networks will host your GENEVE Tunnel Endpoints on your NSX Transport Nodes. Size the network(s) to support all of your expected Host and Edge Transport Nodes. For example, a CIDR of 192.168.1.0/24 will provide 254 usable IPs. This will be used when creating the ip-pool-vteps in Step 3.
  • PKS MANAGEMENT CIDR: This small network will be used for NAT access to PKS management components such as Ops Manager and the PKS Service VM. For example, a CIDR of 10.172.1.0/28 will provide 14 usable IPs.
  • PKS LB CIDR: This network provides load balancing address space for each Kubernetes cluster created by PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, 10.172.2.0/25 provides 126 usable IPs. This network is used when creating the ip-pool-vips described in 3.1: Create NSX Network Objects.
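
For reference, the examples above can be collected in one place. The following sketch records them as shell variables for use in later commands; the values are the illustrative CIDRs from this topic, not requirements for your environment:

    # Example network allocations (adjust to match your environment)
    VTEP_CIDR="192.168.1.0/24"       # GENEVE tunnel endpoints (ip-pool-vteps)
    PKS_MGMT_CIDR="10.172.1.0/28"    # NAT access to Ops Manager and the PKS Service VM
    PKS_LB_CIDR="10.172.2.0/25"      # Load balancer VIPs and Kubernetes API access (ip-pool-vips)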

Refer to the instructions in the NSX-T documentation to ensure that your network topology enables the following communications:

  • vCenter, NSX-T components, and ESXi hosts must be able to communicate with each other.
  • The Ops Manager Director VM must be able to communicate with vCenter and the NSX Manager.
  • The Ops Manager Director VM must be able to communicate with all nodes in all Kubernetes clusters.
  • Each Kubernetes cluster deployed by PKS runs an NSX-T Container Plug-in (NCP) pod that must be able to communicate with the NSX Manager.

Step 2: Deploy NSX-T

Deploy NSX-T according to the instructions in the NSX-T documentation.

Note: In general, accept default settings unless instructed otherwise.

  1. Deploy the NSX Manager. For more information, see NSX Manager Installation.
  2. Deploy NSX Controllers. For more information, see NSX Controller Installation and Clustering.
  3. Join the NSX Controllers to the NSX Manager. For more information, see Join NSX Controllers with the NSX Manager.
  4. Initialize the Control Cluster. For more information, see Initialize the Control Cluster to Create a Control Cluster Master.
  5. Add your ESXi host(s) to the NSX-T Fabric. For more information, see Add a Hypervisor Host to the NSX-T Fabric. Each host must have at least one free NIC (vmnic) that is not already in use by other vSwitches on the ESXi host, for use with NSX Host Transport Nodes.
  6. Deploy NSX Edge VMs. Pivotal recommends at least two VMs. For more information, see NSX Edge Installation. Each NSX Edge VM requires 8 vCPUs, 16 GB of RAM, and 120 GB of storage of free resources in your vSphere environment. When deploying, you must connect the vNICs of the NSX Edge VMs to an appropriate PortGroup for your environment by completing the following steps:
    1. Connect the first Edge interface to your environment's PortGroup/VLAN where your Edge Management IP can route and communicate with the NSX Manager.
    2. Connect the second Edge interface to your environment's PortGroup/VLAN where your GENEVE VTEPs can route and communicate with each other. Your VTEP CIDR should be routable to this PortGroup.
    3. Connect the third Edge interface to your environment's PortGroup/VLAN where your T0 uplink interface will be located. Your PKS MANAGEMENT CIDR and PKS LB CIDR should be routable to this PortGroup.
    4. Join the NSX Edge VMs to the NSX-T Fabric. For more information, see Join NSX Edge with the Management Plane.
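
Before continuing, you can spot-check the deployment from the NSX-T command line. The following commands are a sketch based on the NSX-T 2.x nsxcli; exact command names and output vary by version, so treat them as a starting point rather than an exact procedure:

    nsx-manager> get management-cluster status      # run on the NSX Manager; the management and control clusters should be stable
    nsx-controller> get control-cluster status      # run on an NSX Controller
    nsx-edge-n> get managers                         # run on an Edge or host transport node; the manager connection should be up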

Step 3: Create the NSX-T Objects Required for PKS

Create the NSX-T objects (network objects, logical switches, NSX Edge, and logical routers) needed for PKS deployment according to the instructions in the NSX-T documentation.

3.1: Create NSX Network Objects

  1. Create two NSX IP pools. For more information, see Create an IP Pool for Tunnel Endpoint IP Addresses. Configuration details for the NSX IP pools follow:
    • One NSX IP pool for GENEVE Tunnel Endpoints ip-pool-vteps, within the usable range of the VTEP CIDR created in Step 1, to be used with NSX Transport Nodes that you create later in this section
    • One NSX IP pool for NSX Load Balancing VIPs ip-pool-vips, within the usable range of the PKS LB CIDR created in Step 1, to be used with the T0 Logical Router that you create later in this section
  2. Create two NSX Transport Zones (TZs). For more information, see Create Transport Zones. Configuration details for the NSX TZs follow:
    • One NSX TZ for PKS control plane Services and Kubernetes Cluster deployment overlay network(s) called tz-overlay and the associated N-VDS hs-overlay. Select Standard.
    • One NSX TZ for NSX Edge uplinks (ingress/egress) for PKS Kubernetes cluster(s) called tz-vlan and the associated N-VDS hs-vlan. Select Standard.
  3. If the default uplink profile is not applicable in your deployment, create your own NSX uplink host profile. For more information, see Create an Uplink Profile.
  4. Create NSX Host Transport Nodes. For more information, see Create a Host Transport Node. Configuration details follow:

    • For each host in the NSX-T Fabric, create a node named tnode-host-NUMBER. For example, if you have three hosts in the NSX-T Fabric, create three nodes named tnode-host-1, tnode-host-2, and tnode-host-3.
    • Add the tz-overlay NSX Transport Zone to each NSX Host Transport Node.

      Note: The Transport Nodes must be placed on free host NICs that are not already used by other vSwitches on the ESXi host. Assign the ip-pool-vteps IP pool so that the Transport Nodes can route and communicate with each other, as well as with the Edge Transport Nodes, to build GENEVE tunnels.

  5. Create an NSX IP Block named ip-block-pks-deployments (for more information, see Manage IP Blocks). The NSX-T Container Plug-in (NCP) and PKS will use this IP Block to assign address space to Kubernetes pods through the Container Networking Interface (CNI). Pivotal recommends using the CIDR block 172.16.0.0/16.
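
If you prefer to create the IP Block through the NSX-T Manager API rather than the UI, a call like the following sketch can work. It assumes the NSX-T 2.x /api/v1/pools/ip-blocks endpoint and the recommended CIDR above; NSX-MANAGER-IP and the credentials are placeholders for your environment:

    $ curl -k -u "admin:NSX-PASSWORD" \
        -H "Content-Type: application/json" \
        -X POST https://NSX-MANAGER-IP/api/v1/pools/ip-blocks \
        -d '{"display_name": "ip-block-pks-deployments", "cidr": "172.16.0.0/16"}'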

3.2: Create Logical Switches

  1. Create the following NSX Logical Switches. For more information, see Create a Logical Switch. Configuration details for the Logical Switches follow:
    • One for T0 ingress/egress uplink port ls-pks-uplink
    • One for the PKS Management Network ls-pks-mgmt
    • One for the PKS Service Network ls-pks-service
  2. Attach your first NSX Logical Switch to the tz-vlan NSX Transport Zone.
  3. Attach your second and third NSX Logical Switches to the tz-overlay NSX Transport Zone.

3.3: Create NSX Edge Objects

  1. Create NSX Edge Transport Node(s). For more information, see Create an NSX Edge Transport Node.
  2. Add both the tz-vlan and tz-overlay NSX Transport Zones to the NSX Edge Transport Node(s). Controller Connectivity and Manager Connectivity should be UP. One way to check this is sketched after this list.
  3. Refer to the MAC addresses of the Edge VM interfaces you deployed to deploy your virtual NSX Edge(s):
    1. Connect the hs-overlay N-VDS to the vNIC (fp-eth#) that matches the MAC address of the second NIC from your deployed Edge VM.
    2. Connect the hs-vlan N-VDS to the vNIC (fp-eth#) that matches the MAC address of the third NIC from your deployed Edge VM.
  4. Create an NSX Edge cluster called edge-cluster-pks. For more information, see Create an NSX Edge Cluster.
  5. Add the NSX Edge Transport Node(s) to the cluster.
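
One way to confirm that the Edge Transport Node(s) registered correctly, as mentioned in the list above, is to query the NSX-T Manager API. The following sketch assumes the NSX-T 2.x /api/v1/transport-nodes endpoints and requires jq; NSX-MANAGER-IP, the credentials, and EDGE-TRANSPORT-NODE-UUID are placeholders:

    # List transport nodes and their UUIDs
    $ curl -sk -u "admin:NSX-PASSWORD" https://NSX-MANAGER-IP/api/v1/transport-nodes \
        | jq -r '.results[] | "\(.display_name)  \(.id)"'

    # Inspect one Edge Transport Node; look for healthy manager and controller connectivity
    $ curl -sk -u "admin:NSX-PASSWORD" \
        https://NSX-MANAGER-IP/api/v1/transport-nodes/EDGE-TRANSPORT-NODE-UUID/status | jq .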

3.4: Create Logical Routers

Create T0 Logical Router for PKS

  1. Create a Tier-0 (T0) logical router named t0-pks. See Create a Tier-0 Logical Router for more information. Configuration details follow:

    • Select edge-cluster-pks for the cluster.
    • Set High Availability Mode to Active-Standby. NCP applies NAT rules on the T0 router; if the mode is not set to Active-Standby, the router does not support NAT rule configuration.
  2. Attach the T0 logical router to the ls-pks-uplink logical switch you created previously. For more information, see Connect a Tier-0 Logical Router to a VLAN Logical Switch. Create a logical router port for ls-pks-uplink and assign an IP address and CIDR that your environment will use to route to all PKS assigned IP pools and IP blocks.

  3. Configure T0 routing to the rest of your environment using the appropriate routing protocol, or using static routes. For more information, see Tier-0 Logical Router. The CIDR used in ip-pool-vips must route to the IP address you just assigned to the T0 uplink interface.
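
If you use static routes, the upstream router needs routes for the PKS networks that point at the T0 uplink IP address. The following lines are an illustrative sketch only, using a Linux host as the upstream router and the example CIDRs from Step 1; your routing platform and addresses will differ:

    # Route the PKS LB and PKS MANAGEMENT networks toward the T0 uplink (T0-UPLINK-IP is a placeholder)
    $ sudo ip route add 10.172.2.0/25 via T0-UPLINK-IP
    $ sudo ip route add 10.172.1.0/28 via T0-UPLINK-IP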

(Optional) Configure NSX-Edge for High Availability (HA)

You can configure NSX Edge for high availability (HA) using Active/Standby mode to support failover, as shown in the following figure.

NSX Edge High Availability

To configure NSX Edge for HA, complete the following steps:

Note: All IP addresses must belong to the same subnet.

Step 1: On the T0 router, create a second uplink attached to the second Edge transport node:

Setting           First Uplink       Second Uplink
IP Address/Mask   uplink_1_ip        uplink_2_ip
URPF Mode         None (optional)    None (optional)
Transport Node    edge-TN1           edge-TN2
LS                uplink-LS1         uplink-LS1

Step 2: On the T0 router, create the HA VIP:

Setting        HA VIP
VIP address    [ha_vip_ip]
Uplink ports   uplink-1 and uplink-2

The HA VIP will become the official IP for the T0 router uplink. External router devices peering with the T0 router must use this IP address.

Step 3: On the physical router, configure the next hop to point to the HA VIP address.

Step 4: You can verify your setup by running the following commands:

        nsx-edge-n> get high-availability channels
        nsx-edge-n> get high-availability channels stats
        nsx-edge-n> get logical-router
        nsx-edge-n> get logical-router ROUTER-UUID high-availability status

Create T1 Logical Router for PKS Management VMs

  1. Create a Tier-1 (T1) logical router for PKS management VMs named t1-pks-mgmt. For more information, see Create a Tier-1 Logical Router. Configuration details follow:
    • Link to the t0-pks logical router you created in a previous step.
    • Select edge-cluster-pks for the cluster.
  2. Create a logical router port for ls-pks-mgmt and assign the following CIDR block: 172.31.0.1/24. For more information, see Connect a Tier-0 Logical Router to a VLAN Logical Switch.
  3. Configure route advertisement on the T1 as follows. For more information, see Configure Route Advertisement on a Tier-1 Logical Router. Configuration details follow:
    • Enable Status.
    • Enable Advertise All NSX Connected Routes.
    • Enable Advertise All NAT Routes.
    • Enable Advertise All LB VIP Routes.
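
Route advertisement can also be set through the NSX-T API. The following sketch assumes the NSX-T 2.x /api/v1/logical-routers/ROUTER-UUID/routing/advertisement endpoint; the PUT requires the current _revision value, so read the object first, and verify the field names against the API reference for your NSX-T version:

    # Read the current advertisement configuration and note its _revision value
    $ curl -sk -u "admin:NSX-PASSWORD" \
        https://NSX-MANAGER-IP/api/v1/logical-routers/T1-MGMT-UUID/routing/advertisement

    # Enable advertisement of connected, NAT, and LB VIP routes
    $ curl -k -u "admin:NSX-PASSWORD" -H "Content-Type: application/json" \
        -X PUT https://NSX-MANAGER-IP/api/v1/logical-routers/T1-MGMT-UUID/routing/advertisement \
        -d '{"resource_type": "AdvertisementConfig", "enabled": true,
             "advertise_nsx_connected_routes": true, "advertise_nat_routes": true,
             "advertise_lb_vip": true, "_revision": CURRENT-REVISION}'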

Configure NAT Rules for PKS Management VMs

Create the following NAT rules for the Mgmt T1. For more information, see Tier-1 NAT. Configuration details follow:

Type     For
NO_NAT   Mgmt Net <-> Service Net
DNAT     External -> Ops Manager
DNAT     External -> Pivotal Container Service
SNAT     Ops Manager & BOSH Director -> DNS
SNAT     Ops Manager & BOSH Director -> NTP
SNAT     Ops Manager & BOSH Director -> vCenter
SNAT     Ops Manager & BOSH Director -> ESXi
SNAT     Ops Manager & BOSH Director -> NSX-T Manager

The DNAT rule on the T1 maps an external IP address from the PKS MANAGEMENT CIDR to the IP address where you deploy Ops Manager on the ls-pks-mgmt logical switch. For example, a DNAT rule might map 10.172.1.2 to 172.31.0.2, where 172.31.0.2 is the IP address you assign to Ops Manager when it is connected to ls-pks-mgmt. Later, you create another DNAT rule to map an external IP address from the PKS MANAGEMENT CIDR to the PKS endpoint.

The SNAT rules on the T1 allow the PKS Management VMs to communicate with your vCenter and NSX Manager environments. For example, an SNAT rule might map 172.31.0.0/24 to 10.172.1.1, where 10.172.1.1 is a routable IP address from your PKS MANAGEMENT CIDR.

Note: Ops Manager and the BOSH Director upload stemcells directly to the ESXi hosts using the NFC protocol, which is why the Ops Manager & BOSH Director -> ESXi SNAT rule is required.

Note: Limit the Destination CIDR for the SNAT rules to the subnet(s) that contain your vCenter and NSX Manager IP addresses.
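
If you want to script these rules rather than create them in the NSX-T UI, you can use the NAT API on the T1 logical router. The following curl calls are a sketch only, based on the NSX-T 2.x /api/v1/logical-routers/ROUTER-UUID/nat/rules endpoint and the example addresses above; T1-MGMT-UUID, NSX-MANAGER-IP, the credentials, and VCENTER-NSX-SUBNET-CIDR are placeholders for your environment:

    # DNAT: external Ops Manager IP -> Ops Manager on ls-pks-mgmt
    $ curl -k -u "admin:NSX-PASSWORD" -H "Content-Type: application/json" \
        -X POST https://NSX-MANAGER-IP/api/v1/logical-routers/T1-MGMT-UUID/nat/rules \
        -d '{"action": "DNAT", "match_destination_network": "10.172.1.2/32",
             "translated_network": "172.31.0.2", "enabled": true}'

    # SNAT: PKS management network -> the subnet that contains vCenter and the NSX Manager
    $ curl -k -u "admin:NSX-PASSWORD" -H "Content-Type: application/json" \
        -X POST https://NSX-MANAGER-IP/api/v1/logical-routers/T1-MGMT-UUID/nat/rules \
        -d '{"action": "SNAT", "match_source_network": "172.31.0.0/24",
             "match_destination_network": "VCENTER-NSX-SUBNET-CIDR",
             "translated_network": "10.172.1.1", "enabled": true}'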

Create T1 Logical Router for PKS Service VMs

  1. Create a Tier-1 (T1) logical router for PKS Service VMs named t1-pks-service. For more information, see Create a Tier-1 Logical Router. Configuration details follow:
    • Link to the t0-pks logical router you created in a previous step.
    • Select edge-cluster-pks for the cluster.
  2. Create a logical router port for ls-pks-service and assign the following CIDR block: 172.31.2.1/23. For more information, see Connect a Tier-0 Logical Router to a VLAN Logical Switch.
  3. Configure route advertisement on the T1 as follows. For more information, see Configure Route Advertisement on a Tier-1 Logical Router. Configuration details follow:
    • Enable Advertise All NSX Connected Routes.
    • Enable Advertise All NAT Routes.
    • Enable Advertise All LB VIP Routes.

Configure NAT Rules for PKS Service VMs

Create the following NAT rules for the Service T1. For more information, see Tier-1 NAT. Configuration details follow:

Type     For
NO_NAT   Mgmt Net <-> Service Net
SNAT     K8s Workers -> External Registries (for example, DockerHub)
SNAT     K8s Workers -> DNS
SNAT     K8s Workers -> NTP
SNAT     K8s Workers -> NSX-T Manager (NCP)
SNAT     K8s Workers -> vCenter (vSphere Cloud Provider)
SNAT     K8s Workers -> External Service Endpoints for Workloads

The SNAT rules allow the Kubernetes cluster VMs, including the NCP pod running on each cluster, to communicate with your NSX Manager. For example, an SNAT rule might map 172.31.2.0/23 to 10.172.1.3, where 10.172.1.3 is a routable IP address from your PKS MANAGEMENT CIDR.

Note: Limit the Destination CIDR for the SNAT rules to the subnet(s) that contain your vCenter and NSX Manager IP addresses.

Step 4: Deploy Ops Manager

Complete the procedures in Deploying Ops Manager to vSphere.

Step 5: Configure Ops Manager

Perform the following steps to configure Ops Manager for the NSX logical switches:

  1. Complete the procedures in Configuring Ops Manager on vSphere.

    Note: If you have Pivotal Application Service (PAS) installed, Pivotal recommends installing PKS on a separate instance of Ops Manager v2.0.

    • On the vCenter Config page, select Standard vCenter Networking. This setting applies to PAS only; you configure NSX-T integration for PKS in a later step.

      Note: Because this topology uses NAT, you must have already deployed Ops Manager to the ls-pks-mgmt NSX logical switch by following the instructions above in Create T1 Logical Router for PKS Management VMs. Use the DNAT IP address to access Ops Manager.

    • On the Create Networks page, create the following networks:
      Infrastructure network:
      • Name: pks-infrastructure
      • Service Network: Leave the Service Network checkbox unchecked.
      • vSphere Network Name: MY-PKS-virt-net/MY-PKS-subnet-infrastructure
      • Description: A network for deploying the PKS control plane VM(s). This network maps to the NSX logical switch named ls-pks-mgmt created for the PKS Management Network in Step 3: Create the NSX-T Objects Required for PKS.

      Service network:
      • Name: pks-services
      • Service Network: Select the Service Network checkbox.
      • vSphere Network Name: MY-PKS-virt-net/MY-PKS-subnet-services
      • Description: A service network for deploying PKS Kubernetes cluster nodes. This network maps to the NSX logical switch named ls-pks-service created for the PKS Service Network in Step 3: Create the NSX-T Objects Required for PKS.
  2. Return to the Ops Manager Installation Dashboard and click Apply Changes.

Step 6: Install and Configure PKS

Perform the following steps to install and configure PKS:

  1. Install the PKS tile. For more information, see Install PKS.
  2. Click the orange Pivotal Container Service tile to start the configuration process.

    Note: The container network type (NSX-T or Flannel) cannot be changed after the initial installation and configuration of PKS.

    Pivotal Container Service tile on the Ops Manager installation dashboard

Assign AZs and Networks

Perform the following steps:

  1. Click Assign AZs and Networks.
  2. Select the availability zone (AZ) where you want to deploy the PKS API VM as a singleton job.

    Note: You must select an additional AZ for balancing other jobs before clicking Save, but this selection has no effect in the current version of PKS.

  3. Under Network, select the PKS Management Network linked to the ls-pks-mgmt NSX logical switch you created in Step 5: Configure Ops Manager. This will provide network placement for the PKS API VM.
  4. Under Service Network, select the PKS Service Network linked to the ls-pks-service NSX logical switch you created in Step 5: Configure Ops Manager. This will provide network placement for the on-demand Kubernetes cluster service instances created by the PKS broker.
  5. Click Save.

PKS API

Perform the procedure in the PKS API section of Installing and Configuring PKS.

Plans

Perform the procedure in the Plans section of Installing and Configuring PKS.

Kubernetes Cloud Provider

Perform the procedures in the Kubernetes Cloud Provider section of Installing and Configuring PKS.

Networking

Perform the following steps:

  1. Click Networking.
  2. Under Network, select NSX-T as the Container Network Type to use.

    NSX-T Networking configuration pane in Ops Manager

  3. For NSX Manager hostname, enter the NSX Manager hostname or IP address.

  4. For NSX Manager credentials, enter the credentials to connect to the NSX Manager.
  5. For NSX Manager CA Cert, optionally enter the custom CA certificate to be used to connect to the NSX Manager.
  6. The Disable SSL certificate verification? checkbox is cleared by default. To disable TLS verification, select the checkbox. You might disable TLS verification if you did not enter a CA certificate or if your CA certificate is self-signed.
  7. For vSphere Cluster Name, enter the name of the vSphere cluster you used when creating the PKS broker in Assign AZs and Networks.
  8. For T0 Router ID, enter the UUID of the t0-pks T0 router. You can locate this value in the NSX-T UI router overview.
  9. For IP Block ID, enter the UUID of the ip-block-pks-deployments IP block. You can also locate this value in the NSX-T UI.
  10. For Floating IP pool ID, enter the ID of the ip-pool-vips Floating IP pool that you created for load balancer VIPs. Alternatively, you can look these values up through the NSX-T API, as sketched after this list.
  11. Click Save.
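
The following commands sketch one way to retrieve these IDs from the NSX-T Manager API using curl and jq, assuming the NSX-T 2.x /api/v1 endpoints and the object names used in this topic; NSX-MANAGER-IP and the credentials are placeholders:

    # T0 Router ID
    $ curl -sk -u "admin:NSX-PASSWORD" https://NSX-MANAGER-IP/api/v1/logical-routers \
        | jq -r '.results[] | select(.display_name == "t0-pks") | .id'

    # IP Block ID
    $ curl -sk -u "admin:NSX-PASSWORD" https://NSX-MANAGER-IP/api/v1/pools/ip-blocks \
        | jq -r '.results[] | select(.display_name == "ip-block-pks-deployments") | .id'

    # Floating IP pool ID
    $ curl -sk -u "admin:NSX-PASSWORD" https://NSX-MANAGER-IP/api/v1/pools/ip-pools \
        | jq -r '.results[] | select(.display_name == "ip-pool-vips") | .id'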

UAA

Perform the procedures in the UAA section of Installing and Configuring PKS.

Syslog

(Optional) Perform the procedures in the Syslog section of Installing and Configuring PKS.

Errands

WARNING: You must enable the NSX-T Validation errand in order to verify and tag required NSX-T objects.

Perform the following steps:

  1. Click Errands.
  2. For Post Deploy Errands, select ON for the NSX-T Validation errand. This errand will validate your NSX-T configuration and will tag the proper resources.
  3. Click Save.

Optional: Resource Config and Stemcell

To modify the resource usage or stemcell configuration of PKS, see the Resource Config and Stemcell sections in Installing and Configuring PKS.

Step 7: Apply Changes and Retrieve the PKS Endpoint

  1. After configuring the tile, return to the Ops Manager Installation Dashboard and click Apply Changes to deploy the PKS tile.
  2. When the installation is completed, retrieve the PKS endpoint by performing the following steps:
    1. From the Ops Manager Installation Dashboard, click the Pivotal Container Service tile.
    2. Click the Status tab and record the IP address assigned to the Pivotal Container Service job.
  3. Create a DNAT rule on the t1-pks-mgmt T1 to map an external IP from the PKS MANAGEMENT CIDR to the PKS endpoint. For example, a DNAT rule might map 10.172.1.4 to 172.31.0.4, where 172.31.0.4 is the PKS endpoint IP address on the ls-pks-mgmt NSX Logical Switch. For more information, see Configure Destination NAT on a Tier-1 Router.

    Note: Ensure that you have no overlapping NAT rules. If your NAT rules overlap, you cannot reach Ops Manager from VMs in the vCenter network.

Developers should use the DNAT IP address when logging in with the PKS CLI. For more information, see Using PKS.

WARNING: The PKS CLI is under active development and commands may change. To ensure you have installed the latest version, we recommend that you re-install the PKS CLI before you use it. For more information, see Installing the PKS CLI.

Step 8: Deploy a Cluster and Enable NAT Access

In the current version of PKS, NSX-T does not automatically configure a NAT for the master node of each Kubernetes cluster. As a result, you must perform the following procedure for each cluster to enable your developers to use kubectl:

  1. Download the NSX scripts:
      $ wget https://storage.googleapis.com/pks-releases/nsx-helper-pkg.tar.gz
  2. Untar the nsx-helper-pkg.tar.gz file:
      $ tar -xvzf nsx-helper-pkg.tar.gz
  3. Install required packages:
      $ sudo apt-get install git
      $ sudo apt-get install -y httpie
      $ sudo apt-get install jq
  4. One of the files from the tarball is nsx-cli.sh. Make the script executable:
      $ chmod 755 nsx-cli.sh
  5. Set your NSX Manager admin user, password, and IP address as environment variables named NSX_MANAGER_USERNAME, NSX_MANAGER_PASSWORD, and NSX_MANAGER_IP. For example:
      $ export NSX_MANAGER_USERNAME="admin-user"
      $ export NSX_MANAGER_PASSWORD="admin-password"
      $ export NSX_MANAGER_IP="192.0.2.1"
  6. Execute the nsx-cli script with the following command:
      $ ./nsx-cli.sh ipam allocate
    The command returns an IP address allocated from the ip-pool-vips NSX IP pool. Developers can use this IP address as the --external-hostname value to create a cluster via the PKS CLI. For more information, see Using PKS.
  7. After the cluster has been successfully created, collect the cluster UUID:
      $ pks clusters
  8. Use the nsx-cli script to create a NAT rule to allow access to the Kubernetes API for the cluster. Execute the following command:
      $ ./nsx-cli.sh nat create-rule CLUSTER-UUID MASTER-IP NAT-IP
    Where:
    • CLUSTER-UUID is the ID of the cluster retrieved in the previous step.
    • MASTER-IP is the IP address that BOSH has assigned to the master node of the cluster. To retrieve this value, use BOSH CLI v2+ to log in to your BOSH Director and list all instances with bosh -e YOUR-ENV instances. For more information, see Commands in the BOSH documentation.
    • NAT-IP is the NAT IP from the ip-pool-vips NSX IP pool retrieved above.
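
    For example, with hypothetical values, where the cluster UUID comes from pks clusters, the master node IP comes from bosh instances, and the NAT IP is the address allocated from ip-pool-vips earlier, the commands might look like the following:

      # Find the master node IP of the cluster (YOUR-ENV is your BOSH environment alias)
      $ bosh -e YOUR-ENV instances

      # Create the NAT rule for the cluster's Kubernetes API (all values are illustrative)
      $ ./nsx-cli.sh nat create-rule 9aa12b3c-ffee-4d01-8a2b-000000000000 172.31.2.10 10.172.2.10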

Step 9: Clean NSX-T Objects After Deletion of a Cluster

In the current version of PKS, NSX-T does not automatically delete the NSX-T objects created during the life of a cluster. After a cluster is deleted, you must perform the following procedure using the nsx-cli.sh script downloaded in Step 8: Deploy a Cluster and Enable NAT Access:

  1. Delete the Kubernetes Cluster using the PKS CLI. For more information, see Delete a Cluster.

  2. Execute the nsx-cli script with the following command:

      $ ./nsx-cli.sh cleanup CLUSTER-UUID false

    Where CLUSTER-UUID is the ID of the cluster you deleted.