This section lists the prerequisites for deploying Enterprise PKS.

  • An NSX-T backed VI workload domain must be deployed.
  • Shared NSX-T Edges must be deployed and the logical North-South routing must be configured. See Deploy and Configure NSX Edges in Cloud Foundation.
  • Decide on the static IP addresses, host names, and subnets for the Enterprise PKS components: PKS API, BOSH Director, Operations Manager, and Harbor Registry. Verify that these IP addresses and FQDNs are allocated on the DNS server and are available for deployment. The examples in these procedures use the IP addresses defined below; replace them with the IP addresses used in your network. A DNS verification sketch follows the table.
    Component                  FQDN                            IP Address  Notes
    Operations Manager         sfo01w02ops01.rainpole.local    10.0.0.10   Must be within the exclusion range
    BOSH Director for vSphere  sfo01w02bosh01.rainpole.local   10.0.0.11   First IP after the exclusion range (last value in exclusion range + 1)
    Enterprise PKS DB          sfo01w02pksdb01.rainpole.local  10.0.0.12   Second IP after the exclusion range (last value in exclusion range + 2)
    Enterprise PKS             sfo01w02pks01.rainpole.local    10.0.0.13   Third IP after the exclusion range (last value in exclusion range + 3)
    Harbor Container Registry  sfo01w02hrbr.rainpole.local     10.0.0.14   Fourth IP after the exclusion range (last value in exclusion range + 4)
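    One way to verify the allocations before deployment is to resolve each FQDN and confirm the reverse record matches. A minimal sketch using the example host names from the table; substitute your own:

    # A sketch: confirm forward and reverse DNS records exist for each
    # Enterprise PKS component. Host names are the examples above.
    for fqdn in sfo01w02ops01.rainpole.local sfo01w02bosh01.rainpole.local \
                sfo01w02pksdb01.rainpole.local sfo01w02pks01.rainpole.local \
                sfo01w02hrbr.rainpole.local; do
      ip=$(dig +short "$fqdn")
      echo "$fqdn -> ${ip:-NOT RESOLVED}"
      # The reverse lookup should return the same FQDN.
      [ -n "$ip" ] && dig +short -x "$ip"
    done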
    Sample subnets for the Enterprise PKS components are shown in the table below. A reachability sketch follows the table.
    Component                 CIDR         Gateway   Route
    PKS management workloads  10.0.0.0/24  10.0.0.1  To/from the SDDC management network
    PKS service networks      10.0.1.0/24  10.0.1.1  To/from the SDDC management network
    Nodes IP block            10.1.0.0/16  10.1.0.1  To/from the SDDC management network and external access to docker.io
    Pods IP block             10.2.0.0/16  10.2.0.1  -
    Floating IP pool          10.3.0.0/24  10.3.0.1  External access
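    Once the subnets are in place, a quick check can confirm that the gateways respond and that external access to the Docker registry works. A sketch, assuming the example gateway addresses above and that ICMP is permitted; the pods gateway may not answer from the management network:

    # A sketch: ping each example gateway, then confirm outbound HTTPS
    # reachability to the Docker registry (an HTTP 401 response is the
    # expected answer from an unauthenticated request).
    for gw in 10.0.0.1 10.0.1.1 10.1.0.1 10.2.0.1 10.3.0.1; do
      ping -c 1 -W 2 "$gw" >/dev/null && echo "$gw reachable" || echo "$gw unreachable"
    done
    curl -sI https://registry-1.docker.io/v2/ | head -n 1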
  • Two IP blocks and a floating pool of IP addresses must be configured within the NSX-T domain. Enterprise PKS assigns IP addresses from the floating pool to the VIPs of the load balancers for Kubernetes pod services.
    Refer to the example IP block and IP pool tables below. An API sketch for creating them follows the tables.
    IP Block                CIDR
    sfo01-w-nodes-ip-block  10.1.0.0/16
    sfo01-w-pods-ip-block   10.2.0.0/16

    Setting    Value
    IP Ranges  10.3.0.10-10.3.0.250
    CIDR       10.3.0.0/24
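    You can create the IP blocks and the floating pool in the NSX-T Manager UI, or through the NSX-T Manager REST API. A minimal sketch, assuming NSX_MANAGER, NSX_USER, and NSX_PASSWORD are set in the environment; the pool name sfo01-w-floating-pool is a hypothetical example:

    # A sketch: create the example IP blocks through the NSX-T Manager API.
    for entry in "sfo01-w-nodes-ip-block:10.1.0.0/16" "sfo01-w-pods-ip-block:10.2.0.0/16"; do
      name="${entry%%:*}"; cidr="${entry#*:}"
      curl -k -X POST "https://${NSX_MANAGER}/api/v1/pools/ip-blocks" \
        -u "${NSX_USER}:${NSX_PASSWORD}" \
        -H 'Content-Type: application/json' \
        -d "{\"display_name\": \"${name}\", \"cidr\": \"${cidr}\"}"
    done

    # Create the floating IP pool with the allocation range from the table.
    curl -k -X POST "https://${NSX_MANAGER}/api/v1/pools/ip-pools" \
      -u "${NSX_USER}:${NSX_PASSWORD}" \
      -H 'Content-Type: application/json' \
      -d '{"display_name": "sfo01-w-floating-pool", "subnets": [{"cidr": "10.3.0.0/24", "allocation_ranges": [{"start": "10.3.0.10", "end": "10.3.0.250"}]}]}'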
  • Generate a certificate and key for an NSX-T Manager super user.
    Enterprise PKS uses a super user to create, modify, and delete objects in NSX-T. Run the script below to create a certificate and key for that super user. The certificate and key are required as input on the Certificate Settings page of the Deploy PKS wizard. A sketch for registering the certificate with NSX-T follows the script.
    #!/bin/bash
    # Generates a self-signed certificate and private key for the NSX-T
    # Manager super user account used by Enterprise PKS.

    NSX_SUPERUSER_CERT_FILE="pks-nsx-t-superuser.crt"
    NSX_SUPERUSER_KEY_FILE="pks-nsx-t-superuser.key"

    # Create a 2048-bit RSA key and a self-signed certificate valid for
    # two years, appending the clientAuth extended key usage to the
    # system OpenSSL configuration.
    openssl req \
      -newkey rsa:2048 \
      -x509 \
      -nodes \
      -keyout "$NSX_SUPERUSER_KEY_FILE" \
      -new \
      -out "$NSX_SUPERUSER_CERT_FILE" \
      -subj "/CN=pks-nsx-t-superuser" \
      -extensions client_server_ssl \
      -config <(
        cat /etc/ssl/openssl.cnf \
        <(printf '[client_server_ssl]\nextendedKeyUsage = clientAuth\n')
      ) \
      -sha256 \
      -days 730

    # Print the certificate and key so they can be copied into the
    # Certificate Settings page of the Deploy PKS wizard.
    cat "$NSX_SUPERUSER_CERT_FILE"
    cat "$NSX_SUPERUSER_KEY_FILE"
  • Generate CA-Signed Certificates for Operations Manager, Enterprise PKS control plane, and Harbor Registry. The certificates must include the fully qualified domain name for each component. You use these certificates for trusted communication between the Enterprise PKS components and the rest of the environment.
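    A minimal sketch for generating the private key and certificate signing request for one component, with the FQDN in the Subject Alternative Name; repeat per component and submit each CSR to your CA. Requires OpenSSL 1.1.1 or later for the -addext option:

    # A sketch: key and CSR for Operations Manager, using the example
    # FQDN from the table above; substitute your own.
    FQDN="sfo01w02ops01.rainpole.local"
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout "${FQDN}.key" \
      -out "${FQDN}.csr" \
      -subj "/CN=${FQDN}" \
      -addext "subjectAltName=DNS:${FQDN}"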
  • Prepare the network settings and resources for availability zones.
    You can achieve availability by deploying the Kubernetes cluster nodes across multiple compute availability zones. You must configure the network CIDR, gateway, reserved IP ranges, and target logical switch for the availability zones. Depending on the storage used in your environment, availability zones are defined in one of two ways, mapped either to vCenter clusters or to resource pools in the default cluster:
    • When using NFS storage in a multi-cluster topology, availability zones are mapped to vCenter clusters.

      Add three clusters to the NSX-T VI workload domain. NFS storage shared across the clusters is required so that persistent volumes are accessible by all hosts in the clusters.

    • When using VMFS on FC storage in a multi-cluster topology, availability zones are mapped to vCenter clusters.

      Add three clusters to the NSX-T VI workload domain. VMFS on FC storage shared across the clusters is required so that persistent volumes are accessible by all hosts in the clusters.

    • When using vSAN storage in a single-cluster topology, availability zones are mapped to resource pools in the default cluster.

      Create four resource pools in the default NSX-T VI workload domain cluster. One resource pool is used for the Enterprise PKS VMs and the other three are used for the Kubernetes worker nodes.

      For example, create the following resource pools in the default cluster (a govc sketch follows this list):
      • RP-SharedAZ
      • RP-Comp1AZ
      • RP-Comp2AZ
      • RP-Comp3AZ
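
      You can create the resource pools in the vSphere Client or with the govc CLI. A minimal sketch, assuming govc is configured against the workload domain vCenter Server; the cluster inventory path below is a hypothetical example to adjust for your environment:

      # A sketch: create the four availability-zone resource pools.
      CLUSTER="/sfo01-datacenter/host/sfo01-w02-cluster01/Resources"
      for rp in RP-SharedAZ RP-Comp1AZ RP-Comp2AZ RP-Comp3AZ; do
        govc pool.create "${CLUSTER}/${rp}"
      done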
  • Download the install bundle for Enterprise PKS. See Downloading an Install Bundle.