Install Contour in Workload Clusters Deployed by a Supervisor

This topic explains how to deploy Contour to Tanzu Kubernetes Grid (TKG) workload clusters deployed to vSphere by a vSphere with Tanzu Supervisor.

Contour is a Kubernetes ingress controller that includes the Envoy reverse HTTP proxy. Contour with Envoy is commonly used with other packages, such as ExternalDNS, Prometheus, and Harbor.

You can install Contour on a workload cluster in two ways: with the Tanzu CLI, or with kubectl and a Carvel PackageInstall specification. Both procedures follow.

Install Contour Using the Tanzu CLI

Prerequisites

Adhere to the following prerequisites.

Reference

Refer to the following topic as needed.

Install Contour

Complete these steps to install the Contour package, which includes Envoy.

  1. Create a unique namespace for the Contour package.

    kubectl create ns tanzu-system-ingress
    
  2. Use kubectl to list the packages and their versions available in the repository.

    kubectl -n tkg-system get packages
    
    contour.tanzu.vmware.com.1.17.1+vmware.1-tkg.1                       contour.tanzu.vmware.com                       1.17.1+vmware.1-tkg.1   20m30s
    contour.tanzu.vmware.com.1.17.2+vmware.1-tkg.2                       contour.tanzu.vmware.com                       1.17.2+vmware.1-tkg.2   20m30s
    contour.tanzu.vmware.com.1.17.2+vmware.1-tkg.3                       contour.tanzu.vmware.com                       1.17.2+vmware.1-tkg.3   20m30s
    contour.tanzu.vmware.com.1.18.2+vmware.1-tkg.1                       contour.tanzu.vmware.com                       1.18.2+vmware.1-tkg.1   20m30s
    contour.tanzu.vmware.com.1.20.2+vmware.1-tkg.1                       contour.tanzu.vmware.com                       1.20.2+vmware.1-tkg.1   20m30s
    contour.tanzu.vmware.com.1.20.2+vmware.2-tkg.1                       contour.tanzu.vmware.com                       1.20.2+vmware.2-tkg.1   20m30s
    contour.tanzu.vmware.com.1.22.3+vmware.1-tkg.1                       contour.tanzu.vmware.com                       1.22.3+vmware.1-tkg.1   20m30s
    contour.tanzu.vmware.com.1.23.5+vmware.1-tkg.1                       contour.tanzu.vmware.com                       1.23.5+vmware.1-tkg.1   20m30s
    contour.tanzu.vmware.com.1.24.5+vmware.1-tkg.1                       contour.tanzu.vmware.com                       1.24.5+vmware.1-tkg.1   20m30s
    

    The latest available package in the v2023.10.16 repository is Contour 1.24.5+vmware.1-tkg.1. Adjust the version as necessary to meet your requirements.
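    The version strings sort numerically, so if you script the selection, a sketch like the following picks the newest entry. The sample list here is a hypothetical excerpt; in practice you would feed it the version column from the kubectl output above.

    ```shell
    # Hypothetical excerpt of the version column from the package listing.
    versions="1.17.1+vmware.1-tkg.1
    1.18.2+vmware.1-tkg.1
    1.24.5+vmware.1-tkg.1
    1.23.5+vmware.1-tkg.1"

    # sort -V orders dotted version strings numerically; the last line is the newest.
    latest=$(printf '%s\n' "$versions" | sort -V | tail -n 1)
    echo "$latest"   # prints 1.24.5+vmware.1-tkg.1
    ```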

  3. Create the contour-data-values.yaml file using either of the following options.

    1. Copy the provided data values file without change. See Contour with Envoy Components, Configuration, Data Values.
    2. Alternatively, run the following command to generate contour-data-values.yaml.

      tanzu package available get contour.tanzu.vmware.com/1.24.5+vmware.1-tkg.1 --default-values-file-output contour-data-values.yaml
      

      Note: If you generate the data values file, by default the Envoy service will be of type NodePort. Change this value to LoadBalancer to allow traffic from outside the cluster to access a Kubernetes service.
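      After the edit, the relevant stanza of the generated file looks roughly like this (a sketch; the surrounding keys match the data values schema shown in the kubectl procedure later in this topic):

      ```yaml
      envoy:
        service:
          # The generated default is NodePort; LoadBalancer requests an external IP
          # so traffic from outside the cluster can reach the service.
          type: LoadBalancer
      ```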

  4. Do one of the following:

    • Without hostPorts:

      If hostPorts are not needed for the Envoy daemonset, edit contour-data-values.yaml to set envoy.hostPorts.enable to false:

      envoy:
        hostPorts:
          enable: false
      
    • With hostPorts:

      If hostPorts are needed, create a ClusterRoleBinding that gives the Envoy service account access to the vmware-system-privileged PSP:

      kubectl create clusterrolebinding envoy-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --serviceaccount=tanzu-system-ingress:envoy
      
  5. Use the Tanzu CLI to install the latest available version of Contour.

    tanzu package install contour -p contour.tanzu.vmware.com -v 1.24.5+vmware.1-tkg.1 --values-file contour-data-values.yaml -n tanzu-system-ingress
    
  6. Use the Tanzu CLI to verify the Contour installation.

    tanzu package installed list -n tanzu-system-ingress
    
    NAME     PACKAGE-NAME              PACKAGE-VERSION        STATUS
    contour  contour.tanzu.vmware.com  1.24.5+vmware.1-tkg.1  Reconcile succeeded
    
  7. Use kubectl to verify the Contour and Envoy installation.

    kubectl -n tanzu-system-ingress get all
    
    NAME                           READY   STATUS    RESTARTS   AGE
    pod/contour-777bdddc69-fqnsp   1/1     Running   0          102s
    pod/contour-777bdddc69-gs5xv   1/1     Running   0          102s
    pod/envoy-d4jtt                2/2     Running   0          102s
    pod/envoy-g5h72                2/2     Running   0          102s
    pod/envoy-pjpzc                2/2     Running   0          102s
    
    NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
    service/contour   ClusterIP      10.105.242.46   <none>          8001/TCP                     102s
    service/envoy     LoadBalancer   10.103.245.57   10.197.154.69   80:32642/TCP,443:30297/TCP   102s
    
    NAME                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/envoy   3         3         3       3            3           <none>          102s
    
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/contour   2/2     2            2           102s
    
    NAME                                 DESIRED   CURRENT   READY   AGE
    replicaset.apps/contour-777bdddc69   2         2         2       102s
    

    In this example the Envoy service has an external IP address of 10.197.154.69. This IP address is carved from the CIDR range specified for Workload Network > Ingress. A new load balancer instance is created for this IP address. The members of the server pool for this load balancer are the Envoy pods. Since the Envoy pods assume the IP addresses of the worker nodes on which they run, you can see these IP addresses by querying the cluster nodes (kubectl get nodes -o wide).
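    To confirm that ingress works end to end, you can create a minimal HTTPProxy that routes through Envoy. The FQDN and backend Service below are hypothetical placeholders; substitute names from your environment and point DNS (or an /etc/hosts entry) at the Envoy external IP.

    ```yaml
    apiVersion: projectcontour.io/v1
    kind: HTTPProxy
    metadata:
      name: test-proxy          # hypothetical name
      namespace: default
    spec:
      virtualhost:
        fqdn: test.example.com  # hypothetical FQDN resolving to the Envoy external IP
      routes:
      - services:
        - name: my-service      # hypothetical backend Service in this namespace
          port: 80
    ```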

Install Contour Using Kubectl

Install the Contour package to expose ingress routes to services running on TKG clusters.

Requirements

Adhere to the following requirements before you install the Contour package.

Reference

Refer to the following topic as needed.

Install Contour

Complete these steps to install the Contour package, which includes Envoy.

  1. List the available Contour versions in the repository.

    kubectl get packages -n tkg-system
    
    contour.tanzu.vmware.com.1.17.1+vmware.1-tkg.1                       contour.tanzu.vmware.com                       1.17.1+vmware.1-tkg.1   12m10s
    contour.tanzu.vmware.com.1.17.2+vmware.1-tkg.2                       contour.tanzu.vmware.com                       1.17.2+vmware.1-tkg.2   12m10s
    contour.tanzu.vmware.com.1.17.2+vmware.1-tkg.3                       contour.tanzu.vmware.com                       1.17.2+vmware.1-tkg.3   12m10s
    contour.tanzu.vmware.com.1.18.2+vmware.1-tkg.1                       contour.tanzu.vmware.com                       1.18.2+vmware.1-tkg.1   12m10s
    contour.tanzu.vmware.com.1.20.2+vmware.1-tkg.1                       contour.tanzu.vmware.com                       1.20.2+vmware.1-tkg.1   12m10s
    contour.tanzu.vmware.com.1.20.2+vmware.2-tkg.1                       contour.tanzu.vmware.com                       1.20.2+vmware.2-tkg.1   12m10s
    contour.tanzu.vmware.com.1.22.3+vmware.1-tkg.1                       contour.tanzu.vmware.com                       1.22.3+vmware.1-tkg.1   12m10s
    contour.tanzu.vmware.com.1.23.5+vmware.1-tkg.1                       contour.tanzu.vmware.com                       1.23.5+vmware.1-tkg.1   12m10s
    
  2. Create the contour.yaml specification.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: contour-sa
      namespace: tkg-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: contour-role-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: contour-sa
        namespace: tkg-system
    ---
    apiVersion: packaging.carvel.dev/v1alpha1
    kind: PackageInstall
    metadata:
      name: contour
      namespace: tkg-system
    spec:
      serviceAccountName: contour-sa
      packageRef:
        refName: contour.tanzu.vmware.com
        versionSelection:
          constraints: 1.23.5+vmware.1-tkg.1
      values:
      - secretRef:
          name: contour-data-values
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: contour-data-values
      namespace: tkg-system
    stringData:
      values.yml: |
        ---
        namespace: tanzu-system-ingress
        contour:
          configFileContents: {}
          useProxyProtocol: false
          replicas: 2
          pspNames: "vmware-system-restricted"
          logLevel: info
        envoy:
          service:
            type: LoadBalancer
            annotations: {}
            externalTrafficPolicy: Cluster
            disableWait: false
          hostPorts:
            enable: true
            http: 80
            https: 443
          hostNetwork: false
          terminationGracePeriodSeconds: 300
          logLevel: info
        certificates:
          duration: 8760h
          renewBefore: 360h
    
    
  3. Customize the contour-data-values secret in the contour.yaml specification with values appropriate for your environment.

    See Contour Configuration Parameters for a full list of available parameters.

  4. Do one of the following:

    • Without hostPorts:

      If hostPorts are not needed for the Envoy daemonset, edit the contour-data-values secret in contour.yaml to set envoy.hostPorts.enable to false:

      envoy:
        hostPorts:
          enable: false
      
    • With hostPorts:

      If hostPorts are needed, create a ClusterRoleBinding that gives the Envoy service account access to the vmware-system-privileged PSP:

      kubectl create clusterrolebinding envoy-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --serviceaccount=tanzu-system-ingress:envoy
      
  5. Install the Contour package.

    kubectl apply -f contour.yaml
    
    serviceaccount/contour-sa created
    clusterrolebinding.rbac.authorization.k8s.io/contour-role-binding created
    packageinstall.packaging.carvel.dev/contour created
    secret/contour-data-values created
    
  6. Verify Contour installation.

    kubectl get all -n tanzu-system-ingress
    
    NAME                           READY   STATUS    RESTARTS   AGE
    pod/contour-777bdddc69-fqnsp   1/1     Running   0          102s
    pod/contour-777bdddc69-gs5xv   1/1     Running   0          102s
    pod/envoy-d4jtt                2/2     Running   0          102s
    pod/envoy-g5h72                2/2     Running   0          102s
    pod/envoy-pjpzc                2/2     Running   0          102s
    
    NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
    service/contour   ClusterIP      10.105.242.46   <none>          8001/TCP                     102s
    service/envoy     LoadBalancer   10.103.245.57   10.197.154.69   80:32642/TCP,443:30297/TCP   102s
    
    NAME                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/envoy   3         3         3       3            3           <none>          102s
    
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/contour   2/2     2            2           102s
    
    NAME                                 DESIRED   CURRENT   READY   AGE
    replicaset.apps/contour-777bdddc69   2         2         2       102s
    

    The Contour package installs 2 Contour pods and 3 Envoy pods. Both Contour and Envoy are exposed as services. The Envoy service has an external IP address of 10.197.154.69. This IP address is carved from the CIDR range specified for Workload Network > Ingress. A new load balancer instance is created for this IP address. The members of the server pool for this load balancer are the Envoy pods. Since the Envoy pods assume the IP addresses of the worker nodes on which they run, you can see these IP addresses by querying the cluster nodes (kubectl get nodes -o wide).
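    To confirm routing through Envoy, you can also use a standard Kubernetes Ingress; Contour typically processes Ingress resources whose ingressClassName is contour (depending on configuration). The host and backend Service below are hypothetical placeholders.

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: test-ingress          # hypothetical name
      namespace: default
    spec:
      ingressClassName: contour
      rules:
      - host: test.example.com    # hypothetical host resolving to the Envoy external IP
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service  # hypothetical backend Service
                port:
                  number: 80
    ```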
