A Kubernetes ingress resource provides HTTP or HTTPS routing from outside the cluster to one or more services within the cluster. Tanzu Kubernetes clusters support ingress through third-party controllers, such as Contour and Nginx.

This tutorial demonstrates how to deploy the Contour ingress controller for routing external traffic to services in a Tanzu Kubernetes cluster. Contour is an open-source project that VMware contributes to. You can also use the Nginx ingress controller to support ingress services.

Prerequisites

Provision a Tanzu Kubernetes cluster and connect to it using kubectl.

Procedure

  1. Create a namespace called projectcontour, which is the default namespace for the Contour ingress controller deployment.
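    For example:
    kubectl create namespace projectcontour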
  2. Download the Contour ingress controller YAML: Contour Ingress Deployment.
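    For example, assuming the quickstart manifest published on the Contour project site (the URL for your Contour version may differ):
    curl -LO https://projectcontour.io/quickstart/contour.yaml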
  3. Open the contour.yaml file using a text editor.
  4. Search for the line externalTrafficPolicy: Local and delete it from the contour.yaml file. This line is in the Service.spec section of the YAML (typically line 1193).
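    As an alternative to editing the file by hand, you can delete the line with GNU sed, for example:
    sed -i '/externalTrafficPolicy: Local/d' contour.yaml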
  5. Edit the contour.yaml file and add a pod security policy rule to the Role named contour-certgen.
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: Role
    metadata:
      name: contour-certgen
      namespace: projectcontour
    rules:
    - apiGroups:
      - ""
      resources:
      - secrets
      verbs:
      - list
      - watch
      - create
      - get
      - update
      - patch
    - apiGroups:
      - policy
      resourceNames:
      - DEFAULT-OR-CUSTOM-PSP                         #Enter the name of a valid PSP
      resources:
      - podsecuritypolicies
      verbs:
      - use
    
  6. Edit the contour.yaml file and add a pod security policy rule to the Role named contour-leaderelection.
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: Role
    metadata:
      name: contour-leaderelection
      namespace: projectcontour
    rules:
    - apiGroups:
      - ""
      resources:
      - configmaps
      verbs:
      - create
      - get
      - list
      - watch
      - update
    - apiGroups:
      - ""
      resources:
      - events
      verbs:
      - create
      - update
      - patch
    - apiGroups:
      - policy
      resourceNames:
      - DEFAULT-OR-CUSTOM-PSP                         #Enter the name of a valid PSP
      resources:
      - podsecuritypolicies
      verbs:
      - use
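    Before applying the file, you can confirm that the pod security policy you referenced in both Roles exists in the cluster:
    kubectl get podsecuritypolicy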
  7. Deploy Contour by applying the contour.yaml file.
    kubectl apply -f contour.yaml
  8. Verify that the services are created. The Envoy service, which receives external traffic for Contour, is of type LoadBalancer and is accessible from its external IP address.
    kubectl get services -n projectcontour
    NAME      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
    contour   ClusterIP      198.63.146.166   <none>          8001/TCP                     120m
    envoy     LoadBalancer   198.48.52.47     192.168.123.5   80:30501/TCP,443:30173/TCP   120m
  9. Verify that the Contour and Envoy pods are running.
    kubectl get pods -n projectcontour
    NAME                       READY   STATUS      RESTARTS   AGE
    contour-7966d6cdbf-skqfl   1/1     Running     1          21h
    contour-7966d6cdbf-vc8c7   1/1     Running     1          21h
    contour-certgen-77m2n      0/1     Completed   0          21h
    envoy-fsltp                1/1     Running     0          20h
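    If a pod is stuck in a non-running state, describe it to check for pod security policy admission errors. For example, assuming the app=envoy label used by the quickstart manifest:
    kubectl describe pod -n projectcontour -l app=envoy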
  10. Ping the load balancer using the external IP address.
    For example:
    ping 192.168.123.5
    PING 192.168.123.5 (192.168.123.5) 56(84) bytes of data.
    64 bytes from 192.168.123.5: icmp_seq=1 ttl=62 time=3.50 ms
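    To read the external IP address programmatically, you can query the Envoy service status, for example:
    kubectl get service envoy -n projectcontour -o jsonpath='{.status.loadBalancer.ingress[0].ip}'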
    
  11. Deploy two test services and add ingress rules and paths by creating the following YAML file named ingress-test.yaml. The hello Deployment references a service account named test-sa; see the note after the file for creating it.
    kind: Service
    apiVersion: v1
    metadata:
      name: hello
    spec:
      selector:
        app: hello
        tier: backend
      ports:
      - protocol: TCP
        port: 80
        targetPort: http
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello
          tier: backend
          track: stable
      template:
        metadata:
          labels:
            app: hello
            tier: backend
            track: stable
        spec:
          serviceAccountName: test-sa
          containers:
            - name: hello
              image: "gcr.io/google-samples/hello-go-gke:1.0"
              ports:
                - name: http
                  containerPort: 80
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: nihao
    spec:
      selector:
        app: nihao
        tier: backend
      ports:
      - protocol: TCP
        port: 80
        targetPort: http
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nihao
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nihao
          tier: backend
          track: stable
      template:
        metadata:
          labels:
            app: nihao
            tier: backend
            track: stable
        spec:
          containers:
            - name: nihao
              image: "gcr.io/google-samples/hello-go-gke:1.0"
              ports:
                - name: http
                  containerPort: 80
    ---
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: hello-ingress
    spec:
      rules:
      - http:
          paths:
          - path: /hello
            backend:
              serviceName: hello
              servicePort: 80
          - path: /nihao
            backend:
              serviceName: nihao
              servicePort: 80
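    The hello Deployment above references a service account named test-sa, which this file does not create. A minimal sketch of creating the account and granting it use of a pod security policy (the role and binding names here are placeholders, and DEFAULT-OR-CUSTOM-PSP must be replaced with the name of a valid PSP in your cluster):
    kubectl create serviceaccount test-sa
    kubectl create role test-sa-psp --verb=use --resource=podsecuritypolicies --resource-name=DEFAULT-OR-CUSTOM-PSP
    kubectl create rolebinding test-sa-psp --role=test-sa-psp --serviceaccount=default:test-sa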
    
  12. Apply the ingress-test YAML configuration.
    kubectl apply -f ingress-test.yaml
  13. Verify that the hello and nihao services are created.
    kubectl get services
    NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
    hello        ClusterIP      198.63.19.152   <none>          80/TCP         21h
    nihao        ClusterIP      198.57.57.66    <none>          80/TCP         21h
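    You can also confirm that the hello-ingress resource was created:
    kubectl get ingress hello-ingress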
    
  14. Verify that you can access the two services using the ingress routes for each.
    curl http://192.168.123.5:80/hello
    {"message":"Hello"}
    curl http://192.168.123.5:80/nihao
    {"message":"Hello"}
    
    Both services return the same message because they run the same sample image; each is reached through its own ingress path. The services running inside the cluster are accessed externally through the ingress controller by way of the load balancer external IP address.