A Kubernetes ingress resource provides HTTP or HTTPS routing from outside the cluster to one or more services within the cluster. TKG clusters support ingress through third-party controllers, such as Contour.
This tutorial demonstrates how to deploy the Contour ingress controller for routing external traffic to services in a TKG cluster. Contour is an open-source project that VMware contributes to.
Prerequisites
- Review the Ingress resource in the Kubernetes documentation.
- Review the Contour ingress controller.
- Provision a TKG cluster.
- Connect to the TKG cluster.
Procedure
- Create a ClusterRoleBinding allowing service accounts to manage all resources of the cluster.
kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated
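To confirm that the binding was created, you can list it by name. The binding name below matches the one used in the command above:
kubectl get clusterrolebinding default-tkg-admin-privileged-binding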
- Create a namespace called projectcontour.
This is the default namespace for the Contour ingress controller deployment.
kubectl create ns projectcontour
- Download the latest Contour ingress controller YAML: Contour Ingress Deployment.
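If you are working from the command line, the Contour project typically publishes its quickstart manifest at https://projectcontour.io/quickstart/contour.yaml; verify the current location in the Contour documentation before downloading. For example:
curl -o contour.yaml https://projectcontour.io/quickstart/contour.yaml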
- Open the contour.yaml file using a text editor.
- Comment out the following two lines by prepending each line with the # symbol:
Line 1632:
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
Line 1634:
# externalTrafficPolicy: Local
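If you prefer to script the edit instead of using a text editor, a sed command such as the following comments out both lines. The patterns assume the lines appear exactly as shown above, so verify them against your copy of contour.yaml first; the command writes a backup to contour.yaml.bak.
sed -i.bak \
  -e 's|service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp|# &|' \
  -e 's|externalTrafficPolicy: Local|# &|' \
  contour.yaml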
- Deploy Contour by applying the contour.yaml file.
kubectl apply -f contour.yaml
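Optionally, wait for the controller rollout to finish before checking the services in the next step. This assumes the deployment created by contour.yaml is named contour, which matches the pod names shown later in this procedure.
kubectl -n projectcontour rollout status deployment/contour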
- Verify that the Contour ingress controller and Envoy load balancer service are deployed.
kubectl get services -n projectcontour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour ClusterIP 198.63.146.166 <none> 8001/TCP 120m
envoy LoadBalancer 198.48.52.47 192.168.123.5 80:30501/TCP,443:30173/TCP 120m
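If you only need the external IP address of the Envoy service, for example to use it in a script, you can extract it with a jsonpath query. On platforms that report a hostname instead of an IP address, query .hostname instead of .ip.
kubectl get service envoy -n projectcontour -o jsonpath='{.status.loadBalancer.ingress[0].ip}'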
- Verify that the Contour and Envoy pods are running.
kubectl get pods -n projectcontour
NAME READY STATUS RESTARTS AGE
contour-7966d6cdbf-skqfl 1/1 Running 1 21h
contour-7966d6cdbf-vc8c7 1/1 Running 1 21h
contour-certgen-77m2n 0/1 Completed 0 21h
envoy-fsltp 1/1 Running 0 20h
- Ping the load balancer using the external IP address of the envoy service (192.168.123.5 in this example).
ping 192.168.123.5
PING 192.168.123.5 (192.168.123.5) 56(84) bytes of data.
64 bytes from 192.168.123.5: icmp_seq=1 ttl=62 time=3.50 ms
- Create an ingress resource from a file named ingress-nihao.yaml. (A networking.k8s.io/v1 variant of this manifest is shown after this step.)
Create the YAML.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nihao
spec:
  rules:
  - http:
      paths:
      - path: /nihao
        backend:
          serviceName: nihao
          servicePort: 80
Apply the YAML.
kubectl apply -f ingress-nihao.yaml
Verify that the ingress resource is created.
kubectl get ingress
The external IP address of the Envoy LoadBalancer (192.168.123.5 in this example) is used by the ingress object.
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-nihao <none> * 192.168.123.5 80 17s
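The manifest above uses the networking.k8s.io/v1beta1 Ingress API, which is deprecated and removed in Kubernetes 1.22. If your cluster runs a newer Kubernetes version, an equivalent manifest using the networking.k8s.io/v1 API would look roughly like the following sketch; note the required pathType field and the restructured backend section.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nihao
spec:
  rules:
  - http:
      paths:
      - path: /nihao
        pathType: Prefix
        backend:
          service:
            name: nihao
            port:
              number: 80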
- Deploy a test service with a backend application.
Create the following YAML file named ingress-nihao-test.yaml.
kind: Service
apiVersion: v1
metadata:
  name: nihao
spec:
  selector:
    app: nihao
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nihao
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nihao
      tier: backend
      track: stable
  template:
    metadata:
      labels:
        app: nihao
        tier: backend
        track: stable
    spec:
      containers:
      - name: nihao
        image: "gcr.io/google-samples/hello-go-gke:1.0"
        ports:
        - name: http
          containerPort: 80
Apply the YAML.
kubectl apply -f ingress-nihao-test.yaml
Verify that the nihao service is created.
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nihao ClusterIP 10.14.21.22 <none> 80/TCP 15s
Verify that the backend deployment is created.
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nihao 3/3 3 3 2m25s
Verify that the backend pods exist.
kubectl get pods
NAME READY STATUS RESTARTS AGE
nihao-8646584495-9nm8x 1/1 Running 0 106s
nihao-8646584495-vscm5 1/1 Running 0 106s
nihao-8646584495-zcsdq 1/1 Running 0 106s
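You can also confirm that the nihao service has picked up the three pods as endpoints; the output should list one cluster-internal address per pod.
kubectl get endpoints nihao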
- Get the public IP address of the load balancer used by the Contour ingress controller.
kubectl get ingress ingress-nihao
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-nihao <none> * 192.168.123.5 80 13m
- Using a browser, navigate to the public IP address and append the ingress path, for example http://192.168.123.5/nihao.
The backend application returns the following message.
{"message":"Hello"}
Results
The backend application, fronted by a service running inside the cluster, is reachable from outside the cluster through the Contour ingress controller at the external IP address of the load balancer.