Install Learning Center on Kind

This topic describes how to install Learning Center on your local machine with Kind.

Kind was developed as a means to support development and testing of Kubernetes. Though it exists primarily for that purpose, Kind clusters are often used for local development of user applications as well. For Learning Center, you can use a local Kind cluster to develop workshop content or for self-paced learning when deploying other people’s workshops.

Because you are deploying to a local machine, you are unlikely to have access to your own custom domain name and certificate to use with the cluster. Without these, you are restricted in the sorts of workshops you can develop or run using Learning Center in Kind. Kind uses containerd, which lacks certain features that allow you to trust any image registry hosted within a subnet. This means you cannot readily run workshops that use a local container image registry for each workshop session. If you must run workshops on your local computer that use an image registry for each session, VMware recommends you use minikube with dockerd instead. For more information, see Install on Minikube.

Also, because Kind has limited memory resources available, you might be unable to run workshops that have large memory requirements. Workshops that demonstrate the use of third-party applications requiring a multinode cluster also do not work unless the Kind cluster is specifically configured to be multinode rather than single node; a minimal multinode configuration is sketched below.
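
For reference, a minimal multinode Kind configuration looks like the following (a sketch using Kind’s v1alpha4 config API; combine it with the ingress port mappings shown in the next section if you need both):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
# one control-plane node plus two workers
- role: control-plane
- role: worker
- role: worker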

Requirements and setup instructions specific to Kind are detailed in this document. Otherwise, follow normal installation instructions for the Learning Center operator.

Prerequisites

You must complete the following prerequisites before installation:

  • Create a Broadcom Support account and have access to its credentials.
  • Install Kind on your local machine.
  • Install Tanzu CLI on your local machine.
  • Install the Kubernetes command-line tool (kubectl) on your local machine.

Kind cluster creation

When initially creating the Kind cluster, you must configure it so that the ingress controller is exposed. The Kind documentation provides the following command to do this, but check the documentation in case the details have changed.

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF

Once you have the Kind cluster up and running, you must install an ingress controller.

Ingress controller with DNS

The Kind documentation provides instructions for installing Ambassador, Contour, and Nginx-based ingress controllers.

VMware recommends that you use Contour rather than Nginx, because Nginx drops websocket connections whenever new ingresses are created. The Learning Center workshop environments include a workaround to re-establish websocket connections for the workshop terminals without losing terminal state, but other applications used with workshops, such as the terminals available through Visual Studio Code, might not.

Avoid the Ambassador ingress controller, because it requires every ingress created to be explicitly annotated with an ingress class of “ambassador.” The Learning Center operator can be configured to add this annotation automatically to the ingresses it creates for the training portal and workshop sessions. However, workshops whose instructions have the user create ingresses do not work unless those instructions are written to have the user add the ingress class manually where required.

If you have installed the Contour ingress controller, verify that all its pods have a Running status. Run:

kubectl get pods -n projectcontour -o wide
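
Alternatively, you can block until all pods in the namespace are ready instead of polling (a sketch; the projectcontour namespace matches the upstream Contour manifests):

# wait up to two minutes for every Contour pod to report Ready
kubectl wait --namespace projectcontour --for=condition=Ready pod --all --timeout=120s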

For information about installing Contour, which comes with Tanzu Application Platform, see Install cert-manager, Contour.

Install Carvel tools

You must install the kapp-controller and secretgen-controller Carvel tools in order to properly install VMware Tanzu packages.

To install kapp-controller, run:

kapp deploy -a kc -f https://github.com/vmware-tanzu/carvel-kapp-controller/releases/latest/download/release.yml

To install secretgen-controller, run:

kapp deploy -a sg -f https://github.com/vmware-tanzu/carvel-secretgen-controller/releases/latest/download/release.yml
Note

Type “y” and press Enter to continue when prompted during the installation of both kapp-controller and secretgen-controller.
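
To confirm both controllers deployed successfully, you can check their deployments (assuming the default namespaces created by the upstream release manifests):

# each release.yml creates its own namespace of the same name
kubectl get deployment -n kapp-controller
kubectl get deployment -n secretgen-controller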

Install Tanzu package repository

Follow these steps to install the Tanzu package repository:

  1. To create a namespace, run:

    kubectl create ns tap-install
    
  2. Create a registry secret:

    tanzu secret registry add tap-registry \
     --username "BROADCOM-USER" --password "BROADCOM-PASSWORD" \
     --server tanzu.packages.broadcom.com \
     --export-to-all-namespaces --yes --namespace tap-install
    

    Where BROADCOM-USER and BROADCOM-PASSWORD are your credentials for Broadcom Support.
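
    To confirm the secret was created and exported, you can list registry secrets (a hedged check using the Tanzu CLI secret plugin):

    tanzu secret registry list --namespace tap-install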

  3. Add a package repository to your cluster:

    tanzu package repository add tanzu-tap-repository \
     --url tanzu.packages.broadcom.com/tanzu-application-platform/tap-packages:VERSION-NUMBER \
     --namespace tap-install
    

    Where VERSION-NUMBER is your Tanzu Application Platform version. For example, 1.6.13.

  4. To check the package repository install status, run:

    tanzu package repository get tanzu-tap-repository --namespace tap-install
    

    Wait for the status to show “Reconcile succeeded” before attempting to install any other packages.
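
If you prefer kubectl, you can watch the underlying Carvel resource directly (a sketch; the PackageRepository resource type is installed by kapp-controller):

# -w streams status updates until you interrupt with Ctrl+C
kubectl get packagerepository tanzu-tap-repository -n tap-install -w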

Create a configuration YAML file for Learning Center package

To create a configuration YAML file, follow the step below. For a full list of the configurations you can provide to Learning Center, see Supported yaml file configurations.

  1. Create a file called learningcenter-value.yaml in your current directory with the following data:

    ingressDomain: workshops.example.com
    

Where:

  • ingressDomain is the domain under which workshops are accessed. If you are using a nip.io DNS address, replace workshops.example.com with <your-local-ip>.nip.io. Details about this are provided in the following section.
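
For example, with the 192.168.1.1 address used elsewhere in this topic, the file would contain:

ingressDomain: 192.168.1.1.nip.io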

Using a nip.io DNS address

Before you can start deploying workshops, you must configure the operator to tell it what domain name can be used to access anything deployed by the operator.

Being a local cluster that is not exposed to the Internet with its own custom domain name, you can use a nip.io address.

To calculate the nip.io address to use, first work out the IP address for the ingress controller exposed by Kind. This is usually the IP address of the local machine itself, even when you use Docker for Mac.

How you get the IP address for your local machine depends on the operating system you are using.

For example, on a Mac you can find your IP address by searching for “network” using Spotlight and selecting the Network option under System Preferences. Your IP address is shown under Status.

After you have the IP address, add this as a prefix to the domain name nip.io. For example, if the address was 192.168.1.1, use the domain name of 192.168.1.1.nip.io.
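
Alternatively, you can query the address from a terminal (a sketch; the macOS interface name en0 is an assumption and varies by machine):

# macOS: print the address of the given network interface
ipconfig getifaddr en0

# Linux: print the first address reported for this host
hostname -I | awk '{print $1}'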

To configure the Learning Center operator with this cluster domain, run:

kubectl set env deployment/learningcenter-operator -n eduk8s INGRESS_DOMAIN=192.168.1.1.nip.io

This causes the Learning Center operator to redeploy with the new configuration. You can now deploy workshops.
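
To confirm the operator has finished redeploying with the new setting, you can watch the rollout (using the same namespace as the command above):

kubectl rollout status deployment/learningcenter-operator -n eduk8s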

Note

Some home Internet gateways implement what is called rebind protection. These gateways do not allow DNS names from the public Internet to bind to local IP address ranges inside the home network. If your home Internet gateway has such a feature and it is enabled, it blocks nip.io addresses from working. In this case, you must configure your home Internet gateway to allow *.nip.io names to be bound to local addresses. Also, you cannot use an address of the form 127.0.0.1.nip.io or subdomain.localhost. This causes a failure because, when internal services need to connect to each other, they connect to themselves instead, since the address resolves to the host loopback address of 127.0.0.1.

Install Learning Center package onto a Kubernetes cluster

To install Learning Center on a Kubernetes cluster:

tanzu package install learningcenter --package-name learningcenter.tanzu.vmware.com --version 0.1.0 -f ./learningcenter-value.yaml --namespace tap-install

This package installation uses the installed package repository with the learningcenter-value.yaml configuration to install the Learning Center package.
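
Before proceeding, you can check the install status and wait for a “Reconcile succeeded” message (a hedged check using the Tanzu CLI package plugin):

tanzu package installed get learningcenter --namespace tap-install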

Install workshop tutorial package onto a Kubernetes cluster

To install a workshop tutorial on a Kubernetes cluster:

tanzu package install learningcenter-tutorials --package-name workshops.learningcenter.tanzu.vmware.com --version 0.1.0 --namespace tap-install

Make sure you install the workshop package only after the Learning Center package has reconciled and successfully installed onto your cluster. To obtain the package version numbers in case of new versions, run:

kubectl get packages -n tap-install
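
The Tanzu CLI offers an equivalent view of the packages and versions available from the repository (a sketch; output format varies by CLI version):

tanzu package available list --namespace tap-install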

Run the workshop

To get the training portal URL, run:

kubectl get trainingportals

You get a URL that you can paste into your browser.

Congratulations, you are now running our tutorial workshop using the Learning Center operator.

Trusting insecure registries

Workshops can optionally deploy a container image registry for a workshop session. This image registry is secured with a password specific to the workshop session and is exposed through a Kubernetes ingress so it can be accessed from the workshop session.

In a typical scenario, Kind uses insecure ingress routes. Even if you were to generate a self-signed certificate to use for ingress, it would not be trusted by the containerd runtime within Kind. You must therefore tell Kind to trust any insecure registry running inside of Kind.

You must configure Kind to trust insecure registries when you first create the cluster. The problem with Kind, however, is that it uses containerd and not dockerd. The containerd runtime doesn’t provide a way to trust any insecure registry hosted within the IP subnet used by the Kubernetes cluster. Instead, containerd requires that you enumerate every single host name or IP address on which an insecure registry is hosted. Because each workshop session created by the Learning Center for a workshop uses a different host name, this becomes cumbersome.

If you must use Kind, find out the image registry host name for a workshop deployment and configure containerd to trust a set of host names corresponding to low-numbered sessions for that workshop. This allows Kind to work, but once the host names for sessions go beyond the range of host names you set up, you must delete the training portal and recreate it so you can use the same host names again.

For example, if the host name for the image registry were of the form:

lab-docker-testing-wMM-sNNN-registry.192.168.1.1.nip.io

where wMM identifies the workshop environment and NNN changes per session, you must create the Kind cluster with a command like the following:

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
containerdConfigPatches:
- |
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."lab-docker-testing-w01-s001-registry.192.168.1.1.nip.io"]
    endpoint = ["http://lab-docker-testing-w01-s001-registry.192.168.1.1.nip.io"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."lab-docker-testing-w01-s002-registry.192.168.1.1.nip.io"]
    endpoint = ["http://lab-docker-testing-w01-s002-registry.192.168.1.1.nip.io"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."lab-docker-testing-w01-s003-registry.192.168.1.1.nip.io"]
    endpoint = ["http://lab-docker-testing-w01-s003-registry.192.168.1.1.nip.io"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."lab-docker-testing-w01-s004-registry.192.168.1.1.nip.io"]
    endpoint = ["http://lab-docker-testing-w01-s004-registry.192.168.1.1.nip.io"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."lab-docker-testing-w01-s005-registry.192.168.1.1.nip.io"]
    endpoint = ["http://lab-docker-testing-w01-s005-registry.192.168.1.1.nip.io"]
EOF

This allows you to run five workshop sessions before you have to delete the training portal and recreate it.

If you use this, you cannot use the feature of the training portal to automatically update when a workshop definition is changed. This is because the wMM value identifying the workshop environment changes any time you update the workshop definition.

There is no other known workaround for this limitation of containerd. As such, VMware recommends you use minikube with dockerd instead. For more information, see Install on Minikube.
