In Tanzu Kubernetes Grid v1.1.x, you deployed the extensions by using kubectl to apply the extension manifests to clusters. In v1.2.x, Tanzu Kubernetes Grid extensions are deployed and managed by using the VMware Tanzu Mission Control extension manager and kapp-controller from the Carvel tools.

Because the mechanism for deploying Tanzu Kubernetes Grid extensions is very different in v1.2.x compared to v1.1.x, upgrading the extensions requires you to delete them and recreate them by using the new deployment mechanism. This section provides instructions for saving the configurations of the Tanzu Kubernetes Grid extensions that you deployed with v1.1.x and reapplying them by using the v1.2.x mechanism.

Prerequisites

  • You deployed one or more of the extensions from Tanzu Kubernetes Grid v1.1.x.
  • You have upgraded the clusters on which the extensions are running to Tanzu Kubernetes Grid v1.2.x.
  • You have installed the Carvel tools. For information about installing the Carvel tools, see Install the Carvel Tools on the Bootstrap Environment.
  • You have downloaded and unpacked the bundle of Tanzu Kubernetes Grid extensions for v1.2.x. For information about where to obtain the bundle, see Download and Unpack the Tanzu Kubernetes Grid Extensions Bundle.
  • In a terminal, navigate to the tkg-extensions-v1.2.0+vmware.1/extensions folder. Run all commands in this procedure from this location, as shown in the example below.
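
For example, you can navigate to the folder and confirm that the Carvel tools are available before you begin. This is a minimal check; the version numbers in your output will vary by installation.

    cd tkg-extensions-v1.2.0+vmware.1/extensions
    ytt version
    kapp version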

Upgrade the Contour Extension

To upgrade the Contour extension, you must extract the previous configuration from its configmap and apply it to a new deployment of the extension.

  1. Obtain the configmap for the Contour extension.

    kubectl get configmap contour -n tanzu-system-ingress -o jsonpath='{.data.contour\.yaml}' > contour-config.yaml
    

    The resulting contour-config.yaml file contains the Contour configuration, including the request-timeout and disablePermitInsecure settings.

    # should contour expect to be running inside a k8s cluster
    # incluster: true
    #
    # path to kubeconfig (if not running inside a k8s cluster)
    # kubeconfig: /path/to/.kube/config
    #
    # Client request timeout to be passed to Envoy
    # as the connection manager request_timeout.
    # Defaults to 0, which Envoy interprets as disabled.
    # Note that this is the timeout for the whole request,
    # not an idle timeout.
    request-timeout: 10s
    # disable ingressroute permitInsecure field
    disablePermitInsecure: true
    
    <output trimmed>
    
  2. If you deployed Contour on a cluster running on vSphere, obtain the current NodePort assignments.

    kubectl get svc envoy -n tanzu-system-ingress -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
    kubectl get svc envoy -n tanzu-system-ingress -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'
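
    For convenience, you can capture these values in shell variables so that they are easy to paste into the new configuration later. The variable names below are illustrative:

    ENVOY_SVC_HTTP_NODE_PORT=$(kubectl get svc envoy -n tanzu-system-ingress -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
    ENVOY_SVC_HTTPS_NODE_PORT=$(kubectl get svc envoy -n tanzu-system-ingress -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
    echo "$ENVOY_SVC_HTTP_NODE_PORT $ENVOY_SVC_HTTPS_NODE_PORT"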
    
  3. Delete the Contour namespace, cluster role binding, and cluster role.

    kubectl delete namespace tanzu-system-ingress
    kubectl delete clusterrolebinding contour
    kubectl delete clusterrole contour
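
    The namespace can take a few moments to delete. To confirm that it is gone before you redeploy Contour, you can run the following command; it returns a NotFound error after the deletion completes:

    kubectl get namespace tanzu-system-ingress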
    
  4. Make sure that the ingress and HTTP proxy resources are still present.

    kubectl get ingress -A
    kubectl get httpproxy -A
    

    Because you have deleted Contour, ingress traffic is not functioning. After you upgrade Contour, ingress traffic will resume.

  5. Configure the new Contour extension.

    Follow the procedures in Implementing Ingress Control with Contour to deploy the new version of the extension.

    When you reach the step to update contour-data-values.yaml, add the configurations from contour-config.yaml and the NodePort values that you obtained in the preceding steps.

    You add the configmap configurations in a contour.config section in contour-data-values.yaml. The example below shows how to update contour-data-values.yaml with the request-timeout and disablePermitInsecure options and the NodePort values that were present in the previous Contour configuration.

    Replace <INFRA_PROVIDER> with either vsphere or aws.

    #@data/values
    #@overlay/match-child-defaults missing_ok=True
    ---
    infrastructure_provider: "<INFRA_PROVIDER>"
    contour:
      dummykey: "dummyvalue"
      config:
        disablePermitInsecure: true
        timeouts:
          requestTimeout: 10s
    envoy:
      service:
        nodePort:
          http: <ENVOY_SVC_HTTP_NODE_PORT>
          https: <ENVOY_SVC_HTTPS_NODE_PORT>
    
  6. Generate the manifests.

    Replace <INFRA_PROVIDER> with either vsphere or aws.

     ytt --ignore-unknown-comments -f ../common -f ../ingress/contour -f ingress/contour/<INFRA_PROVIDER>/contour-data-values.yaml
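
    If you want to review the generated manifests before you deploy them, you can redirect the output to a file. The file name below is illustrative:

    ytt --ignore-unknown-comments -f ../common -f ../ingress/contour -f ingress/contour/<INFRA_PROVIDER>/contour-data-values.yaml > contour-generated.yaml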
    
  7. Deploy the new Contour extension.

    Continue to follow the procedure in Implementing Ingress Control with Contour to create the Kubernetes secret and deploy the new version of the extension.

  8. Check that the ingress and HTTP proxy resources are valid and that the ingress traffic is working.

    kubectl get ingress -A
    kubectl get httpproxy -A
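
    On vSphere, where Envoy is exposed through a NodePort service, you can also send a test request through the ingress to confirm that traffic is flowing. The example below is a sketch that assumes an ingress serving plain HTTP; the host name and node IP are placeholders for your own ingress host and a worker node address:

    curl -H "Host: <INGRESS_HOST>" http://<WORKER_NODE_IP>:<ENVOY_SVC_HTTP_NODE_PORT>/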
    

Upgrade the Fluent Bit Extension

How you upgrade the Fluent Bit extension depends on the type of output plugin that your deployment uses.

  1. Obtain the configmap for the Fluent Bit extension.

    Elasticsearch

    Run the following commands:

    kubectl get configmap fluent-bit-config -n tanzu-system-logging -o jsonpath='{.data.output-elasticsearch\.conf}' > fluent-bit-config-plugin.yaml
    
    kubectl get configmap fluent-bit-config -n tanzu-system-logging -o jsonpath='{.data.filter-record\.conf}' > fluent-bit-config-filter.yaml
    

    The output-elasticsearch.conf file includes the Host and Port values.

    [OUTPUT]
     Name            es
     Match           *
     Host            elasticsearch
     Port            9200
    <output trimmed>
    

    The filter-record.conf file includes the tkg_cluster and tkg_instance values.

    [FILTER]
     Name                record_modifier
     Match               *
     Record tkg_cluster  tkg-wc-1
     Record tkg_instance tkg-mc-1
    

    Kafka

    Run the following commands:

    kubectl get configmap fluent-bit-config -n tanzu-system-logging -o jsonpath='{.data.output-kafka\.conf}' > fluent-bit-config-plugin.yaml
    
    kubectl get configmap fluent-bit-config -n tanzu-system-logging -o jsonpath='{.data.filter-record\.conf}' > fluent-bit-config-filter.yaml
    

    The output-kafka.conf file includes the Brokers and Topics values.

    [OUTPUT]
     Name           kafka
     Match          *
     Brokers        kafka-service:9092
     Topics         tkg-logs
    <output trimmed>
    

    The filter-record.conf file includes the tkg_cluster and tkg_instance values.

    [FILTER]
     Name                record_modifier
     Match               *
     Record tkg_cluster  tkg-wc-1
     Record tkg_instance tkg-mc-1
    

    Splunk

    Run the following commands:

    kubectl get configmap fluent-bit-config -n tanzu-system-logging -o jsonpath='{.data.output-splunk\.conf}' > fluent-bit-config-plugin.yaml
    
    kubectl get configmap fluent-bit-config -n tanzu-system-logging -o jsonpath='{.data.filter-record\.conf}' > fluent-bit-config-filter.yaml
    

    The output-splunk.conf file includes the Host, Port, and Splunk_Token values.

    [OUTPUT]
     Name           splunk
     Match          *
     Host           example-splunk-host
     Port           8088
     Splunk_Token   foo-bar
    <output trimmed>
    

    The filter-record.conf file includes the tkg_cluster and tkg_instance values.

    [FILTER]
     Name                record_modifier
     Match               *
     Record tkg_cluster  tkg-wc-1
     Record tkg_instance tkg-mc-1
    

    HTTP

    Run the following commands:

    kubectl get configmap fluent-bit-config -n tanzu-system-logging -o jsonpath='{.data.output-http\.conf}' > fluent-bit-config-plugin.yaml
    
    kubectl get configmap fluent-bit-config -n tanzu-system-logging -o jsonpath='{.data.filter-record\.conf}' > fluent-bit-config-filter.yaml
    

    The output-http.conf file includes the Host, Port, URI, Header, and Format values.

    [OUTPUT]
     Name              http
     Match             *
     Host              example-http-host
     Port              9200
     URI               /foo/bar
     Header            Authorization Bearer Token
     Format            json
    <output trimmed>
    

    The filter-record.conf file includes the tkg_cluster and tkg_instance values.

    [FILTER]
     Name                record_modifier
     Match               *
     Record tkg_cluster  tkg-wc-1
     Record tkg_instance tkg-mc-1
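
    Whichever output plugin you use, you can confirm that the configuration was saved before you continue, for example:

    cat fluent-bit-config-plugin.yaml fluent-bit-config-filter.yaml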
    
  2. Delete the Fluent Bit namespace, cluster role binding, and cluster role.

    kubectl delete namespace tanzu-system-logging
    
    kubectl delete clusterrolebinding fluent-bit-read
    
    kubectl delete clusterrole fluent-bit-read
    

    Because you have deleted the namespace, the Fluent Bit daemonset is also deleted and logs are not captured. After you upgrade Fluent Bit, log collection will resume.

  3. Configure the new Fluent Bit extension.

    Follow the procedures in Implementing Log Forwarding with Fluent Bit to deploy the new version of the extension.

    When you reach the step to update fluent-bit-data-values.yaml, add the configurations from the fluent-bit-config-plugin.yaml and fluent-bit-config-filter.yaml files that you obtained in the preceding step. The examples below show how to update fluent-bit-data-values.yaml with the configurations that were present in the previous Fluent Bit configuration.

    Elasticsearch

    #@data/values
    #@overlay/match-child-defaults missing_ok=True
    ---
    
    tkg:
      instance_name: "tkg-mc-1"
      cluster_name: "tkg-wc-1"
    fluent_bit:
      output_plugin: "elasticsearch"
      elasticsearch:
        host: "elasticsearch"
        port: "9200"
    

    Kafka

    #@data/values
    #@overlay/match-child-defaults missing_ok=True
    ---
    
    tkg:
      instance_name: "tkg-mc-1"
      cluster_name: "tkg-wc-1"
    fluent_bit:
      output_plugin: "kafka"
      kafka:
        broker_service_name: "kafka-service:9092"
        topic_name: "tkg-logs"
    

    Splunk

    #@data/values
    #@overlay/match-child-defaults missing_ok=True
    ---
    
    tkg:
      instance_name: "tkg-mc-1"
      cluster_name: "tkg-wc-1"
    fluent_bit:
      output_plugin: "splunk"
      splunk:
        host: "example-splunk-host"
        port: "8088"
        token: "foo-bar"
    

    HTTP

    #@data/values
    #@overlay/match-child-defaults missing_ok=True
    ---
    
    tkg:
      instance_name: "tkg-mc-1"
      cluster_name: "tkg-wc-1"
    fluent_bit:
      output_plugin: "http"
      http:
        host: "example-http-host"
        port: "9200"
        uri: "/foo/bar"
        header_key_value: "Authorization Bearer Token"
        format: "json"
    
  4. Generate the manifests.

    In the following command, replace <LOG_BACKEND> with elasticsearch, http, kafka, or splunk.

    ytt --ignore-unknown-comments -f ../common -f ../logging/fluent-bit -f logging/fluent-bit/<LOG_BACKEND>/fluent-bit-data-values.yaml
    
  5. Continue to follow the procedure in Implementing Log Forwarding with Fluent Bit to create the Kubernetes secret and deploy the new version of the extension.

  6. Check that the Fluent Bit daemonset is running and that log collection and forwarding are functioning.

    kubectl get ds -n tanzu-system-logging
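
    To confirm that logs are being collected, you can also inspect the logs of one of the Fluent Bit pods. The command below assumes that the daemonset is named fluent-bit, as in the default deployment:

    kubectl get pods -n tanzu-system-logging
    kubectl logs daemonset/fluent-bit -n tanzu-system-logging --tail=20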
    

Upgrade the Dex Extension

  1. Obtain the configmap for the Dex extension.

    kubectl get configmap dex -n tanzu-system-auth -o 'go-template={{ index .data "dex.yaml" }}' > dex-configmap.yaml
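
    Before you delete the namespace, you can review the saved configuration to confirm that it captured your connector settings. The grep pattern below is illustrative; the keys that are present depend on whether you configured an OIDC or LDAP connector:

    grep -A 5 "connectors:" dex-configmap.yaml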
    
  2. Delete the Dex namespace.

    kubectl delete namespace tanzu-system-auth
    
  3. Deploy the new Dex extension.

    Follow the procedures in Deploy Dex on Management Clusters to deploy the new version of the extension.

    When you reach the step to update dex-data-values.yaml, add the configurations from dex-configmap.yaml that you obtained in the preceding step.

Next, you must upgrade the Gangway extension.

Upgrade the Gangway Extension

  1. Obtain the configmap for the Gangway extension.

    kubectl get configmap gangway -n tanzu-system-auth -o 'go-template={{ index .data "gangway.yaml" }}' > gangway-configmap.yaml
    
  2. Delete the Gangway namespace.

    kubectl delete namespace tanzu-system-auth
    
  3. Deploy the new Gangway extension.

    Follow the procedures in Deploy Gangway on Tanzu Kubernetes Clusters to deploy the new version of the extension.

    When you reach the step to update gangway-data-values.yaml, add the configurations from gangway-configmap.yaml that you obtained in the preceding step.
