STIG and NSA/CISA Hardening

Tanzu Kubernetes Grid (TKG) releases are continuously validated against the Defense Information Systems Agency (DISA) Kubernetes Security Technical Implementation Guide (STIG) and the NSA/CISA Kubernetes Hardening Guide.

This topic explains how to further harden workload clusters deployed by standalone management clusters. The method depends on whether the cluster is class-based or plan-based, as described in Workload Cluster Types.

Hardening Class-Based Workload Clusters

To create STIG- and NSA/CISA-compliant class-based workload clusters, configure them as described in the sections below.

STIG Hardening

To harden class-based workload clusters to STIG standards, do either of the following before creating the cluster:

  • Include the variable settings below in the cluster configuration file
  • Set the variables in your local environment, for example with export (see the example after the settings below)
ETCD_EXTRA_ARGS: "auto-tls=false;peer-auto-tls=false;cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384"
KUBE_CONTROLLER_MANAGER_EXTRA_ARGS: "tls-min-version=VersionTLS12;profiling=false;tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384"
WORKER_KUBELET_EXTRA_ARGS: "streaming-connection-idle-timeout=5m;tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384;protect-kernel-defaults=true"
APISERVER_EXTRA_ARGS: "tls-min-version=VersionTLS12;tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384"
KUBE_SCHEDULER_EXTRA_ARGS: "tls-min-version=VersionTLS12;tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384"
CONTROLPLANE_KUBELET_EXTRA_ARGS: "streaming-connection-idle-timeout=5m;tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384;protect-kernel-defaults=true"
ENABLE_AUDIT_LOGGING: true
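
For example, assuming a bash-compatible shell, you could export a subset of these variables before creating the cluster. The values below mirror the settings above, and the cluster configuration file name is illustrative:

export ENABLE_AUDIT_LOGGING=true
export APISERVER_EXTRA_ARGS="tls-min-version=VersionTLS12;tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384"
tanzu cluster create --file my-cluster-config.yaml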

Hardening Results

The STIG hardening process changes the security scan results for TKG v2.1.1 cluster nodes as follows:

Before: Scan results, out-of-the-box TKG v2.1 cluster nodes (images: OOTB Control Plane, OOTB Worker).

After: Scan results, hardened TKG v2.1 cluster nodes (images: Hardened Control Plane, Hardened Worker).

STIG Results and Exceptions for Class-Based Clusters

VID Finding Title Compliant By Default? Fix Available? Exception / Explanation
V-242376 The Kubernetes Controller Manager must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination. No Yes This can be resolved by setting KUBE_CONTROLLER_MANAGER_EXTRA_ARGS
V-242377 The Kubernetes Scheduler must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination. No Yes This can be resolved by setting the KUBE_SCHEDULER_EXTRA_ARGS
V-242378 The Kubernetes API server must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination. No Yes This can be resolved by setting APISERVER_EXTRA_ARGS
V-242379 The Kubernetes etcd must use TLS to protect the confidentiality of sensitive data during electronic dissemination. No Yes This can be resolved by setting the ETCD_EXTRA_ARGS
V-242380 The Kubernetes etcd must use TLS to protect the confidentiality of sensitive data during electronic dissemination. Yes
V-242381 The Kubernetes Controller Manager must create unique service accounts for each work payload. Yes
V-242382 The Kubernetes API server must enable Node or RBAC as the authorization mode. Yes
V-242383 User-managed resources must be created in dedicated namespaces. Yes
V-242384 The Kubernetes Scheduler must have secure binding. Yes
V-242385 The Kubernetes Controller Manager must have secure binding. Yes
V-242387 The Kubernetes Kubelet must have the read-only port flag disabled. Yes
V-242388 The Kubernetes API server must have the insecure bind address not set. Yes
V-242389 The Kubernetes API server must have the secure port set. Yes
V-242390 The Kubernetes API server must have anonymous authentication disabled. No No Because RBAC authorization is enabled on the API server, it is generally considered reasonable to allow anonymous access to the API server for health checks and discovery purposes.
V-242391 The Kubernetes Kubelet must have anonymous authentication disabled. Yes
V-242392 The Kubernetes kubelet must enable explicit authorization. Yes
V-242395 Kubernetes dashboard must not be enabled. Yes
V-242396 Kubernetes Kubectl cp command must give expected access and results. Yes
V-242397 The Kubernetes kubelet static PodPath must not enable static pods. No No Kubernetes uses the static pod path to launch the apiserver, controller manager, and scheduler
V-242398 Kubernetes DynamicAuditing must not be enabled. Yes
V-242400 The Kubernetes API server must have Alpha APIs disabled. Yes
V-242401 The Kubernetes API server must have an audit policy set. No Yes This can be resolved by setting ENABLE_AUDIT_LOGGING=true
V-242402 The Kubernetes API server must have an audit log path set. No Yes This can be resolved by setting ENABLE_AUDIT_LOGGING=true
V-242403 Kubernetes API server must generate audit records that identify what type of event has occurred, identify the source of the event, contain the event results, identify any users, and identify any containers associated with the event. No No The default generated audit rules filter out much of the communication between internal components that would otherwise flood the logs.
V-242404 Kubernetes Kubelet must deny hostname override. No No This is required for Kubernetes on a cloud provider such as AWS/Azure
V-242405 The Kubernetes manifests must be owned by root. Yes
V-242406 The Kubernetes kubelet configuration file must be owned by root. Yes
V-242407 The Kubernetes kubelet configuration files must have file permissions set to 644 or more restrictive. Yes
V-242408 The Kubernetes manifests must have least privileges. Yes
V-242409 Kubernetes Controller Manager must disable profiling. No Yes This can be resolved by setting KUBE_CONTROLLER_MANAGER_EXTRA_ARGS
V-242410 The Kubernetes API server must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL). No Requires manual review
V-242411 The Kubernetes Scheduler must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL). No Requires manual review
V-242412 The Kubernetes Controllers must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL). No Requires manual review
V-242413 The Kubernetes etcd must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL). No Requires manual review
V-242414 The Kubernetes cluster must use non-privileged host ports for user pods. Yes
V-242415 Secrets in Kubernetes must not be stored as environment variables. No No SecretKeyRef shows the secret name, not the contents of the secret, when calling the "Get Pod" API call.
V-245541 Kubernetes Kubelet must not disable timeouts. No Yes This can be resolved by setting the worker node and control plane KUBELET_EXTRA_ARGS
V-242417 Kubernetes must separate user functionality. No Requires manual review
V-242418 The Kubernetes API server must use approved cipher suites. No Yes This can be resolved by setting APISERVER_EXTRA_ARGS
V-242419 Kubernetes API server must have the SSL Certificate Authority set. Yes
V-242420 Kubernetes Kubelet must have the SSL Certificate Authority set. Yes
V-242421 Kubernetes Controller Manager must have the SSL Certificate Authority set. Yes
V-242422 Kubernetes API server must have a certificate for communication. Yes
V-242423 Kubernetes etcd must enable client authentication to secure service. Yes
V-242424 Kubernetes Kubelet must enable tls-private-key-file for client authentication to secure service. No No RotateServerCertificates requires serving certificates to be manually approved. The user can manually enable this, approve the certificates, and then apply this setting to the kubelets.
V-242425 Kubernetes Kubelet must enable tls-cert-file for client authentication to secure service. No No RotateServerCertificates requires serving certificates to be manually approved. The user can manually enable this, approve the certificates, and then apply this setting to the kubelets.
V-242426 Kubernetes etcd must enable client authentication to secure service. Yes
V-242427 Kubernetes etcd must have a key file for secure communication. Yes
V-242428 Kubernetes etcd must have a certificate for communication. Yes
V-242429 Kubernetes etcd must have the SSL Certificate Authority set. Yes
V-242430 Kubernetes etcd must have a certificate for communication. Yes
V-242431 Kubernetes etcd must have a key file for secure communication. Yes
V-242432 Kubernetes etcd must have peer-cert-file set for secure communication. Yes
V-242433 Kubernetes etcd must have a peer-key-file set for secure communication. Yes
V-242434 Kubernetes Kubelet must enable kernel protection. No Yes This can be resolved by setting the worker node and control plane KUBELET_EXTRA_ARGS
V-242435 Kubernetes must prevent non-privileged users from executing privileged functions to include disabling, circumventing, or altering implemented security safeguards/countermeasures or the installation of patches and updates. Yes
V-242436 The Kubernetes API server must have the ValidatingAdmissionWebhook enabled. Yes
V-254801 Kubernetes must have a Pod Security Admission feature gate set. Yes
V-254800 Kubernetes must have a Pod Security Admission control file configured. No No OPA Gatekeeper is a viable alternative and allows for finer grained pod security.
V-242438 Kubernetes API server must configure timeouts to limit attack surface. Yes
V-245542 Kubernetes API server must disable basic authentication to protect information in transit. Yes
V-245543 Kubernetes API server must disable token authentication to protect information in transit. Yes
V-245544 Kubernetes endpoints must use approved organizational certificate and key pair to protect information in transit. Yes
V-242442 Kubernetes must remove old components after updated versions have been installed. No Requires manual review
V-242443 Kubernetes must contain the latest updates as authorized by IAVMs, CTOs, DTMs, and STIGs. Yes
V-242444 The Kubernetes component manifests must be owned by root. Yes
V-242445 The Kubernetes component etcd must be owned by etcd. No No To provision clusters, Tanzu Kubernetes Grid uses Cluster API which, in turn, uses the kubeadm tool to provision Kubernetes. kubeadm makes etcd run containerized as a static pod, therefore the directory does not need to be set to a particular user.
V-242446 The Kubernetes conf files must be owned by root. Yes
V-242447 The Kubernetes Kube Proxy must have file permissions set to 644 or more restrictive. No No Kube Proxy runs in a pod so this file does not exist on the host. Within the pod it is linked to another directory that has correct permissions
V-242448 The Kubernetes Kube Proxy must be owned by root. Yes
V-242449 The Kubernetes Kubelet certificate authority file must have file permissions set to 644 or more restrictive. Yes
V-242450 The Kubernetes Kubelet certificate authority must be owned by root. Yes
V-242451 The Kubernetes component PKI must be owned by root. Yes
V-242452 The Kubernetes kubelet config must have file permissions set to 644 or more restrictive. Yes
V-242453 The Kubernetes kubelet config must be owned by root. Yes
V-242454 The Kubernetes kubeadm.conf must be owned by root. Yes
V-242455 The Kubernetes kubeadm.conf must have file permissions set to 644 or more restrictive. Yes
V-242456 The Kubernetes kubelet config must have file permissions set to 644 or more restrictive. Yes
V-242457 The Kubernetes kubelet config must be owned by root. Yes
V-242458 The Kubernetes API server must have file permissions set to 644 or more restrictive. Yes
V-242459 The Kubernetes etcd must have file permissions set to 644 or more restrictive. Yes
V-242460 The Kubernetes admin.conf must have file permissions set to 644 or more restrictive. Yes
V-242461 Kubernetes API server audit logs must be enabled. No Yes This can be resolved by setting ENABLE_AUDIT_LOGGING=true
V-242462 The Kubernetes API server must be set to audit log max size. No Yes This can be resolved by setting ENABLE_AUDIT_LOGGING=true
V-242463 The Kubernetes API server must be set to audit log maximum backup. No Yes This can be resolved by setting ENABLE_AUDIT_LOGGING=true
V-242464 The Kubernetes API server audit log retention must be set. No Yes This can be resolved by setting ENABLE_AUDIT_LOGGING=true
V-242465 The Kubernetes API server audit log path must be set. No Yes This can be resolved by setting ENABLE_AUDIT_LOGGING=true
V-242466 The Kubernetes PKI CRT must have file permissions set to 644 or more restrictive. Yes
V-242467 The Kubernetes PKI keys must have file permissions set to 600 or more restrictive. Yes
V-242468 The Kubernetes API server must prohibit communication using TLS version 1.0 and 1.1, and SSL 2.0 and 3.0. No Yes This can be resolved by setting APISERVER_EXTRA_ARGS

NSA/CISA Hardening

To harden class-based workload clusters to NSA/CISA standards, do the following before creating the cluster:

  1. Review the Event Rate limit configuration below. If you want to change any settings, save the code to event-rate-config.yaml and change the settings as desired:

    apiVersion: eventratelimit.admission.k8s.io/v1alpha1
    kind: Configuration
    limits:
    - type: Namespace
      qps: 50
      burst: 100
      cacheSize: 2000
    - type: User
      qps: 10
      burst: 50
    
  2. If you created event-rate-config.yaml with custom settings, base64-encode the file by running the following and recording the output string:

    • Linux: base64 -w 0 event-rate-config.yaml
    • Mac: base64 -b 0 event-rate-config.yaml
  3. Do either of the following before creating the cluster:

    • Include the variable settings below in the cluster configuration file
    • Set the variables in your local environment, for example with export
    ETCD_EXTRA_ARGS: "auto-tls=false;peer-auto-tls=false;cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    KUBE_CONTROLLER_MANAGER_EXTRA_ARGS: "profiling=false;terminated-pod-gc-threshold=500;tls-min-version=VersionTLS12;tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    WORKER_KUBELET_EXTRA_ARGS: "read-only-port=0;authorization-mode=Webhook;client-ca-file=/etc/kubernetes/pki/ca.crt;event-qps=0;make-iptables-util-chains=true;streaming-connection-idle-timeout=5m;tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256;protect-kernel-defaults=true"
    APISERVER_EXTRA_ARGS: "enable-admission-plugins=AlwaysPullImages,NodeRestriction;profiling=false;service-account-lookup=true;tls-min-version=VersionTLS12;tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    KUBE_SCHEDULER_EXTRA_ARGS: "profiling=false;tls-min-version=VersionTLS12;tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    CONTROLPLANE_KUBELET_EXTRA_ARGS: "read-only-port=0;authorization-mode=Webhook;client-ca-file=/etc/kubernetes/pki/ca.crt;event-qps=0;make-iptables-util-chains=true;streaming-connection-idle-timeout=5m;tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256;protect-kernel-defaults=true"
    APISERVER_EVENT_RATE_LIMIT_CONF_BASE64: "<EVENT-RATE-CONFIG>"
    ENABLE_AUDIT_LOGGING: true
    

    Where <EVENT-RATE-CONFIG> is the base64-encoded value from the previous step, or the following default value if you did not change the Event Rate limit configuration:

    • YXBpVmVyc2lvbjogZXZlbnRyYXRlbGltaXQuYWRtaXNzaW9uLms4cy5pby92MWFscGhhMQpraW5kOiBDb25maWd1cmF0aW9uCmxpbWl0czoKLSB0eXBlOiBOYW1lc3BhY2UKICBxcHM6IDUwCiAgYnVyc3Q6IDEwMAogIGNhY2hlU2l6ZTogMjAwMAotIHR5cGU6IFVzZXIKICBxcHM6IDEwCiAgYnVyc3Q6IDUwCg==
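
    To verify that an encoded value matches the configuration you intend to apply, you can decode it and compare the output with event-rate-config.yaml. A minimal check, assuming you saved the encoded string to a file named event-rate-config.b64 (the file name is illustrative):

    • Linux: base64 -d event-rate-config.b64
    • Mac: base64 -D event-rate-config.b64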

Hardening Results

The NSA/CISA hardening process changes the security scan results for TKG v2.1.0 cluster nodes as follows:

Before: Scan results, out-of-the-box TKG v2.1 cluster nodes (image: OOTB).

After: Scan results, hardened TKG v2.1 cluster nodes (image: Hardened).

Hardening Plan-Based Workload Clusters

Legacy, non-class-based TKG workload clusters deployed by standalone management clusters can be hardened by using ytt overlays. For information on how to customize plan-based TKG clusters using ytt, see Legacy Cluster Configuration with ytt.

You can create legacy clusters by setting allow-legacy-cluster to true in your CLI configuration, as described in the Tanzu CLI Reference.
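
For example, this feature can typically be set with a command of the following form; check the Tanzu CLI Reference for the exact feature flag path in your CLI version:

tanzu config set features.cluster.allow-legacy-cluster true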

STIG Hardening

To further harden plan-based TKG clusters, VMware provides a STIG hardening ytt overlay.

The following snippet is a ytt overlay that sets tls-min-version (STIG: V-242378) on the API server.

#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match missing_ok=True,by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          #@overlay/match missing_ok=True
          tls-min-version: VersionTLS12
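
The same overlay pattern extends to other findings that require control plane extra arguments. The following fragment is an illustrative sketch, not the full VMware-provided STIG overlay; it also disables Controller Manager profiling (V-242409) and sets its minimum TLS version (V-242376):

#@ load("@ytt:overlay", "overlay")
#@overlay/match missing_ok=True,by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      controllerManager:
        extraArgs:
          #@overlay/match missing_ok=True
          profiling: "false"
          #@overlay/match missing_ok=True
          tls-min-version: VersionTLS12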

NSA/CISA Hardening

To further harden plan-based TKG clusters to NSA/CISA standards, VMware provides the following Antrea and OPA Gatekeeper object specifications:

  • NSA/CISA hardening: Antrea ClusterNetworkPolicies:

    The following Antrea ClusterNetworkPolicy specification sets a default policy for all Pods that denies all ingress and egress traffic and ensures that any unselected Pods are isolated.

    apiVersion: security.antrea.tanzu.vmware.com/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: default-deny
    spec:
      priority: 150
      tier: baseline
      appliedTo:
        - namespaceSelector: {}
      ingress:
        - action: Drop            # For all Pods in every namespace, drop and log all ingress traffic from anywhere
          name: drop-all-ingress
          enableLogging: true
      egress:
        - action: Drop            # For all Pods in every namespace, drop and log all egress traffic towards anywhere
          name: drop-all-egress
          enableLogging: true
    
    
  • NSA/CISA hardening: Antrea network policy:

    The following Antrea network policy allows tanzu-capabilities-manager egress to kube-apiserver ports 443 and 6443.

    apiVersion: security.antrea.tanzu.vmware.com/v1alpha1
    kind: NetworkPolicy
    metadata:
      name: tanzu-cm-apiserver
      namespace: tkg-system
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - podSelector:
            matchLabels:
              app: tanzu-capabilities-manager
      egress:
        - action: Allow
          to:
            - podSelector:
                matchLabels:
                  component: kube-apiserver
              namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: kube-system
          ports:
            - port: 443
              protocol: TCP
            - port: 6443
              protocol: TCP
          name: AllowToKubeAPI
    
    
  • NSA/CISA hardening: OPA template and constraints:

    The following example uses OPA Gatekeeper to restrict the allowed image repositories.

    • OPA template:

      apiVersion: templates.gatekeeper.sh/v1beta1
      kind: ConstraintTemplate
      metadata:
        name: k8sallowedrepos
        annotations:
          description: Requires container images to begin with a repo string from a specified list.
      spec:
        crd:
          spec:
            names:
              kind: K8sAllowedRepos
            validation:
              # Schema for the `parameters` field
              openAPIV3Schema:
                type: object
                properties:
                  repos:
                    type: array
                    items:
                      type: string
        targets:
          - target: admission.k8s.gatekeeper.sh
            rego: |
              package k8sallowedrepos

              violation[{"msg": msg}] {
                container := input.review.object.spec.containers[_]
                satisfied := [good | repo = input.parameters.repos[_]; good = startswith(container.image, repo)]
                not any(satisfied)
                msg := sprintf("container <%v> has an invalid image repo <%v>, allowed repos are %v", [container.name, container.image, input.parameters.repos])
              }

              violation[{"msg": msg}] {
                container := input.review.object.spec.initContainers[_]
                satisfied := [good | repo = input.parameters.repos[_]; good = startswith(container.image, repo)]
                not any(satisfied)
                msg := sprintf("container <%v> has an invalid image repo <%v>, allowed repos are %v", [container.name, container.image, input.parameters.repos])
              }
      
    • OPA constraints:

      apiVersion: constraints.gatekeeper.sh/v1beta1
      kind: K8sAllowedRepos
      metadata:
        name: repo-is-openpolicyagent
      spec:
        match:
          kinds:
            - apiGroups: [""]
              kinds: ["Pod"]
        parameters:
          repos:
            - "<ALLOWED_IMAGE_REPO>"
      
  • NSA/CISA hardening: OPA mutations:

    The following example uses OPA mutation to set allowPrivilegeEscalation to false if it is missing in the pod spec.

    apiVersion: mutations.gatekeeper.sh/v1alpha1
    kind: Assign
    metadata:
      name: allow-privilege-escalation
    spec:
      match:
        scope: Namespaced
        kinds:
          - apiGroups: ["*"]
            kinds: ["Pod"]
        excludedNamespaces:
        - kube-system
      applyTo:
      - groups: [""]
        kinds: ["Pod"]
        versions: ["v1"]
      location: "spec.containers[name:*].securityContext.allowPrivilegeEscalation"
      parameters:
        pathTests:
        - subPath: "spec.containers[name:*].securityContext.allowPrivilegeEscalation"
          condition: MustNotExist
        assign:
          value: false
    
    

Antrea CNI is used in this guide for network hardening as it provides fine-grained control of network policies using tiers and the ability to apply a cluster-wide security policy using ClusterNetworkPolicy.

Open Policy Agent (OPA) is used instead of pod security policies as they were deprecated in Kubernetes 1.21.

The ytt overlays, network policies, and OPA policies are designed so that cluster admins can easily opt specific workloads out of hardening controls. Rather than opting out of the hardening practices entirely, we suggest isolating such workloads in namespaces where these controls are not applied. How far to opt out also depends on the risk appetite of the TKG deployment.
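
As a usage sketch, after saving the specifications above to individual files, you could apply them to a workload cluster with kubectl. This assumes your kubectl context points at the workload cluster and that the Antrea and OPA Gatekeeper CRDs are installed; the file names are illustrative:

kubectl apply -f default-deny-cnp.yaml
kubectl apply -f tanzu-cm-apiserver-np.yaml
kubectl apply -f k8sallowedrepos-template.yaml
kubectl apply -f repo-is-openpolicyagent-constraint.yaml
kubectl apply -f allow-privilege-escalation-mutation.yaml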

STIG Results and Exceptions for Plan-Based Cluster Control Plane

VID Finding Title Compliant by default? Can be Resolved? Explanation/Exception
V-242376 The Kubernetes Controller Manager must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination. No Yes This can be resolved with a ytt overlay
V-242377 The Kubernetes Scheduler must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination. No Yes This can be resolved with a ytt overlay
V-242378 The Kubernetes API server must use TLS 1.2, at a minimum, to protect the confidentiality of sensitive data during electronic dissemination. No Yes This can be resolved with a ytt overlay
V-242379 The Kubernetes etcd must use TLS to protect the confidentiality of sensitive data during electronic dissemination. No Yes This can be resolved with a ytt overlay
V-242380 The Kubernetes etcd must use TLS to protect the confidentiality of sensitive data during electronic dissemination. Yes
V-242381 The Kubernetes Controller Manager must create unique service accounts for each work payload. Yes
V-242382 The Kubernetes API server must enable Node,RBAC as the authorization mode. Yes
V-242383 User-managed resources must be created in dedicated namespaces. Yes
V-242384 The Kubernetes Scheduler must have secure binding. Yes
V-242385 The Kubernetes Controller Manager must have secure binding. Yes
V-242386 The Kubernetes API server must have the insecure port flag disabled. No No Exception --insecure-port flag has been removed in Kubernetes v1.24+
V-242387 The Kubernetes Kubelet must have the read-only port flag disabled. Yes
V-242388 The Kubernetes API server must have the insecure bind address not set. Yes
V-242389 The Kubernetes API server must have the secure port set. Yes
V-242390 The Kubernetes API server must have anonymous authentication disabled. No No Exception Because RBAC authorization is enabled on the API server (V-242382), it is generally considered reasonable to allow anonymous access to the API server for health checks and discovery purposes.
V-242391 The Kubernetes Kubelet must have anonymous authentication disabled. Yes
V-242392 The Kubernetes kubelet must enable explicit authorization. Yes
V-242393 Kubernetes Worker Nodes must not have sshd service running. Yes
V-242394 Kubernetes Worker Nodes must not have the sshd service enabled. No No Exception ssh is restricted to only the bastion server and is needed to enable serving certificates and install monitoring tools. Also this is not a worker node
V-242395 Kubernetes dashboard must not be enabled. Yes
V-242396 Kubernetes Kubectl cp command must give expected access and results. Yes
V-242397 The Kubernetes kubelet static PodPath must not enable static pods. No No Exception TKG utilizes the staticPodPath to launch numerous components, so it cannot be disabled
V-242398 Kubernetes DynamicAuditing must not be enabled. Yes
V-242399 Kubernetes DynamicKubeletConfig must not be enabled. Yes
V-242400 The Kubernetes API server must have Alpha APIs disabled. Yes
V-242401 The Kubernetes API server must have an audit policy set. Yes
V-242402 The Kubernetes API server must have an audit log path set. Yes
V-242403 Kubernetes API server must generate audit records that identify what type of event has occurred, identify the source of the event, contain the event results, identify any users, and identify any containers associated with the event. Yes
V-242404 Kubernetes Kubelet must deny hostname override. No No Exception This is needed for public cloud Kubernetes clusters.
V-242405 The Kubernetes manifests must be owned by root. Yes
V-242406 The Kubernetes kubelet configuration file must be owned by root. Yes
V-242407 The Kubernetes kubelet configuration files must have file permissions set to 644 or more restrictive. Yes
V-242408 The Kubernetes manifests must have least privileges. Yes
V-242409 Kubernetes Controller Manager must disable profiling. No Yes This can be resolved with a ytt overlay
V-242410 The Kubernetes API server must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL). No No Exception Manual Review - Handled by PPSM monitoring solution
V-242411 The Kubernetes Scheduler must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL). No No Exception Manual Review - Handled by PPSM monitoring solution
V-242412 The Kubernetes Controllers must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL). No No Exception Manual Review - Handled by PPSM monitoring solution
V-242413 The Kubernetes etcd must enforce ports, protocols, and services (PPS) that adhere to the Ports, Protocols, and Services Management Category Assurance List (PPSM CAL). No No Exception Manual Review - Handled by PPSM monitoring solution
V-242414 The Kubernetes cluster must use non-privileged host ports for user pods. No No Exception Manual Review - Handled by PPSM monitoring solution
V-242415 Secrets in Kubernetes must not be stored as environment variables. Yes
V-242416 Kubernetes Kubelet must not disable timeouts. No Yes This can be resolved with a ytt overlay
V-242417 Kubernetes must separate user functionality. No No Exception Manual Review
V-242418 The Kubernetes API server must use approved cipher suites. Yes
V-242419 Kubernetes API server must have the SSL Certificate Authority set. Yes
V-242420 Kubernetes Kubelet must have the SSL Certificate Authority set. Yes
V-242421 Kubernetes Controller Manager must have the SSL Certificate Authority set. Yes
V-242422 Kubernetes API server must have a certificate for communication. Yes
V-242423 Kubernetes etcd must enable client authentication to secure service. Yes
V-242424 Kubernetes Kubelet must enable tls-private-key-file for client authentication to secure service. No Yes This can be enabled manually: add the RotateServerCertificates feature gate for the controller manager and kubelet to the overlay, along with the client-ca-file defined in the kubelet and the kubelet-certificate-authority defined in the API server. Once the cluster starts with these enabled, manually approve the serving certificates (kubelet-serving certificates are not auto-approved), then modify the kubelet configs so that tls-private-key-file and tls-cert-file both point at the newly created kubelet-server-current.pem file, and restart the kubelet. See the approval sketch after this table.
V-242425 Kubernetes Kubelet must enable tls-cert-file for client authentication to secure service. No Yes Same procedure as V-242424 above.
V-242426 Kubernetes etcd must enable client authentication to secure service. Yes
V-242427 Kubernetes etcd must have a key file for secure communication. Yes
V-242428 Kubernetes etcd must have a certificate for communication. Yes
V-242429 Kubernetes etcd must have the SSL Certificate Authority set. Yes
V-242430 Kubernetes etcd must have a certificate for communication. Yes
V-242431 Kubernetes etcd must have a key file for secure communication. Yes
V-242432 Kubernetes etcd must have peer-cert-file set for secure communication. Yes
V-242433 Kubernetes etcd must have a peer-key-file set for secure communication. Yes
V-242434 Kubernetes Kubelet must enable kernel protection. No Yes This can be resolved after creating a custom AMI or setting up the hosts properly ahead of time, and then enabling it via a ytt overlay
V-242435 Kubernetes must prevent non-privileged users from executing privileged functions to include disabling, circumventing, or altering implemented security safeguards/countermeasures or the installation of patches and updates. Yes
V-242436 The Kubernetes API server must have the ValidatingAdmissionWebhook enabled. Yes
V-242437 Kubernetes must have a pod security policy set. No No Exception OPA Gatekeeper is the recommended solution for Pod Security after the deprecation of Pod Security Policies
V-242438 Kubernetes API server must configure timeouts to limit attack surface. Yes
V-242439 Kubernetes API server must disable basic authentication to protect information in transit. Yes
V-242440 Kubernetes API server must disable token authentication to protect information in transit. Yes
V-242441 Kubernetes endpoints must use approved organizational certificate and key pair to protect information in transit. Yes
V-242442 Kubernetes must remove old components after updated versions have been installed. No No Exception Manual Review
V-242443 Kubernetes must contain the latest updates as authorized by IAVMs, CTOs, DTMs, and STIGs. Yes
V-242444 The Kubernetes component manifests must be owned by root. Yes
V-242445 The Kubernetes component etcd must be owned by etcd. No No Exception The data directory (/var/lib/etcd) is owned by root:root. To provision clusters, Tanzu Kubernetes Grid uses Cluster API which, in turn, uses the kubeadm tool to provision Kubernetes. kubeadm makes etcd run containerized as a static pod, therefore the directory does not need to be set to a particular user. kubeadm configures the directory to not be readable by non-root users.
V-242446 The Kubernetes conf files must be owned by root. Yes
V-242447 The Kubernetes Kube Proxy must have file permissions set to 644 or more restrictive. No No Exception Kubeconfig for kube-proxy is a symlink. The base file is 0644 or less permissive. Manual Review
V-242448 The Kubernetes Kube Proxy must be owned by root. Yes
V-242449 The Kubernetes Kubelet certificate authority file must have file permissions set to 644 or more restrictive. Yes
V-242450 The Kubernetes Kubelet certificate authority must be owned by root. Yes
V-242451 The Kubernetes component PKI must be owned by root. Yes
V-242452 The Kubernetes kubelet config must have file permissions set to 644 or more restrictive. Yes
V-242453 The Kubernetes kubelet config must be owned by root. Yes
V-242454 The Kubernetes kubeadm must be owned by root. Yes
V-242455 The Kubernetes kubelet service must have file permissions set to 644 or more restrictive. Yes
V-242456 The Kubernetes kubelet config must have file permissions set to 644 or more restrictive. Yes
V-242457 The Kubernetes kubelet config must be owned by root. Yes
V-242458 The Kubernetes API server must have file permissions set to 644 or more restrictive. Yes
V-242459 The Kubernetes etcd must have file permissions set to 644 or more restrictive. Yes
V-242460 The Kubernetes admin.conf must have file permissions set to 644 or more restrictive. Yes
V-242461 Kubernetes API server audit logs must be enabled. Yes
V-242462 The Kubernetes API server must be set to audit log max size. Yes
V-242463 The Kubernetes API server must be set to audit log maximum backup. Yes
V-242464 The Kubernetes API server audit log retention must be set. Yes
V-242465 The Kubernetes API server audit log path must be set. Yes
V-242466 The Kubernetes PKI CRT must have file permissions set to 644 or more restrictive. Yes
V-242467 The Kubernetes PKI keys must have file permissions set to 600 or more restrictive. Yes
V-242468 The Kubernetes API server must prohibit communication using TLS version 1.0 and 1.1, and SSL 2.0 and 3.0. No Yes This can be resolved with a ytt overlay
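
As a sketch of the manual approval step referenced for V-242424 and V-242425, once the RotateServerCertificates feature gate is enabled you can list pending kubelet serving certificate signing requests and approve them with kubectl (the CSR name is a placeholder):

kubectl get csr
kubectl certificate approve <CSR-NAME>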

STIG Results and Exceptions for Plan-Based Cluster Workers

VID Finding Title Compliant by default? Can be Resolved? Explanation/Exception
V-242387 The Kubernetes Kubelet must have the read-only port flag disabled. Yes
V-242391 The Kubernetes Kubelet must have anonymous authentication disabled. Yes
V-242392 The Kubernetes kubelet must enable explicit authorization. Yes
V-242393 Kubernetes Worker Nodes must not have sshd service running. Yes
V-242394 Kubernetes Worker Nodes must not have the sshd service enabled. No No Exception ssh is restricted to only the bastion server and is needed to enable serving certificates and install monitoring tools
V-242396 Kubernetes Kubectl cp command must give expected access and results. Yes
V-242397 The Kubernetes kubelet static PodPath must not enable static pods. No No Exception staticPodPath is needed for TKG to install properly, as several of the pods in the tkg-system namespace are defined there
V-242398 Kubernetes DynamicAuditing must not be enabled. Yes
V-242399 Kubernetes DynamicKubeletConfig must not be enabled. Yes
V-242400 The Kubernetes API server must have Alpha APIs disabled. Yes
V-242404 Kubernetes Kubelet must deny hostname override. No No Exception hostname-override is needed for public cloud deployments of Kubernetes
V-242406 The Kubernetes kubelet configuration file must be owned by root. Yes
V-242407 The Kubernetes kubelet configuration files must have file permissions set to 644 or more restrictive. Yes
V-242416 Kubernetes Kubelet must not disable timeouts. No Yes This can be resolved with a ytt overlay
V-242420 Kubernetes Kubelet must have the SSL Certificate Authority set. Yes
V-242425 Kubernetes Kubelet must enable tls-cert-file for client authentication to secure service. No Yes This can be enabled manually: add the RotateServerCertificates feature gate for the controller manager and kubelet to the overlay, along with the client-ca-file defined in the kubelet and the kubelet-certificate-authority defined in the API server. Once the cluster starts with these enabled, manually approve the serving certificates (kubelet-serving certificates are not auto-approved), then modify the kubelet configs so that tls-private-key-file and tls-cert-file both point at the newly created kubelet-server-current.pem file, and restart the kubelet.
V-242434 Kubernetes Kubelet must enable kernel protection. No Yes This can be resolved after creating a custom AMI or setting up the hosts properly ahead of time, and then enabling it via a ytt overlay
V-242449 The Kubernetes Kubelet certificate authority file must have file permissions set to 644 or more restrictive. Yes
V-242450 The Kubernetes Kubelet certificate authority must be owned by root. Yes
V-242451 The Kubernetes component PKI must be owned by root. Yes
V-242452 The Kubernetes kubelet config must have file permissions set to 644 or more restrictive. Yes
V-242453 The Kubernetes kubelet config must be owned by root. Yes
V-242454 The Kubernetes kubeadm must be owned by root. Yes
V-242455 The Kubernetes kubelet service must have file permissions set to 644 or more restrictive. Yes
V-242456 The Kubernetes kubelet config must have file permissions set to 644 or more restrictive. Yes
V-242457 The Kubernetes kubelet config must be owned by root. Yes
V-242466 The Kubernetes PKI CRT must have file permissions set to 644 or more restrictive. Yes

NSA/CISA Kubernetes Hardening Guidance

Title Compliant By Default? Can be resolved? Explanation/Exception
Allow privilege escalation No Yes Resolved with OPA Gatekeeper Policy as well as mutations
Non-root containers No Yes Resolved with OPA Gatekeeper Policy as well as mutations.
Exception Some pods such as contour/envoy need root in order to function. Tanzu System Ingress needs to interact with the network
Automatic mapping of service account No Yes Resolved with OPA Gatekeeper Mutation.
Exception Gatekeeper needs access to the API server so its service accounts are automounted
Applications credentials in configuration files No No Exception All of the detected credentials in config files were false positives as they were public keys
Linux hardening No Yes Resolved with OPA gatekeeper constraint as well as mutation to drop all capabilities
Exception Some pods such as contour/envoy need advanced privileges in order to function. Tanzu System Ingress needs to interact with the network
Seccomp Enabled No Yes Resolved with OPA gatekeeper mutation to set a seccomp profile for all pods
Host PID/IPC privileges No Yes A gatekeeper constraint has been added to prohibit all pods from running with host PID/IPC access
Dangerous capabilities No Yes A gatekeeper constraint has been added to prohibit dangerous capabilities and a mutation has been added to set a default.
Exception Some pods such as contour/envoy need advanced privileges in order to function. Tanzu System Ingress needs to interact with the network
Exec into container No No Kubernetes ships with accounts that have exec access to pods, and this is likely needed by admins. A customer-facing solution is advised, such as removing exec in RBAC for normal end users
Allowed hostPath No Yes A gatekeeper constraint has been added to prevent the host path from being mounted
hostNetwork access No Yes A gatekeeper constraint has been added to prevent the host network from being used.
Exception The Kapp controller needs access to the host for tanzu to function and is the only pod outside the control plane allowed host network access
Exposed dashboard Yes
Cluster-admin binding No No A cluster admin binding is needed for k8s to start and should be the only one in the cluster
Resource policies No Yes Fixed by setting a default for all pods via a gatekeeper mutation
Control plane hardening Yes
Insecure capabilities No Yes A gatekeeper constraint has been added to prohibit dangerous capabilities and a mutation has been added to set a default.
Exception Some pods such as contour/envoy need advanced privileges in order to function. Tanzu System Ingress needs to interact with the network
Immutable container filesystem No Yes A gatekeeper constraint has been added to prevent readOnlyRootFilesystems from being disabled.
Exception Pods created by contour/envoy, fluentd, the kapp controller, telemetry agents, and all other data services that need to run on k8s
Caution This mutation can cause issues within the cluster and may not be the wisest to implement.
Privileged container No Yes By default all pods have privileged set to false but a constraint has been added to enforce that a user does not enable it.
Ingress and Egress blocked No Yes A default deny cluster network policy can be implemented in Antrea
Container hostPort No Yes A gatekeeper constraint has been added to ensure users do not use hostPorts.
Exception The Kapp controller needs access to the host for tanzu to function and is the only pod outside the control plane allowed host network access
Network policies No Yes A suite of network policies can be installed to ensure all namespaces have a network policy
Fluent Bit Forwarding to SIEM No Yes Fluent Bit needs to be installed and pointed at a valid output location
Fluent Bit Retry Enabled No Yes Fluent bit needs to be installed by the user with retries enabled in the config
IAAS Metadata endpoint blocked No Yes A Cluster Network Policy can be implemented to restrict all pods from hitting the endpoint
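
As an illustrative sketch for the last item above (not a VMware-provided specification), an Antrea ClusterNetworkPolicy modeled on the default-deny policy earlier in this topic can drop egress from all Pods to the IaaS metadata endpoint at 169.254.169.254:

apiVersion: security.antrea.tanzu.vmware.com/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: drop-cloud-metadata
spec:
  priority: 10
  tier: securityops
  appliedTo:
    - namespaceSelector: {}
  egress:
    - action: Drop              # Drop and log all egress to the cloud metadata endpoint
      name: drop-metadata-egress
      enableLogging: true
      to:
        - ipBlock:
            cidr: 169.254.169.254/32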