Configuration File to Cluster Class Variable Conversion

This topic describes how variables in a workload cluster configuration file translate into variable settings in a class-based Cluster object, its ClusterBootstrap, and other helper objects in Tanzu Kubernetes Grid (TKG). For an example of a Cluster object and its helper objects, see Example Cluster Object and Its Helper Objects below.

Overview: Configuration File and Cluster Class Variables

Configuration file variables and class-based cluster variables differ:

  • Configuration file variables are flat, uppercase KEY: value settings, as listed in the Configuration File Variables columns in the tables below.

  • Class-based cluster variables set nested YAML settings in the specs of the Cluster and ClusterBootstrap objects:

    • Cluster object settings include:

      • Most of the available cluster settings, which can be changed at any time.
      • The ClusterClass object definition, set as the cluster's spec.topology.class value.
      • Settings in the controlPlane, workers, and variables blocks under spec.topology.
      • Example setting:

        variables:
        - name: imageRepository
          value:
            host: stg-project.vmware.com
        
    • ClusterBootstrap settings are one-time settings for container networking and other low-level infrastructure that cannot be changed in an existing cluster.

The Tanzu CLI performs this conversion when you run tanzu cluster create with a cluster configuration file, as described in Creating Class-Based Clusters.
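The shape of this conversion can be sketched in a few lines of Python. The mapping below follows the tables in this topic, but the helper itself is purely illustrative; it is not the Tanzu CLI's actual implementation, and it covers only a handful of variables:

```python
# Illustrative sketch of the configuration-file-to-Cluster-object
# conversion. The key-to-field mapping follows the tables in this topic;
# this function is NOT the Tanzu CLI's actual implementation.
def to_topology(config: dict) -> dict:
    topology = {"controlPlane": {}, "workers": {}, "variables": []}
    if "CONTROL_PLANE_MACHINE_COUNT" in config:
        # Becomes spec.topology.controlPlane.replicas
        topology["controlPlane"]["replicas"] = int(config["CONTROL_PLANE_MACHINE_COUNT"])
    if "WORKER_MACHINE_COUNT" in config:
        # Becomes spec.topology.workers.machineDeployments[0].replicas
        topology["workers"] = {"machineDeployments": [
            {"class": "tkg-worker", "replicas": int(config["WORKER_MACHINE_COUNT"])}
        ]}
    if "CNI" in config:
        # Becomes a name/value pair under spec.topology.variables
        topology["variables"].append({"name": "cni", "value": config["CNI"]})
    return topology

topology = to_topology({
    "CONTROL_PLANE_MACHINE_COUNT": "3",
    "WORKER_MACHINE_COUNT": "6",
    "CNI": "antrea",
})
```

The resulting dictionary mirrors the spec.topology block of the generated Cluster object.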

Node Configuration

The following table lists the variables for configuring control plane and worker nodes and the operating system that node instances run. For information about the variables in the Configuration File Variables column, see Node Configuration in the Configuration File Variable Reference.

In the Class-Based Cluster Object Structure column, all name / value pair settings are in the Cluster object definition under spec.topology.variables. The example code in this column is for Cluster objects on vSphere; object structures on AWS and Azure may differ.

Configuration File Variables | Class-Based Cluster Object Structure
OS_NAME: ubuntu
OS_VERSION: 20.04
OS_ARCH: amd64
metadata: 
  annotations: 
    osInfo: ubuntu,20.04,amd64
    
CONTROL_PLANE_MACHINE_COUNT: 3
    
controlPlane: 
  ...
  replicas: 3
    
CONTROLPLANE_SIZE: large
SIZE: large
The values convert to infrastructure-specific machine settings.
SIZE applies to both control plane and worker nodes.
- name: controlPlane
  value: 
    machine: 
      diskGiB: 40
      memoryMiB: 16384
      numCPUs: 4
    
CONTROL_PLANE_NODE_SEARCH_DOMAINS: corp.local, example.com
    
- name: controlPlane
  value: 
    ...
    network: 
      ...
      searchDomains: 
      - corp.local
      - example.com
    
CONTROL_PLANE_NODE_LABELS: 'key1=value1,key2=value2'
    
- name: controlPlane
  value: 
    ...
    nodeLabels: 
    - key: key1
      value: value1
    - key: key2
      value: value2
    
WORKER_MACHINE_COUNT: 6
    
workers: 
  machineDeployments: 
  - class: tkg-worker
    ...
    replicas: 6
    
WORKER_SIZE: extra-large
SIZE: extra-large
The values convert to infrastructure-specific machine settings.
SIZE applies to both control plane and worker nodes.
- name: worker
  value: 
    ...
    machine: 
      diskGiB: 80
      memoryMiB: 32768
      numCPUs: 8
    
WORKER_NODE_SEARCH_DOMAINS: corp.local, example.com
- name: worker
  value: 
    ...
    network: 
      searchDomains: 
      - corp.local
      - example.com
    
CUSTOM_TDNF_REPOSITORY_CERTIFICATE: "YPdeNjLW[...]"
- name: customTDNFRepository
  value: 
    certificate: YPdeNjLW[...]
    
WORKER_ROLLOUT_STRATEGY: RollingUpdate
  workers: 
    machineDeployments: 
    - class: tkg-worker
      failureDomain: "1"
      metadata: 
        annotations: 
          run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu,os-version=2004
      name: md-0
      replicas: 1
      strategy: 
        type: RollingUpdate
    

Pod Security Admission (PSA) Controller

The following table lists the variables for configuring Pod Security Standards for the Pod Security Admission controller. For information about the variables in the Configuration File Variables column, see Pod Security Standards for the Pod Security Admission Controller in the Configuration File Variable Reference.

In the Class-Based Cluster Object Structure column, all name / value pair settings are in the Cluster object definition under spec.topology.variables.

Configuration File Variables | Class-Based Cluster Object Structure
POD_SECURITY_STANDARD_DEACTIVATED: false
POD_SECURITY_STANDARD_AUDIT: privileged
POD_SECURITY_STANDARD_WARN: privileged
POD_SECURITY_STANDARD_ENFORCE: baseline
    
- name: podSecurityStandard 
  value:  
    deactivated: false 
    audit: "privileged" 
    warn: "privileged" 
    enforce: "baseline" 
    auditVersion: "v1.26" 
    enforceVersion: "v1.26" 
    warnVersion: "v1.26" 
    exemptions:  
      namespaces: ["kube-system", "tkg-system"] 
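The mapping from the POD_SECURITY_STANDARD_* configuration file variables to the podSecurityStandard cluster variable can be sketched as follows. This helper is illustrative only, not the Tanzu CLI's actual implementation, and the restricted defaults are an assumption taken from the example spec at the end of this topic:

```python
# Illustrative mapping from POD_SECURITY_STANDARD_* configuration file
# variables to the podSecurityStandard cluster variable; NOT the Tanzu
# CLI's actual implementation. The "restricted" defaults are an
# assumption based on the example generated spec in this topic.
def pod_security_variable(config: dict) -> dict:
    value = {
        "deactivated": config.get("POD_SECURITY_STANDARD_DEACTIVATED", "false") == "true",
        "audit": config.get("POD_SECURITY_STANDARD_AUDIT", "restricted"),
        "warn": config.get("POD_SECURITY_STANDARD_WARN", "restricted"),
        "enforce": config.get("POD_SECURITY_STANDARD_ENFORCE", ""),
    }
    return {"name": "podSecurityStandard", "value": value}

psa_var = pod_security_variable({
    "POD_SECURITY_STANDARD_DEACTIVATED": "false",
    "POD_SECURITY_STANDARD_AUDIT": "privileged",
    "POD_SECURITY_STANDARD_WARN": "privileged",
    "POD_SECURITY_STANDARD_ENFORCE": "baseline",
})
```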
    

Cluster Autoscaler

When ENABLE_AUTOSCALER is true, the Tanzu CLI creates a Deployment object for Cluster Autoscaler and adds Cluster Autoscaler annotations to the Cluster object. For information about the variables in the Configuration File Variables column below, see Cluster Autoscaler in the Configuration File Variable Reference.

Configuration File Variables | Class-Based Cluster and Autoscaler Deployment Object Structure
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: 10m
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE: 10s
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE: 3m
AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: 10m
AUTOSCALER_MAX_NODE_PROVISION_TIME: 15m
AUTOSCALER_MAX_NODES_TOTAL: 0
    
In the Cluster Autoscaler Deployment object:
spec: 
  containers: 
  - args: 
  ...
  - --scale-down-delay-after-add=10m
  - --scale-down-delay-after-delete=10s
  - --scale-down-delay-after-failure=3m
  - --scale-down-unneeded-time=10m
  - --max-node-provision-time=15m
  - --max-nodes-total=0
AUTOSCALER_MAX_SIZE_0: 12
AUTOSCALER_MIN_SIZE_0: 8
AUTOSCALER_MAX_SIZE_1: 10
AUTOSCALER_MIN_SIZE_1: 6
AUTOSCALER_MAX_SIZE_2: 8
AUTOSCALER_MIN_SIZE_2: 4
    
If AUTOSCALER_MAX_SIZE_* and AUTOSCALER_MIN_SIZE_* are not set, the annotation settings take their values from WORKER_MACHINE_COUNT_*.
Under spec.topology.workers in the Cluster object:
machineDeployments: 
- class: tkg-worker
  metadata: 
    annotations: 
      cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "12"
      cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "8"
      ...
  name: md-0
- class: tkg-worker
  metadata: 
    annotations: 
      cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
      cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "6"
      ...
  name: md-1
...
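The fallback described above, where the min/max annotations default to WORKER_MACHINE_COUNT_* when the AUTOSCALER_*_SIZE_* variables are unset, can be sketched like this. The helper is illustrative only, not the Tanzu CLI's actual implementation:

```python
# Illustrative sketch of the autoscaler annotation fallback: when
# AUTOSCALER_MIN_SIZE_*/AUTOSCALER_MAX_SIZE_* are unset, the values fall
# back to WORKER_MACHINE_COUNT_*. NOT the Tanzu CLI's actual implementation.
def autoscaler_annotations(config: dict, idx: int) -> dict:
    fallback = config.get(f"WORKER_MACHINE_COUNT_{idx}")
    min_size = config.get(f"AUTOSCALER_MIN_SIZE_{idx}", fallback)
    max_size = config.get(f"AUTOSCALER_MAX_SIZE_{idx}", fallback)
    return {
        "cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size": str(min_size),
        "cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size": str(max_size),
    }

# Explicit sizes set for machine deployment 0:
ann = autoscaler_annotations({"AUTOSCALER_MIN_SIZE_0": 8, "AUTOSCALER_MAX_SIZE_0": 12}, 0)
# No sizes set for machine deployment 1; falls back to the worker count:
fallback_ann = autoscaler_annotations({"WORKER_MACHINE_COUNT_1": 5}, 1)
```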

Proxies and Private Image Registries

This section lists the variables for using proxies and private image registries, for example in internet-restricted deployments.

In the Class-Based Cluster Object Structure column, all name / value pair settings are in the Cluster object definition under spec.topology.variables.

Configuration File Variables | Class-Based Cluster Object Structure | Notes
TKG_CUSTOM_IMAGE_REPOSITORY: example.com/yourproject
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: true
- name: imageRepository
  value: 
    host: example.com/yourproject
    tlsCertificateValidation: 
      enabled: false
tlsCertificateValidation.enabled inverts the Boolean setting of TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY and is not written if its value is true.
TKG_HTTP_PROXY_ENABLED: true
TKG_HTTP_PROXY: http://proxy.example.com:80
TKG_HTTPS_PROXY: https://proxy.example.com:3128
TKG_NO_PROXY: .noproxy.example.com,noproxy.example.com,192.168.0.0/24
- name: proxy
  value: 
    httpProxy: http://proxy.example.com:80
    httpsProxy: https://proxy.example.com:3128
    noProxy: 
    - .noproxy.example.com
    - noproxy.example.com
    - 192.168.0.0/24
    - [...]
Internally, the Tanzu CLI appends values that are not set in the configuration file to the noProxy list value, as described in [Proxy Configuration](../../config-ref.md#proxies).
TKG_PROXY_CA_CERT: "LS0tLSBL[...]"
TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: "MIIEpgIH[...]"
- name: trust
  value: 
    - name: proxy
      data: LS0tLSBL[...]
    - name: imageRepository
      data: MIIEpgIH[...]
The values are base64-encoded CA certificates.
TKG_PROXY_CA_CERT takes precedence over TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE; if the trust proxy value is set, the trust imageRepository value is not written.
ADDITIONAL_IMAGE_REGISTRY_1: "example.com/otherregistry-1"
ADDITIONAL_IMAGE_REGISTRY_1_SKIP_TLS_VERIFY: false
ADDITIONAL_IMAGE_REGISTRY_1_CA_CERTIFICATE: "LS0tLSBL[...]"
ADDITIONAL_IMAGE_REGISTRY_2: "example.com/otherregistry-2"
ADDITIONAL_IMAGE_REGISTRY_2_SKIP_TLS_VERIFY: true
ADDITIONAL_IMAGE_REGISTRY_3: "example.com/otherregistry-3"
ADDITIONAL_IMAGE_REGISTRY_3_SKIP_TLS_VERIFY: false
ADDITIONAL_IMAGE_REGISTRY_3_CA_CERTIFICATE: "MIIEpgIH[...]"
- name: additionalImageRegistries
  value: 
  - caCert: LS0tLSBL[...]
    host: example.com/otherregistry-1
    skipTlsVerify: false
  - host: example.com/otherregistry-2
    skipTlsVerify: true
  - caCert: MIIEpgIH[...]
    host: example.com/otherregistry-3
    skipTlsVerify: false
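The noProxy handling noted in the table above, where the comma-separated TKG_NO_PROXY value is split into a list and the CLI appends values internally, can be sketched like this. The helper is illustrative only: the exact set of appended values is described in Proxy Configuration, and this is not the Tanzu CLI's actual implementation.

```python
# Illustrative sketch of noProxy list assembly: the comma-separated
# TKG_NO_PROXY value becomes a YAML list, and internally-added values
# (represented here by the cluster pod/service CIDRs) are appended if
# missing. NOT the Tanzu CLI's actual implementation.
def build_no_proxy(tkg_no_proxy: str, internal_defaults: list[str]) -> list[str]:
    entries = [e.strip() for e in tkg_no_proxy.split(",") if e.strip()]
    for default in internal_defaults:
        if default not in entries:
            entries.append(default)
    return entries

no_proxy = build_no_proxy(
    ".noproxy.example.com,noproxy.example.com,192.168.0.0/24",
    ["100.64.0.0/13", "100.96.0.0/11"],
)
```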

Common Variables

The following table lists the variables that are common to all target platforms.

For information about the variables in the Configuration File Variables column, see Variables for All Target Platforms in the Configuration File Variable Reference.

In the Class-Based Cluster Object Structure column, all name / value pair settings are in the Cluster object definition under spec.topology.variables.

Configuration File Variables | Class-Based Cluster Object Structure
CLUSTER_NAME: my-cluster
NAMESPACE: default
metadata: 
  name: my-cluster
  namespace: default
CLUSTER_PLAN: dev
metadata: 
  annotations: 
    tkg/plan: dev
INFRASTRUCTURE_PROVIDER: vsphere
topology: 
  class: tkg-vsphere-default-v1.0.0
    
CLUSTER_API_SERVER_PORT: 6443
- name: apiServerPort
  value: 6443
CNI: antrea
- name: cni
  value: antrea
ENABLE_AUDIT_LOGGING: true
- name: auditLogging
  value: 
    enabled: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
spec: 
  clusterNetwork: 
    pods: 
      cidrBlocks: 
      - 100.96.0.0/11
    services: 
      cidrBlocks: 
      - 100.64.0.0/13
CONTROLPLANE_CERTIFICATE_ROTATION_ENABLED: true
CONTROLPLANE_CERTIFICATE_ROTATION_DAYS_BEFORE: 65
- name: controlPlaneCertificateRotation
  value: 
    activate: true
    daysBefore: 65

Antrea CNI

This section lists the variables for configuring a cluster's Antrea container network interface (CNI). Class-based cluster variables are in the AntreaConfig object referenced under the ClusterBootstrap object's spec.cni.refName property.

Class-based clusters support many Antrea configuration options that have no corresponding TKG configuration file variable. For all of the Antrea configuration options that you can set for a class-based Cluster object, see Feature Gates and other topics in the Antrea documentation.

In the AntreaConfig Object Structure column, all settings are in the AntreaConfig object definition under spec.antrea.config.

Configuration File Variables | AntreaConfig Object Structure
ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false
ANTREA_ENABLE_USAGE_REPORTING: false
ANTREA_KUBE_APISERVER_OVERRIDE: "https://192.168.77.100:6443"
ANTREA_NO_SNAT: false
ANTREA_TRAFFIC_ENCAP_MODE: "encap"
ANTREA_TRANSPORT_INTERFACE: "eth0"
ANTREA_TRANSPORT_INTERFACE_CIDRS: "10.0.0.2/24"
disableUdpTunnelOffload: false
enableUsageReporting: false
kubeAPIServerOverride: https://192.168.77.100:6443
noSNAT: false
trafficEncapMode: encap
transportInterface: eth0
transportInterfaceCIDRs: 
- 10.0.0.2/24
ANTREA_EGRESS: true
ANTREA_IPAM: false
ANTREA_MULTICAST: false
ANTREA_NETWORKPOLICY_STATS: true
ANTREA_NODEPORTLOCAL: true
ANTREA_SERVICE_EXTERNALIP: false
ANTREA_POLICY: true
ANTREA_TRACEFLOW: true
featureGates: 
  Egress: true
  AntreaIPAM: false
  Multicast: false
  NetworkPolicyStats: true
  NodePortLocal: true
  ServiceExternalIP: false
  AntreaPolicy: true
  AntreaTraceflow: true
ANTREA_PROXY: false
ANTREA_PROXY_ALL: false
ANTREA_PROXY_LOAD_BALANCER_IPS: true
ANTREA_PROXY_NODEPORT_ADDRS: "100.70.70.12"
ANTREA_PROXY_SKIP_SERVICES: 10.11.1.2,kube-system/kube-dns
antreaProxy: 
  nodePortAddresses: 
  - 100.70.70.12
  proxyAll: false
  proxyLoadBalancerIPs: true
  skipServices: 
  - 10.11.1.2
  - kube-system/kube-dns
ANTREA_FLOWEXPORTER: false
ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT: "60s"
ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.svc:4739:tls"
ANTREA_FLOWEXPORTER_POLL_INTERVAL: "5s"
ANTREA_FLOWEXPORTER_IDLE_TIMEOUT: "15s"
flowExporter: 
  activeFlowTimeout: 60s
  collectorAddress: flow-aggregator.svc:4739:tls
  idleFlowTimeout: 15s
  pollInterval: 5s
ANTREA_NODEPORTLOCAL_ENABLED: true
ANTREA_NODEPORTLOCAL_PORTRANGE: 61000-62000
nodePortLocal: 
  enabled: true
  portRange: 61000-62000
ANTREA_EGRESS_EXCEPT_CIDRS: "10.0.0.0/6"
egress: 
  exceptCIDRs: 
  - 10.0.0.0/6
ANTREA_MULTICAST_INTERFACES: "eth0"
multicastInterfaces: 
- eth0
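The ANTREA_* variables that toggle feature gates map onto differently-named keys under featureGates, as shown in the table above. That renaming can be sketched as a lookup table; this is illustrative only, not the Tanzu CLI's actual implementation:

```python
# Illustrative mapping from ANTREA_* configuration file variables to
# AntreaConfig featureGates keys, following the table in this topic;
# NOT the Tanzu CLI's actual implementation.
FEATURE_GATE_KEYS = {
    "ANTREA_EGRESS": "Egress",
    "ANTREA_IPAM": "AntreaIPAM",
    "ANTREA_MULTICAST": "Multicast",
    "ANTREA_NETWORKPOLICY_STATS": "NetworkPolicyStats",
    "ANTREA_NODEPORTLOCAL": "NodePortLocal",
    "ANTREA_SERVICE_EXTERNALIP": "ServiceExternalIP",
    "ANTREA_POLICY": "AntreaPolicy",
    "ANTREA_TRACEFLOW": "AntreaTraceflow",
}

def feature_gates(config: dict) -> dict:
    # Keep only the gates whose variables are set; "true" becomes True.
    return {gate: config[var] == "true"
            for var, gate in FEATURE_GATE_KEYS.items() if var in config}

gates = feature_gates({"ANTREA_EGRESS": "true", "ANTREA_IPAM": "false"})
```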

vSphere

This section lists the variables for deploying workload clusters to vSphere. For information about the variables in the Configuration File Variables column below, see vSphere in the Configuration File Variable Reference. The settings listed in the Object Structure column are in objects of the listed kind.

Configuration File Variables | Object Structure
TKG_IP_FAMILY: ipv4
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.237.177.161
VSPHERE_REGION: my-region
VSPHERE_ZONE: my-zone
kind: VSphereCPIConfig
spec: 
  vsphereCPI: 
    ipFamily: ipv4
    vmNetwork: 
      excludeExternalSubnetCidr: 10.237.177.161/32
      excludeInternalSubnetCidr: 10.237.177.161/32
    zone: my-zone

kind: Cluster
metadata: 
  annotations: 
    tkg.tanzu.vmware.com/cluster-controlplane-endpoint: 10.237.177.161
spec: 
  topology: 
    variables: 
    - name: apiServerEndpoint
      value: 10.237.177.161
VSPHERE_MTU

kind: Cluster
spec: 
  topology: 
    variables: 
    - name: controlPlane
      value: 
        ...
        network: 
            mtu: 1500
    - name: worker
      value: 
        ...
        network: 
            mtu: 1500
VSPHERE_DATACENTER: /dc0
VSPHERE_DATASTORE: /dc0/datastore/sharedVmfs-0
VSPHERE_FOLDER: /dc0/vm/folder0
VSPHERE_NETWORK: /dc0/network/VM Network
VSPHERE_RESOURCE_POOL: /dc0/host/cluster0/Resources/rp0
VSPHERE_SERVER: 10.237.179.190
VSPHERE_STORAGE_POLICY_ID: my-local-sp
VSPHERE_TEMPLATE: /dc0/vm/ubuntu-2004-kube-v1.27.5+vmware.2-tkg.1
VSPHERE_TLS_THUMBPRINT: B7:15:(...):1D:2F
kind: ClusterBootstrap
spec: 
  additionalPackages: 
  - refName: tkg-storageclass*
    valuesFrom: 
      inline: 
        VSPHERE_STORAGE_POLICY_ID: my-local-sp

kind: Cluster
spec: 
  topology: 
    variables: 
    - name: vcenter
      value: 
        datacenter: /dc0
        datastore: /dc0/datastore/sharedVmfs-0
        folder: /dc0/vm/folder0
        network: /dc0/network/VM Network
        resourcePool: /dc0/host/cluster0/Resources/rp0
        server: 10.237.179.190
        storagePolicyID: my-local-sp
        template: /dc0/vm/ubuntu-2004-kube-v1.27.5+vmware.2-tkg.1
        tlsThumbprint: B7:15:(...):1D:2F
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa NzaC1yc2EA[...]==
kind: Cluster
spec: 
  topology: 
    variables: 
    - name: user
      value: 
        sshAuthorizedKeys: 
        - ssh-rsa NzaC1yc2EA[...]==
VSPHERE_CONTROL_PLANE_DISK_GIB: "30"
VSPHERE_CONTROL_PLANE_MEM_MIB: "2048"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_WORKER_DISK_GIB: "50"
VSPHERE_WORKER_MEM_MIB: "4096"
VSPHERE_WORKER_NUM_CPUS: "4"
kind: Cluster
spec: 
  topology: 
    variables: 
    - name: controlPlane
      value: 
        machine: 
          diskGiB: 40
          memoryMiB: 8192
          numCPUs: 2
    - name: worker
      value: 
        machine: 
          diskGiB: 40
          memoryMiB: 8192
          numCPUs: 2
NTP_SERVERS: time.google.com
kind: Cluster
spec: 
  topology: 
    variables: 
    - name: ntpServers
      value: 
      - time.google.com
VSPHERE_AZ_0: rack1
VSPHERE_AZ_1: rack2
VSPHERE_AZ_2: rack3
kind: Cluster
spec: 
  topology: 
    workers: 
      machineDeployments: 
      - class: tkg-worker
        failureDomain: rack1
        name: md-0
      - class: tkg-worker
        failureDomain: rack2
        name: md-1
      - class: tkg-worker
        failureDomain: rack3
        name: md-2
CONTROL_PLANE_NODE_NAMESERVERS: "10.10.10.10,10.10.10.11"
WORKER_NODE_NAMESERVERS: "10.10.10.10,10.10.10.11"
- name: controlPlane
  value: 
    network: 
      nameservers: 
      - 10.10.10.10
      - 10.10.10.11
- name: worker
  value: 
    network: 
      nameservers: 
      - 10.10.10.10
      - 10.10.10.11
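The nameserver conversion, where a comma-separated string becomes a list nested under each node pool's network block, can be sketched like this. The helper is illustrative only, not the Tanzu CLI's actual implementation:

```python
# Illustrative conversion of the comma-separated *_NODE_NAMESERVERS
# configuration file variables into the controlPlane and worker network
# variables; NOT the Tanzu CLI's actual implementation.
def nameserver_variables(config: dict) -> list[dict]:
    variables = []
    for key, name in [("CONTROL_PLANE_NODE_NAMESERVERS", "controlPlane"),
                      ("WORKER_NODE_NAMESERVERS", "worker")]:
        if key in config:
            servers = [s.strip() for s in config[key].split(",")]
            variables.append({"name": name,
                              "value": {"network": {"nameservers": servers}}})
    return variables

ns_vars = nameserver_variables({
    "CONTROL_PLANE_NODE_NAMESERVERS": "10.10.10.10,10.10.10.11",
    "WORKER_NODE_NAMESERVERS": "10.10.10.10,10.10.10.11",
})
```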
    

AWS

This section lists the variables for deploying workload clusters to AWS. For information about the variables in the Configuration File Variables column below, see AWS in the Configuration File Variable Reference.

In the Class-Based Cluster Object Structure column, all name / value pair settings are in the Cluster object definition under spec.topology.variables.

Configuration File Variables | Class-Based Cluster Object Structure
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
AWS_PROFILE
    
Not applicable
These settings are not stored in the Cluster object.
AWS_REGION: us-east-1
    
- name: region
  value: us-east-1
    
AWS_SSH_KEY_NAME: aws-tkg-clusteradmin
    
- name: sshKeyName
  value: aws-tkg-clusteradmin
    
AWS_LOAD_BALANCER_SCHEME_INTERNAL: false
    
- name: loadBalancerSchemeInternal
  value: false
BASTION_HOST_ENABLED: true
    
- name: bastion
  value: 
    enabled: true
    
AWS_NODE_AZ: us-east-1a
AWS_PRIVATE_NODE_CIDR: 10.0.0.0/24
AWS_PRIVATE_SUBNET_ID: subnet-0a7d376dde53c77ed
AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24
AWS_PUBLIC_SUBNET_ID: subnet-0794d50f57e9801b6
AWS_NODE_AZ_1: us-west-2b
AWS_PRIVATE_NODE_CIDR_1: 10.0.2.0/24
AWS_PRIVATE_SUBNET_ID_1: subnet-0c338780824d1c59d
AWS_PUBLIC_NODE_CIDR_1: 10.0.3.0/24
AWS_PUBLIC_SUBNET_ID_1: subnet-0addabd635d02ba97
AWS_NODE_AZ_2: ap-southeast-3
AWS_PRIVATE_NODE_CIDR_2: 10.0.4.0/24
AWS_PRIVATE_SUBNET_ID_2: subnet-00b9638e419a6187b
AWS_PUBLIC_NODE_CIDR_2: 10.0.5.0/24
AWS_PUBLIC_SUBNET_ID_2: subnet-0ed174ef16a2f43aa
- name: network
  value: 
    subnets: 
    - az: us-east-1a
      private: 
        cidr: 10.0.0.0/24
        id: subnet-0a7d376dde53c77ed
      public: 
        cidr: 10.0.1.0/24
        id: subnet-0794d50f57e9801b6
    - az: us-west-2b
      private: 
        cidr: 10.0.2.0/24
        id: subnet-0c338780824d1c59d
      public: 
        cidr: 10.0.3.0/24
        id: subnet-0addabd635d02ba97
    - az: ap-southeast-3
      private: 
        cidr: 10.0.4.0/24
        id: subnet-00b9638e419a6187b
      public: 
        cidr: 10.0.5.0/24
        id: subnet-0ed174ef16a2f43aa
    
AWS_VPC_CIDR: 10.0.0.0/16
    
- name: network
  value: 
    ...
    vpc: 
      cidr: 10.0.0.0/16
    
AWS_VPC_ID: vpc-0ce8bdfea218
    
- name: network
  value: 
    ...
    vpc: 
      existingID: vpc-0ce8bdfea218
    
AWS_SECURITY_GROUP_BASTION: sg-1
AWS_SECURITY_GROUP_APISERVER_LB: sg-2
AWS_SECURITY_GROUP_LB: sg-3
AWS_SECURITY_GROUP_CONTROLPLANE: sg-4
AWS_SECURITY_GROUP_NODE: sg-5
    
- name: network
  value: 
    ...
    securityGroupOverrides: 
      bastion: sg-1
      apiServerLB: sg-2
      lb: sg-3
      controlPlane: sg-4
      node: sg-5
    
AWS_IDENTITY_REF_NAME: my-aws-id
AWS_IDENTITY_REF_KIND: AWSClusterRoleIdentity
    
- name: identityRef
  value: 
    name: my-aws-id
    kind: AWSClusterRoleIdentity
    
NODE_MACHINE_TYPE: m5.large
AWS_NODE_OS_DISK_SIZE_GIB: 80
    
In multi-AZ deployments, you can also set NODE_MACHINE_TYPE_1 and NODE_MACHINE_TYPE_2.
Under spec.topology.workers in the Cluster object:
machineDeployments: 
- class: tkg-worker
  name: md-0
  value: 
    instanceType: m5.large
    rootVolume: 
      sizeGiB: 80
    
CONTROL_PLANE_MACHINE_TYPE: t3.large
AWS_CONTROL_PLANE_OS_DISK_SIZE_GIB: 80
    
- name: controlPlane
  value: 
    instanceType: t3.large
    rootVolume: 
      sizeGiB: 80
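The numbered AWS_NODE_AZ_*, AWS_PRIVATE_*, and AWS_PUBLIC_* variables shown earlier in this section collapse into the single network.subnets list. That grouping can be sketched like this; the helper is illustrative only, not the Tanzu CLI's actual implementation:

```python
# Illustrative assembly of the network.subnets list from the numbered
# AWS availability-zone and subnet variables; NOT the Tanzu CLI's
# actual implementation.
def aws_subnets(config: dict) -> list[dict]:
    subnets = []
    n = 0
    while True:
        # The first AZ's variables have no numeric suffix.
        suffix = "" if n == 0 else f"_{n}"
        az = config.get(f"AWS_NODE_AZ{suffix}")
        if az is None:
            break
        subnets.append({
            "az": az,
            "private": {"cidr": config.get(f"AWS_PRIVATE_NODE_CIDR{suffix}"),
                        "id": config.get(f"AWS_PRIVATE_SUBNET_ID{suffix}")},
            "public": {"cidr": config.get(f"AWS_PUBLIC_NODE_CIDR{suffix}"),
                       "id": config.get(f"AWS_PUBLIC_SUBNET_ID{suffix}")},
        })
        n += 1
    return subnets

subnets = aws_subnets({
    "AWS_NODE_AZ": "us-east-1a",
    "AWS_PRIVATE_NODE_CIDR": "10.0.0.0/24",
    "AWS_PUBLIC_NODE_CIDR": "10.0.1.0/24",
    "AWS_NODE_AZ_1": "us-west-2b",
    "AWS_PRIVATE_NODE_CIDR_1": "10.0.2.0/24",
})
```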
    

Microsoft Azure

This section lists the variables for deploying workload clusters to Microsoft Azure. For information about the variables in the Configuration File Variables column below, see Microsoft Azure in the Configuration File Variable Reference.

In the Class-Based Cluster Object Structure column, all name / value pair settings are in the Cluster object definition under spec.topology.variables.

Configuration File Variables | Class-Based Cluster Object Structure
AZURE_CLIENT_ID
AZURE_CLIENT_SECRET
AZURE_TENANT_ID
Not applicable
These settings are not exposed in the Cluster object.
AZURE_LOCATION: eastus2
- name: location
  value: eastus2
AZURE_RESOURCE_GROUP: my-azure-rg
- name: resourceGroup
  value: my-azure-rg
AZURE_SUBSCRIPTION_ID: c789uce3-aaaa-bbbb-cccc-a51b6b0gb405
- name: subscriptionID
  value: c789uce3-aaaa-bbbb-cccc-a51b6b0gb405
AZURE_ENVIRONMENT: AzurePublicCloud
- name: environment
  value: AzurePublicCloud
AZURE_SSH_PUBLIC_KEY_B64: "c3NoLXJzYSBB [...] vdGFsLmlv"
- name: sshPublicKey
  value: c3NoLXJzYSBB [...] vdGFsLmlv
		
AZURE_FRONTEND_PRIVATE_IP: 10.0.0.100
Set if AZURE_ENABLE_PRIVATE_CLUSTER is true.
- name: frontendPrivateIP
  value: 10.0.0.100
    
AZURE_CUSTOM_TAGS: "foo=bar, plan=prod"
- name: customTags
  value: "foo=bar, plan=prod"
    
AZURE_ENABLE_ACCELERATED_NETWORKING: true
- name: acceleratedNetworking
  value: 
    enabled: true
    
AZURE_ENABLE_PRIVATE_CLUSTER: false
- name: privateCluster
  value: 
    enabled: false
    
AZURE_VNET_CIDR: 10.0.0.0/16
AZURE_VNET_NAME: my-azure-vnet
AZURE_VNET_RESOURCE_GROUP: my-azure-vnet-rg
- name: network
  value: 
    vnet: 
      cidrBlocks: 
      - 10.0.0.0/16
      name: my-azure-vnet
      resourceGroup: my-azure-vnet-rg
AZURE_IDENTITY_NAME: my-azure-id
AZURE_IDENTITY_NAMESPACE: default
    
- name: identityRef
  value: 
    name: my-azure-id
    namespace: default  
AZURE_CONTROL_PLANE_DATA_DISK_SIZE_GIB: 256
AZURE_CONTROL_PLANE_OS_DISK_SIZE_GIB: 128
AZURE_CONTROL_PLANE_OS_DISK_STORAGE_ACCOUNT_TYPE: Premium_LRS
AZURE_ENABLE_CONTROL_PLANE_OUTBOUND_LB: true
AZURE_CONTROL_PLANE_OUTBOUND_LB_FRONTEND_IP_COUNT: 1
AZURE_CONTROL_PLANE_SUBNET_CIDR: 10.0.0.0/24
AZURE_CONTROL_PLANE_SUBNET_NAME: my-azure-cp-subnet
AZURE_CONTROL_PLANE_MACHINE_TYPE: Standard_D2s_v3
    
- name: controlPlane
  value: 
    dataDisks: 
    - sizeGiB: 256
    osDisk: 
      sizeGiB: 128
      storageAccountType: Premium_LRS
    outboundLB: 
      enabled: true
      frontendIPCount: 1
    subnet: 
      cidr: 10.0.0.0/24
      name: my-azure-cp-subnet
    vmSize: Standard_D2s_v3
    
AZURE_ENABLE_NODE_DATA_DISK: true
AZURE_NODE_DATA_DISK_SIZE_GIB: 256
AZURE_NODE_OS_DISK_SIZE_GIB: 128
AZURE_NODE_OS_DISK_STORAGE_ACCOUNT_TYPE: Premium_LRS
AZURE_ENABLE_NODE_OUTBOUND_LB: true
AZURE_NODE_OUTBOUND_LB_FRONTEND_IP_COUNT: 1
AZURE_NODE_OUTBOUND_LB_IDLE_TIMEOUT_IN_MINUTES: 4
AZURE_NODE_SUBNET_CIDR: 10.0.1.0/24
AZURE_NODE_SUBNET_NAME: my-azure-worker-subnet
AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3
    
- name: worker
  value: 
    dataDisks: 
    - sizeGiB: 256
    osDisk: 
      sizeGiB: 128
      storageAccountType: Premium_LRS
    outboundLB: 
      enabled: true
      frontendIPCount: 1
      idleTimeoutInMinutes: 4
    subnet: 
      cidr: 10.0.1.0/24
      name: my-azure-worker-subnet
    vmSize: Standard_D2s_v3
    

NSX Advanced Load Balancer

This section lists the variables for configuring NSX Advanced Load Balancer (ALB) in TKG.

Configuration File Variables | Class-Based Cluster Object Structure
AVI_CONTROL_PLANE_HA_PROVIDER
topology: 
  variables: 
    - name: aviAPIServerHAProvider
      value: true
    

Example Cluster Object and Its Helper Objects

When you pass a cluster configuration file to the --file flag of tanzu cluster create, the command converts the cluster configuration file into a cluster spec file. For an example of the Cluster object and helper objects that tanzu cluster create generates, see below:

CLUSTER_NAME: example-cluster
CLUSTER_PLAN: dev
NAMESPACE: default
CNI: antrea
VSPHERE_NETWORK: /dc0/network/VM Network
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAACCCza2EBBB[...]ADGQAg/POl6vyWOmQ==
VSPHERE_USERNAME: [email protected]
VSPHERE_PASSWORD: 1234567AbC!
VSPHERE_SERVER: 10.XXX.XXX.71
VSPHERE_DATACENTER: /dc0
VSPHERE_RESOURCE_POOL: /dc0/host/cluster0/Resources/example-tkg
VSPHERE_DATASTORE: /dc0/datastore/vsanDatastore
VSPHERE_FOLDER: /dc0/vm/example-tkg
VSPHERE_TLS_THUMBPRINT: ""
VSPHERE_INSECURE: true
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.XXX.XXX.75
AVI_CONTROL_PLANE_HA_PROVIDER: false
ENABLE_AUDIT_LOGGING: false
ENABLE_DEFAULT_STORAGE_CLASS: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
ENABLE_AUTOSCALER: false

The generated Cluster object and its helper objects:

apiVersion: cpi.tanzu.vmware.com/v1alpha1
kind: VSphereCPIConfig
metadata:
  name: example-cluster
  namespace: default
spec:
  vsphereCPI:
    mode: vsphereCPI
    tlsCipherSuites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    vmNetwork:
      excludeExternalSubnetCidr: 10.XXX.XXX.75/32
      excludeInternalSubnetCidr: 10.XXX.XXX.75/32
---
apiVersion: csi.tanzu.vmware.com/v1alpha1
kind: VSphereCSIConfig
metadata:
  name: example-cluster
  namespace: default
spec:
  vsphereCSI:
    config:
      datacenter: /dc0
      httpProxy: ""
      httpsProxy: ""
      insecureFlag: true
      noProxy: ""
      region: null
      tlsThumbprint: ""
      useTopologyCategories: false
      zone: null
    mode: vsphereCSI
---
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: ClusterBootstrap
metadata:
  annotations:
    tkg.tanzu.vmware.com/add-missing-fields-from-tkr: TKR-NAME
  name: example-cluster
  namespace: default
spec:
  additionalPackages:
  - refName: metrics-server*
  - refName: secretgen-controller*
  - refName: pinniped*
  cpi:
    refName: vsphere-cpi*
    valuesFrom:
      providerRef:
        apiGroup: cpi.tanzu.vmware.com
        kind: VSphereCPIConfig
        name: example-cluster
  csi:
    refName: vsphere-csi*
    valuesFrom:
      providerRef:
        apiGroup: csi.tanzu.vmware.com
        kind: VSphereCSIConfig
        name: example-cluster
  kapp:
    refName: kapp-controller*
---
apiVersion: v1
kind: Secret
metadata:
  name: example-cluster
  namespace: default
stringData:
  password: 1234567AbC!
  username: [email protected]
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  annotations:
    osInfo: ubuntu,20.04,amd64
    tkg.tanzu.vmware.com/cluster-controlplane-endpoint: 10.XXX.XXX.75
    tkg/plan: dev
  labels:
    tkg.tanzu.vmware.com/cluster-name: example-cluster
  name: example-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 100.96.0.0/11
    services:
      cidrBlocks:
      - 100.64.0.0/13
  topology:
    class: tkg-vsphere-default-CLUSTER-CLASS-VERSION
    controlPlane:
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=ubuntu
      replicas: 1
    variables:
    - name: cni
      value: antrea
    - name: controlPlaneCertificateRotation
      value:
        activate: true
        daysBefore: 90
    - name: auditLogging
      value:
        enabled: false
    - name: podSecurityStandard
      value:
        audit: restricted
        deactivated: false
        warn: restricted
    - name: apiServerEndpoint
      value: 10.XXX.XXX.75
    - name: aviAPIServerHAProvider
      value: false
    - name: vcenter
      value:
        cloneMode: fullClone
        datacenter: /dc0
        datastore: /dc0/datastore/vsanDatastore
        folder: /dc0/vm/example-tkg
        network: /dc0/network/VM Network
        resourcePool: /dc0/host/cluster0/Resources/example-tkg
        server: 10.XXX.XXX.71
        storagePolicyID: ""
        tlsThumbprint: ""
    - name: user
      value:
        sshAuthorizedKeys:
        - ssh-rsa AAAACCCza2EBBB[...]ADGQAg/POl6vyWOmQ==
    - name: controlPlane
      value:
        machine:
          diskGiB: 40
          memoryMiB: 8192
          numCPUs: 2
    - name: worker
      value:
        machine:
          diskGiB: 40
          memoryMiB: 4096
          numCPUs: 2
    - name: security
      value:
        fileIntegrityMonitoring:
          enabled: false
        imagePolicy:
          pullAlways: false
          webhook:
            enabled: false
            spec:
              allowTTL: 50
              defaultAllow: true
              denyTTL: 60
              retryBackoff: 500
        kubeletOptions:
          eventQPS: 50
          streamConnectionIdleTimeout: 4h0m0s
        systemCryptoPolicy: default
    version: KUBERNETES-VERSION
    workers:
      machineDeployments:
      - class: tkg-worker
        metadata:
          annotations:
            run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=ubuntu
        name: md-0
        replicas: 1
        strategy:
          type: RollingUpdate

The example object spec file above contains the following placeholder text:

  • TKR-NAME in tkg.tanzu.vmware.com/add-missing-fields-from-tkr: TKR-NAME: set by tanzu cluster create to a compatible Tanzu Kubernetes release (TKr), depending on your configuration. For example, tkg.tanzu.vmware.com/add-missing-fields-from-tkr: v1.27.5---vmware.1-tkg.1. v1.27.5---vmware.1-tkg.1 is the default TKr for this version of Tanzu Kubernetes Grid.
  • KUBERNETES-VERSION in version: KUBERNETES-VERSION: set by tanzu cluster create to a compatible Kubernetes version, depending on your configuration. For example, version: v1.27.5+vmware.1-tkg.1. v1.27.5+vmware.1-tkg.1 is the default Kubernetes version for this version of Tanzu Kubernetes Grid.
  • CLUSTER-CLASS-VERSION in class: tkg-vsphere-default-CLUSTER-CLASS-VERSION: the version of the default class. For example, class: tkg-vsphere-default-v1.1.1.