This topic explains how variables in a workload cluster configuration file translate into variable settings in a class-based Cluster object, its ClusterBootstrap, and other subordinate objects in Tanzu Kubernetes Grid (TKG). For an example Cluster object and its subordinate objects, see Example Cluster Object and Its Subordinate Objects below.
Configuration file variables and class-based cluster variable settings differ as follows:

- Configuration file variables are set as top-level key-value pairs, for example:

  TKG_CUSTOM_IMAGE_REPOSITORY: stg-project.vmware.com

- Class-based cluster variable settings are nested YAML settings within the Cluster or ClusterBootstrap object specifications:

  - Cluster object settings include:

    - The ClusterClass object that defines the cluster, set as its spec.topology.class value.
    - Other settings under spec.topology, configured in its controlPlane, workers, or variables blocks. Example setting:

      variables:
      - name: imageRepository
        value:
          host: stg-project.vmware.com

  - ClusterBootstrap settings are one-time settings for the container network and other low-level infrastructure that cannot be changed in an existing cluster.

The Tanzu CLI performs this conversion when you run the tanzu cluster create command with a cluster configuration file, as described in Create a Class-Based Cluster, and you can preview the result as sketched below.
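To see exactly how the CLI translates a given configuration file, you can generate the class-based object spec without creating the cluster, then create the cluster from the reviewed spec. A minimal sketch, assuming a configuration file named my-cluster-config.yaml and a Tanzu CLI version that supports the --dry-run flag for tanzu cluster create:

# Preview the class-based object spec that the CLI would generate
tanzu cluster create my-cluster --file my-cluster-config.yaml --dry-run > my-cluster-spec.yaml

# After reviewing my-cluster-spec.yaml, create the cluster from the spec file
tanzu cluster create --file my-cluster-spec.yaml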
The variables listed in the table below configure the control plane and worker nodes and the operating system that the node instances run. For information about the variables in the Configuration file variable column, see Node Configuration in the Configuration File Variable Reference.
In the Class-based cluster object structure column, all name/value pairs are set under spec.topology.variables in the Cluster object definition. The example code in this column applies to Cluster objects on vSphere; the object structure on AWS and Azure may differ.
Configuration file variable | Class-based cluster object structure |
---|---|
OS_NAME: ubuntu OS_VERSION: 20.04 OS_ARCH: amd64 |
metadata: annotations: osInfo: ubuntu,20.04,amd64 |
CONTROL_PLANE_MACHINE_COUNT: 3 |
controlPlane: … replicas: 3 |
CONTROLPLANE_SIZE: large SIZE: large Values convert to infrastructure-specific machine settings. SIZE applies to both control plane and worker nodes. |
- name: controlPlane value: machine: diskGiB: 40 memoryMiB: 16384 numCPUs: 4 |
CONTROL_PLANE_NODE_SEARCH_DOMAINS: corp.local, example.com |
- name: controlPlane value: … network: … searchDomains: - corp.local - example.com |
CONTROL_PLANE_NODE_LABELS: 'key1=value1,key2=value2' |
- name: controlPlane value: … nodeLabels: - key: key1 value: value1 - key: key2 value: value2 |
WORKER_MACHINE_COUNT: 6 |
- name: worker value: count: 6 |
WORKER_SIZE: extra-large SIZE: extra-large Values convert to infrastructure-specific machine settings. SIZE applies to both control plane and worker nodes. |
- name: worker value: … machine: diskGiB: 80 memoryMiB: 32768 numCPUs: 8 |
WORKER_NODE_SEARCH_DOMAINS: corp.local, example.com |
- name: worker value: … network: searchDomains: - corp.local - example.com |
CUSTOM_TDNF_REPOSITORY_CERTIFICATE: "YPdeNjLW[…]" |
- name: customTDNFRepository value: certificate: YPdeNjLW[…] |
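Taken together, the node-configuration rows above produce one variables list plus the controlPlane replica count in the same Cluster spec. A condensed sketch for vSphere that assembles values from the table rows (illustrative only, not a complete spec):

spec:
  topology:
    controlPlane:
      replicas: 3
    variables:
    - name: controlPlane
      value:
        machine:
          diskGiB: 40
          memoryMiB: 16384
          numCPUs: 4
        nodeLabels:
        - key: key1
          value: value1
    - name: worker
      value:
        count: 6
        machine:
          diskGiB: 80
          memoryMiB: 32768
          numCPUs: 8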
When ENABLE_AUTOSCALER is true, the Tanzu CLI creates a Deployment object for Cluster Autoscaler and adds Cluster Autoscaler annotations to the Cluster object. For information about the variables in the Configuration file variable column below, see Cluster Autoscaler in the Configuration File Variable Reference.
Configuration file variable | Class-based cluster and Autoscaler Deployment object structures |
---|---|
AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: 10m AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE: 10s AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE: 3m AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: 10m AUTOSCALER_MAX_NODE_PROVISION_TIME: 15m AUTOSCALER_MAX_NODES_TOTAL: 0 |
In the Cluster Autoscaler Deployment object: spec: containers: - args: … - --scale-down-delay-after-add=10m - --scale-down-delay-after-delete=10s - --scale-down-delay-after-failure=3m - --scale-down-unneeded-time=10m - --max-node-provision-time=15m - --max-nodes-total=0 |
AUTOSCALER_MAX_SIZE_0: 12 AUTOSCALER_MIN_SIZE_0: 8 AUTOSCALER_MAX_SIZE_1: 10 AUTOSCALER_MIN_SIZE_1: 6 AUTOSCALER_MAX_SIZE_2: 8 AUTOSCALER_MIN_SIZE_2: 4 If AUTOSCALER_MAX_SIZE_* or AUTOSCALER_MIN_SIZE_* is not set, then […] |
In the Cluster object, under spec.topology.workers: machineDeployments: - class: tkg-worker metadata: annotations: cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "12" cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "8" … name: md-0 - class: tkg-worker metadata: annotations: cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10" cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "6" … name: md-1 … |
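After cluster creation, you can confirm that the Autoscaler annotations were written to the Cluster object. A sketch, assuming a cluster named my-cluster in the default namespace and kubectl pointed at the management cluster:

# Print the autoscaler min/max annotations on the first machine deployment
kubectl get cluster my-cluster -n default \
  -o jsonpath='{.spec.topology.workers.machineDeployments[0].metadata.annotations}'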
This section lists the variables for using a proxy and a private image registry, for example in internet-restricted deployments. In the Class-based cluster object structure column, all name/value pairs are set under spec.topology.variables in the Cluster object definition.
Configuration file variable | Class-based cluster object structure | Notes |
---|---|---|
TKG_CUSTOM_IMAGE_REPOSITORY: example.com/yourproject TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: true |
- name: imageRepository value: host: example.com/yourproject tlsCertificateValidation: enabled: false |
tlsCertificateValidation.enabled inverts the Boolean setting of TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY; if its value is true, the setting is not written. |
TKG_HTTP_PROXY_ENABLED: true TKG_HTTP_PROXY: http://proxy.example.com:80 TKG_HTTPS_PROXY: https://proxy.example.com:3128 TKG_NO_PROXY: .noproxy.example.com,noproxy.example.com,192.168.0.0/24 |
- name: proxy value: httpProxy: http://proxy.example.com:80 httpsProxy: https://proxy.example.com:3128 noProxy: - .noproxy.example.com - noproxy.example.com - 192.168.0.0/24 - […] |
Internally, the Tanzu CLI appends values that are not set in the configuration file to the noProxy list, as described in [Proxy Configuration]. |
TKG_PROXY_CA_CERT: "LS0tLSBL[…]" TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: "MIIEpgIH[…]" |
- name: trust value: - name: proxy data: LS0tLSBL[…] - name: imageRepository data: MIIEpgIH[…] |
The values are base64-encoded CA certificates. TKG_PROXY_CA_CERT takes precedence over TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE; if the trust imageRepository value is set, the trust proxy value is not written. |
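Combined, the proxy and registry rows above nest under one variables list. A condensed sketch for an internet-restricted cluster, using the values from the table (certificate data truncated); per the notes above, only one trust entry is written:

spec:
  topology:
    variables:
    - name: imageRepository
      value:
        host: example.com/yourproject
        tlsCertificateValidation:
          enabled: false
    - name: proxy
      value:
        httpProxy: http://proxy.example.com:80
        httpsProxy: https://proxy.example.com:3128
        noProxy:
        - .noproxy.example.com
        - 192.168.0.0/24
    - name: trust
      value:
      - name: imageRepository
        data: MIIEpgIH[…]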
The table below lists the variables common to all target platforms. For information about the variables in the Configuration file variable column, see Common Variables for All Target Platforms in the Configuration File Variable Reference. In the Class-based cluster object structure column, all name/value pairs are set under spec.topology.variables in the Cluster object definition.
Configuration file variable | Class-based cluster object structure |
---|---|
CLUSTER_NAME: my-cluster NAMESPACE: default |
metadata: name: my-cluster namespace: default |
CLUSTER_PLAN: dev |
metadata: annotations: tkg/plan: dev |
INFRASTRUCTURE_PROVIDER: vsphere |
topology: class: tkg-vsphere-default-v1.0.0 |
CLUSTER_API_SERVER_PORT: 6443 |
- name: apiServerPort value: 6443 |
CNI: antrea |
- name: cni value: antrea |
ENABLE_AUDIT_LOGGING: true |
- name: auditLogging value: enabled: true |
CLUSTER_CIDR: 100.96.0.0/11 SERVICE_CIDR: 100.64.0.0/13 |
spec: clusterNetwork: pods: cidrBlocks: - 100.96.0.0/11 services: cidrBlocks: - 100.64.0.0/13 |
CONTROLPLANE_CERTIFICATE_ROTATION_ENABLED: true CONTROLPLANE_CERTIFICATE_ROTATION_DAYS_BEFORE: 65 |
- name: controlPlaneCertificateRotation value: activate: true daysBefore: 65 |
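Note that the common-variable rows above land in different parts of the same Cluster object: the name, namespace, and plan annotation under metadata, the CIDRs under spec.clusterNetwork, and the rest under spec.topology. A condensed sketch assembling values from the table:

metadata:
  name: my-cluster
  namespace: default
  annotations:
    tkg/plan: dev
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 100.96.0.0/11
    services:
      cidrBlocks:
      - 100.64.0.0/13
  topology:
    class: tkg-vsphere-default-v1.0.0
    variables:
    - name: cni
      value: antrea
    - name: apiServerPort
      value: 6443
    - name: auditLogging
      value:
        enabled: true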
This section lists the variables for configuring the cluster's Antrea container network interface (CNI). Class-based cluster variables reside in an AntreaConfig object, which the ClusterBootstrap object references under its spec.cni.refName property.
Class-based clusters support many Antrea configuration options that have no corresponding TKG configuration file variable. For all Antrea configuration options that you can set for class-based Cluster objects, see Feature Gates and other topics in the Antrea documentation.
In the AntreaConfig object structure column, all settings are set under spec.antrea.config in the AntreaConfig object definition.
Configuration file variable | AntreaConfig object structure |
---|---|
ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false ANTREA_ENABLE_USAGE_REPORTING: false ANTREA_KUBE_APISERVER_OVERRIDE: "https://192.168.77.100:6443" ANTREA_NO_SNAT: false ANTREA_TRAFFIC_ENCAP_MODE: "encap" ANTREA_TRANSPORT_INTERFACE: "eth0" ANTREA_TRANSPORT_INTERFACE_CIDRS: "10.0.0.2/24" |
disableUdpTunnelOffload: false enableUsageReporting: false kubeAPIServerOverride: https://192.168.77.100:6443 noSNAT: false trafficEncapMode: encap transportInterface: eth0 transportInterfaceCIDRs: - 10.0.0.2/24 |
ANTREA_EGRESS: true ANTREA_IPAM: false ANTREA_MULTICAST: false ANTREA_NETWORKPOLICY_STATS: true ANTREA_NODEPORTLOCAL: true ANTREA_SERVICE_EXTERNALIP: false ANTREA_POLICY: true ANTREA_TRACEFLOW: true |
featureGates: Egress: true AntreaIPAM: false Multicast: false NetworkPolicyStats: true NodePortLocal: true ServiceExternalIP: false AntreaPolicy: true AntreaTraceflow: true |
ANTREA_PROXY: false ANTREA_PROXY_ALL: false ANTREA_PROXY_LOAD_BALANCER_IPS: true ANTREA_PROXY_NODEPORT_ADDRS: "100.70.70.12" ANTREA_PROXY_SKIP_SERVICES: 10.11.1.2,kube-system/kube-dns |
antreaProxy: nodePortAddresses: - 100.70.70.12 proxyAll: false proxyLoadBalancerIPs: true skipServices: - 10.11.1.2 - kube-system/kube-dns |
ANTREA_FLOWEXPORTER: false ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT: "60s" ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.svc:4739:tls" ANTREA_FLOWEXPORTER_POLL_INTERVAL: "5s" ANTREA_FLOWEXPORTER_IDLE_TIMEOUT: "15s" |
flowExporter: activeFlowTimeout: 60s collectorAddress: flow-aggregator.svc:4739:tls idleFlowTimeout: 15s pollInterval: 5s |
ANTREA_NODEPORTLOCAL_ENABLED: true ANTREA_NODEPORTLOCAL_PORTRANGE: 61000-62000 |
nodePortLocal: enabled: true portRange: 61000-62000 |
ANTREA_EGRESS_EXCEPT_CIDRS: "10.0.0.0/6" |
egress: exceptCIDRs: - 10.0.0.0/6 |
ANTREA_MULTICAST_INTERFACES: "eth0" |
multicastInterfaces: - eth0 |
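Assembled, these settings live in a standalone AntreaConfig object that the ClusterBootstrap references as described above. A minimal sketch combining a few rows from the table; the apiVersion and object name shown here are assumptions for illustration, not values guaranteed by this topic:

apiVersion: cni.tanzu.vmware.com/v1alpha1   # assumed API group for AntreaConfig
kind: AntreaConfig
metadata:
  name: my-cluster   # hypothetical name; the ClusterBootstrap references this object
  namespace: default
spec:
  antrea:
    config:
      trafficEncapMode: encap
      noSNAT: false
      featureGates:
        Egress: true
        NodePortLocal: true
        AntreaPolicy: true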
This section lists the variables for deploying workload clusters to vSphere. For information about the variables in the Configuration file variable column below, see vSphere in the Configuration File Variable Reference. The settings listed in the Object structure column reside in an object of the kind listed.
Configuration file variable | Object structure |
---|---|
TKG_IP_FAMILY: ipv4 VSPHERE_CONTROL_PLANE_ENDPOINT: 10.237.177.161 VSPHERE_REGION: my-region VSPHERE_ZONE: my-zone |
kind: VSphereCPIConfig spec: vsphereCPI: ipFamily: ipv4 vmNetwork: excludeExternalSubnetCidr: 10.237.177.161/32 excludeInternalSubnetCidr: 10.237.177.161/32 zone: my-zone |
VSPHERE_DATACENTER: /dc0 VSPHERE_DATASTORE: /dc0/datastore/sharedVmfs-0 VSPHERE_FOLDER: /dc0/vm/folder0 VSPHERE_NETWORK: /dc0/network/VM Network VSPHERE_RESOURCE_POOL: /dc0/host/cluster0/Resources/rp0 VSPHERE_SERVER: 10.237.179 VSPHERE_STORAGE_POLICY_ID: my-local-sp VSPHERE_TEMPLATE: /dc0/vm/ubuntu-2004-kube-v1.24.10+vmware.1-tkg.1 VSPHERE_TLS_THUMBPRINT: B7:15:2D:D8:E5:FC:BE:C6:85:7A:03:98:0E:43:7E:59:A9:CF:1D:2F |
kind: ClusterBootstrap spec: additionalPackages: - refName: tkg-storageclass* valuesFrom: inline: VSPHERE_STORAGE_POLICY_ID: my-local-sp |
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa NzaC1yc2EA[…]== |
kind: Cluster spec: topology: variables: - name: user value: sshAuthorizedKeys: - ssh-rsa NzaC1yc2EA[…]== |
VSPHERE_CONTROL_PLANE_DISK_GIB: "30" VSPHERE_CONTROL_PLANE_MEM_MIB: "2048" VSPHERE_CONTROL_PLANE_NUM_CPUS: "2" VSPHERE_WORKER_DISK_GIB: "50" VSPHERE_WORKER_MEM_MIB: "4096" VSPHERE_WORKER_NUM_CPUS: "4" |
kind: Cluster spec: topology: variables: - name: controlPlane value: machine: diskGiB: 40 memoryMiB: 8192 numCPUs: 2 - name: worker value: count: 1 machine: diskGiB: 40 memoryMiB: 8192 numCPUs: 2 |
NTP_SERVERS: time.google.com |
kind: Cluster spec: topology: variables: - name: ntpServers value: - time.google.com |
VSPHERE_AZ_0: rack1 VSPHERE_AZ_1: rack2 VSPHERE_AZ_2: rack3 |
kind: Cluster spec: topology: workers: machineDeployments: - class: tkg-worker failureDomain: rack1 name: md-0 - class: tkg-worker failureDomain: rack2 name: md-1 - class: tkg-worker failureDomain: rack3 name: md-2 |
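Several of the vSphere rows above can appear together in one Cluster spec. A condensed sketch combining the SSH key, NTP, and availability-zone rows from the table:

kind: Cluster
spec:
  topology:
    variables:
    - name: user
      value:
        sshAuthorizedKeys:
        - ssh-rsa NzaC1yc2EA[…]==
    - name: ntpServers
      value:
      - time.google.com
    workers:
      machineDeployments:
      - class: tkg-worker
        failureDomain: rack1
        name: md-0
      - class: tkg-worker
        failureDomain: rack2
        name: md-1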
This section lists the variables for deploying workload clusters to AWS. For information about the variables in the Configuration file variable column below, see AWS in the Configuration File Variable Reference. In the Class-based cluster object structure column, all name/value pairs are set under spec.topology.variables in the Cluster object definition.
Configuration file variable | Class-based cluster object structure |
---|---|
AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_PROFILE |
N/A These settings are not stored in the Cluster object. |
AWS_REGION: us-east-1 |
- name: region value: us-east-1 |
AWS_SSH_KEY_NAME: aws-tkg-clusteradmin |
- name: sshKeyName value: aws-tkg-clusteradmin |
AWS_LOAD_BALANCER_SCHEME_INTERNAL: false |
- name: loadBalancerSchemeInternal value: false |
BASTION_HOST_ENABLED: true |
- name: bastion value: enabled: true |
AWS_NODE_AZ: us-east-1a AWS_PRIVATE_NODE_CIDR: 10.0.0.0/24 AWS_PRIVATE_SUBNET_ID: subnet-0a7d376dde53c77ed AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24 AWS_PUBLIC_SUBNET_ID: subnet-0794d50f57e9801b6 AWS_NODE_AZ_1: us-west-2b AWS_PRIVATE_NODE_CIDR_1: 10.0.2.0/24 AWS_PRIVATE_SUBNET_ID_1: subnet-0c338780824d1c59d AWS_PUBLIC_NODE_CIDR_1: 10.0.3.0/24 AWS_PUBLIC_SUBNET_ID_1: subnet-0addabd635d02ba97 AWS_NODE_AZ_2: ap-southeast-3 AWS_PRIVATE_NODE_CIDR_2: 10.0.4.0/24 AWS_PRIVATE_SUBNET_ID_2: subnet-00b9638e419a6187b AWS_PUBLIC_NODE_CIDR_2: 10.0.5.0/24 AWS_PUBLIC_SUBNET_ID_2: subnet-0ed174ef16a2f43aa |
- name: network value: subnets: - az: us-east-1a private: cidr: 10.0.0.0/24 id: subnet-0a7d376dde53c77ed public: cidr: 10.0.1.0/24 id: subnet-0794d50f57e9801b6 - az: us-west-2b private: cidr: 10.0.2.0/24 id: subnet-0c338780824d1c59d public: cidr: 10.0.3.0/24 id: subnet-0addabd635d02ba97 - az: ap-southeast-3 private: cidr: 10.0.4.0/24 id: subnet-00b9638e419a6187b public: cidr: 10.0.5.0/24 id: subnet-0ed174ef16a2f43aa |
AWS_VPC_CIDR: 10.0.0.0/16 |
- name: network value: … vpc: cidr: 10.0.0.0/16 |
AWS_VPC_ID: vpc-0ce8bdfea218 |
- name: network value: … vpc: existingID: vpc-0ce8bdfea218 |
AWS_SECURITY_GROUP_BASTION: sg-1 AWS_SECURITY_GROUP_APISERVER_LB: sg-2 AWS_SECURITY_GROUP_LB: sg-3 AWS_SECURITY_GROUP_CONTROLPLANE: sg-4 AWS_SECURITY_GROUP_NODE: sg-5 |
- name: network value: … securityGroupOverrides: bastion: sg-1 apiServerLB: sg-2 lb: sg-3 controlPlane: sg-4 node: sg-5 |
AWS_IDENTITY_REF_NAME: my-aws-id AWS_IDENTITY_REF_KIND: AWSClusterRoleIdentity |
- name: identityRef value: name: my-aws-id kind: AWSClusterRoleIdentity |
NODE_MACHINE_TYPE: m5.large AWS_NODE_OS_DISK_SIZE_GIB: 80 In multi-AZ deployments, you can also set NODE_MACHINE_TYPE_1 and NODE_MACHINE_TYPE_2. |
In the Cluster object, under spec.topology.workers: machineDeployments: - class: tkg-worker name: md-0 value: instanceType: m5.large rootVolume: sizeGiB: 80 |
CONTROL_PLANE_MACHINE_TYPE: t3.large AWS_CONTROL_PLANE_OS_DISK_SIZE_GIB: 80 |
- name: controlPlane value: instanceType: t3.large rootVolume: sizeGiB: 80 |
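The AWS networking rows above all nest under a single network variable. A condensed sketch showing the VPC, one subnet group, and security-group overrides combined, with values from the table:

spec:
  topology:
    variables:
    - name: region
      value: us-east-1
    - name: network
      value:
        vpc:
          cidr: 10.0.0.0/16
        subnets:
        - az: us-east-1a
          private:
            cidr: 10.0.0.0/24
            id: subnet-0a7d376dde53c77ed
          public:
            cidr: 10.0.1.0/24
            id: subnet-0794d50f57e9801b6
        securityGroupOverrides:
          bastion: sg-1
          node: sg-5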
This section lists the variables for deploying workload clusters to Microsoft Azure. For information about the variables in the Configuration file variable column below, see Microsoft Azure in the Configuration File Variable Reference. In the Class-based cluster object structure column, all name/value pairs are set under spec.topology.variables in the Cluster object definition.
Configuration file variable | Class-based cluster object structure |
---|---|
AZURE_CLIENT_ID AZURE_CLIENT_SECRET AZURE_TENANT_ID |
N/A These settings are not exposed in the Cluster object. |
AZURE_LOCATION: eastus2 |
- name: location value: eastus2 |
AZURE_RESOURCE_GROUP: my-azure-rg |
- name: resourceGroup value: my-azure-rg |
AZURE_SUBSCRIPTION_ID: c789uce3-aaaa-bbbb-cccc-a51b6b0gb405 |
- name: subscriptionID value: c789uce3-aaaa-bbbb-cccc-a51b6b0gb405 |
AZURE_ENVIRONMENT: AzurePublicCloud |
- name: environment value: AzurePublicCloud |
AZURE_SSH_PUBLIC_KEY_B64: "c3NoLXJzYSBB […] vdGFsLmlv" |
- name: sshPublicKey value: c3NoLXJzYSBB […] vdGFsLmlv |
AZURE_FRONTEND_PRIVATE_IP: 10.0.0.100 Set if AZURE_ENABLE_PRIVATE_CLUSTER is true. |
- name: frontendPrivateIP value: 10.0.0.100 |
AZURE_CUSTOM_TAGS: "foo=bar, plan=prod" |
- name: customTags value: "foo=bar, plan=prod" |
AZURE_ENABLE_ACCELERATED_NETWORKING: true |
- name: acceleratedNetworking value: enabled: true |
AZURE_ENABLE_PRIVATE_CLUSTER: false |
- name: privateCluster value: enabled: false |
AZURE_VNET_CIDR: 10.0.0.0/16 AZURE_VNET_NAME: my-azure-vnet AZURE_VNET_RESOURCE_GROUP: my-azure-vnet-rg |
- name: network value: vnet: cidrBlocks: - 10.0.0.0/16 name: my-azure-vnet resourceGroup: my-azure-vnet-rg |
AZURE_IDENTITY_NAME: my-azure-id AZURE_IDENTITY_NAMESPACE: default |
- name: identityRef value: name: my-azure-id namespace: default |
AZURE_CONTROL_PLANE_DATA_DISK_SIZE_GIB: 256 AZURE_CONTROL_PLANE_OS_DISK_SIZE_GIB: 128 AZURE_CONTROL_PLANE_OS_DISK_STORAGE_ACCOUNT_TYPE: Premium_LRS AZURE_ENABLE_CONTROL_PLANE_OUTBOUND_LB: true AZURE_CONTROL_PLANE_OUTBOUND_LB_FRONTEND_IP_COUNT: 1 AZURE_CONTROL_PLANE_SUBNET_CIDR: 10.0.0.0/24 AZURE_CONTROL_PLANE_SUBNET_NAME: my-azure-cp-subnet AZURE_CONTROL_PLANE_MACHINE_TYPE: Standard_D2s_v3 |
- name: controlPlane value: dataDisks: - sizeGiB: 256 osDisk: sizeGiB: 128 storageAccountType: Premium_LRS outboundLB: enabled: true frontendIPCount: 1 subnet: cidr: 10.0.0.0/24 name: my-azure-cp-subnet vmSize: Standard_D2s_v3 |
AZURE_ENABLE_NODE_DATA_DISK: true AZURE_NODE_DATA_DISK_SIZE_GIB: 256 AZURE_NODE_OS_DISK_SIZE_GIB: 128 AZURE_NODE_OS_DISK_STORAGE_ACCOUNT_TYPE: Premium_LRS AZURE_ENABLE_NODE_OUTBOUND_LB: true AZURE_NODE_OUTBOUND_LB_FRONTEND_IP_COUNT: 1 AZURE_NODE_OUTBOUND_LB_IDLE_TIMEOUT_IN_MINUTES: 4 AZURE_NODE_SUBNET_CIDR: 10.0.1.0/24 AZURE_NODE_SUBNET_NAME: my-azure-worker-subnet AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3 |
- name: worker value: dataDisks: - sizeGiB: 256 osDisk: sizeGiB: 128 storageAccountType: Premium_LRS outboundLB: enabled: true frontendIPCount: 1 idleTimeoutInMinutes: 4 subnet: cidr: 10.0.1.0/24 name: my-azure-worker-subnet vmSize: Standard_D2s_v3 |
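As on the other platforms, the Azure rows combine into one variables list under spec.topology. A condensed sketch with values taken from the table:

spec:
  topology:
    variables:
    - name: location
      value: eastus2
    - name: resourceGroup
      value: my-azure-rg
    - name: environment
      value: AzurePublicCloud
    - name: network
      value:
        vnet:
          cidrBlocks:
          - 10.0.0.0/16
          name: my-azure-vnet
          resourceGroup: my-azure-vnet-rg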
This section lists the variables for configuring NSX Advanced Load Balancer (ALB) in TKG.
Configuration file variable | Class-based cluster object structure |
---|---|
AVI_CONTROL_PLANE_HA_PROVIDER |
topology: variables: - name: aviAPIServerHAProvider value: true |
Example Cluster object and its subordinate objects: When you pass a cluster configuration file to the --file flag of tanzu cluster create, the command converts the configuration file into a cluster object specification file. The example Cluster object and subordinate objects below were generated by tanzu cluster create from the following configuration file.
CLUSTER_NAME: example-cluster
CLUSTER_PLAN: dev
NAMESPACE: default
CNI: antrea
VSPHERE_NETWORK: /dc0/network/VM Network
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAACCCza2EBBB[...]ADGQAg/POl6vyWOmQ==
VSPHERE_USERNAME: [email protected]
VSPHERE_PASSWORD: 1234567AbC!
VSPHERE_SERVER: 10.XXX.XXX.71
VSPHERE_DATACENTER: /dc0
VSPHERE_RESOURCE_POOL: /dc0/host/cluster0/Resources/example-tkg
VSPHERE_DATASTORE: /dc0/datastore/vsanDatastore
VSPHERE_FOLDER: /dc0/vm/example-tkg
VSPHERE_TLS_THUMBPRINT: ""
VSPHERE_INSECURE: true
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.XXX.XXX.75
AVI_CONTROL_PLANE_HA_PROVIDER: false
ENABLE_AUDIT_LOGGING: false
ENABLE_DEFAULT_STORAGE_CLASS: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
ENABLE_AUTOSCALER: false
The resulting Cluster object and its subordinate objects:
apiVersion: cpi.tanzu.vmware.com/v1alpha1
kind: VSphereCPIConfig
metadata:
  name: example-cluster
  namespace: default
spec:
  vsphereCPI:
    mode: vsphereCPI
    tlsCipherSuites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    vmNetwork:
      excludeExternalSubnetCidr: 10.XXX.XXX.75/32
      excludeInternalSubnetCidr: 10.XXX.XXX.75/32
---
apiVersion: csi.tanzu.vmware.com/v1alpha1
kind: VSphereCSIConfig
metadata:
  name: example-cluster
  namespace: default
spec:
  vsphereCSI:
    config:
      datacenter: /dc0
      httpProxy: ""
      httpsProxy: ""
      insecureFlag: true
      noProxy: ""
      region: null
      tlsThumbprint: ""
      useTopologyCategories: false
      zone: null
    mode: vsphereCSI
---
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: ClusterBootstrap
metadata:
  annotations:
    tkg.tanzu.vmware.com/add-missing-fields-from-tkr: TKR-NAME
  name: example-cluster
  namespace: default
spec:
  additionalPackages:
  - refName: metrics-server*
  - refName: secretgen-controller*
  - refName: pinniped*
  cpi:
    refName: vsphere-cpi*
    valuesFrom:
      providerRef:
        apiGroup: cpi.tanzu.vmware.com
        kind: VSphereCPIConfig
        name: example-cluster
  csi:
    refName: vsphere-csi*
    valuesFrom:
      providerRef:
        apiGroup: csi.tanzu.vmware.com
        kind: VSphereCSIConfig
        name: example-cluster
  kapp:
    refName: kapp-controller*
---
apiVersion: v1
kind: Secret
metadata:
  name: example-cluster
  namespace: default
stringData:
  password: 1234567AbC!
  username: [email protected]
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  annotations:
    osInfo: ubuntu,20.04,amd64
    tkg.tanzu.vmware.com/cluster-controlplane-endpoint: 10.XXX.XXX.75
    tkg/plan: dev
  labels:
    tkg.tanzu.vmware.com/cluster-name: example-cluster
  name: example-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 100.96.0.0/11
    services:
      cidrBlocks:
      - 100.64.0.0/13
  topology:
    class: tkg-vsphere-default-CLUSTER-CLASS-VERSION
    controlPlane:
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=ubuntu
      replicas: 1
    variables:
    - name: cni
      value: antrea
    - name: controlPlaneCertificateRotation
      value:
        activate: true
        daysBefore: 90
    - name: auditLogging
      value:
        enabled: false
    - name: podSecurityStandard
      value:
        audit: restricted
        deactivated: false
        warn: restricted
    - name: apiServerEndpoint
      value: 10.XXX.XXX.75
    - name: aviAPIServerHAProvider
      value: false
    - name: vcenter
      value:
        cloneMode: fullClone
        datacenter: /dc0
        datastore: /dc0/datastore/vsanDatastore
        folder: /dc0/vm/example-tkg
        network: /dc0/network/VM Network
        resourcePool: /dc0/host/cluster0/Resources/example-tkg
        server: 10.XXX.XXX.71
        storagePolicyID: ""
        tlsThumbprint: ""
    - name: user
      value:
        sshAuthorizedKeys:
        - ssh-rsa AAAACCCza2EBBB[...]ADGQAg/POl6vyWOmQ==
    - name: controlPlane
      value:
        machine:
          diskGiB: 40
          memoryMiB: 8192
          numCPUs: 2
    - name: worker
      value:
        machine:
          diskGiB: 40
          memoryMiB: 4096
          numCPUs: 2
    - name: security
      value:
        fileIntegrityMonitoring:
          enabled: false
        imagePolicy:
          pullAlways: false
          webhook:
            enabled: false
            spec:
              allowTTL: 50
              defaultAllow: true
              denyTTL: 60
              retryBackoff: 500
        kubeletOptions:
          eventQPS: 50
          streamConnectionIdleTimeout: 4h0m0s
        systemCryptoPolicy: default
    version: KUBERNETES-VERSION
    workers:
      machineDeployments:
      - class: tkg-worker
        metadata:
          annotations:
            run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=ubuntu
        name: md-0
        replicas: 1
        strategy:
          type: RollingUpdate
The example object specification file above contains the following placeholder text:

- TKR-NAME in tkg.tanzu.vmware.com/add-missing-fields-from-tkr: TKR-NAME: Set by tanzu cluster create to a compatible Tanzu Kubernetes release (TKr), depending on your configuration. For example, tkg.tanzu.vmware.com/add-missing-fields-from-tkr: v1.24.10---vmware.1-tkg.2. v1.24.10---vmware.1-tkg.2 is the default TKr for this version of Tanzu Kubernetes Grid.
- KUBERNETES-VERSION in version: KUBERNETES-VERSION: Set by tanzu cluster create to a compatible Kubernetes version, depending on your configuration. For example, version: v1.24.10+vmware.1-tkg.2. v1.24.10+vmware.1-tkg.2 is the default Kubernetes version for this version of Tanzu Kubernetes Grid.
- CLUSTER-CLASS-VERSION in class: tkg-vsphere-default-CLUSTER-CLASS-VERSION: The version of the default class, set by tanzu cluster create. For example, class: tkg-vsphere-default-v1.0.0.
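For example, with the defaults named above substituted in, the placeholder lines in the generated spec resolve to values like the following:

metadata:
  annotations:
    tkg.tanzu.vmware.com/add-missing-fields-from-tkr: v1.24.10---vmware.1-tkg.2
spec:
  topology:
    class: tkg-vsphere-default-v1.0.0
    version: v1.24.10+vmware.1-tkg.2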