This topic describes configuring Windows worker-based Kubernetes clusters in VMware Enterprise PKS.
In Enterprise PKS you can provision a Windows worker-based Kubernetes cluster on vSphere with Flannel.
To provision a Windows worker-based Kubernetes cluster:
Warning: Support for Windows-based Kubernetes clusters is in beta and supports only vSphere with Flannel.
Do not enable this feature if you are using Enterprise PKS with vSphere with NSX-T, Google Cloud Platform (GCP), Azure, or Amazon Web Services (AWS).
We are actively looking for feedback on this beta feature. To submit feedback, send an email to email@example.com.
The following are required for creating a Windows worker-based Kubernetes cluster in Enterprise PKS:
Note: NSX-T does not support networking Windows containers. If this is a key requirement for you, submit feedback by sending an email to firstname.lastname@example.org.
You must have a vSphere stemcell 2019.7 or later for Windows Server version 2019. The latest vSphere stemcell for Windows Server version 2019 is recommended.
Note: Windows stemcells for vSphere are not available on Pivotal Network. These stemcells must be created using your own Windows Server disk image (ISO file). To create a Windows stemcell for vSphere, complete the procedures in Creating a Windows Stemcell for vSphere Manually or Creating a Windows Stemcell for vSphere Using stembuild (Beta).
A plan defines a set of resource types used for deploying a cluster.
Note: Before configuring your Windows worker plan, you must first activate and configure Plan 1. See Plans in Installing Enterprise PKS on vSphere for more information.
To activate and configure a plan, perform the following steps:
Click the plan that you want to activate. You must activate and configure either Plan 11, Plan 12, or Plan 13 to deploy a Windows worker-based cluster.
Select Active to activate the plan and make it available to developers deploying clusters.
Under Name, provide a unique name for the plan.
Note: If you deploy a cluster with multiple master/etcd node VMs, confirm that you have sufficient hardware to handle the increased load on disk write and network traffic. For more information, see Hardware recommendations in the etcd documentation.
In addition to meeting the hardware requirements for a multi-master cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see Monitoring Master/etcd Node VMs.
WARNING: To change the number of master/etcd nodes for a plan, you must ensure that no existing clusters use the plan. Enterprise PKS does not support changing the number of master/etcd nodes for plans with existing clusters.
Under Master/ETCD VM Type, select the type of VM to use for Kubernetes master/etcd nodes. For more information, including master node VM customization options, see the Master Node VM Size section of VM Sizing for Enterprise PKS Clusters.
Under Master Persistent Disk Type, select the size of the persistent disk for the Kubernetes master node VM.
Under Master/ETCD Availability Zones, select one or more AZs for the Kubernetes clusters deployed by Enterprise PKS. If you select more than one AZ, Enterprise PKS deploys the master VM in the first AZ and the worker VMs across the remaining AZs. If you are using multiple masters, Enterprise PKS deploys the master and worker VMs across the AZs in round-robin fashion.
Under Maximum number of workers on a cluster, set the maximum number of Kubernetes worker node VMs that Enterprise PKS can deploy for each cluster. Enter any whole number in this field.
Under Worker VM Type, select the type of VM to use for Kubernetes worker node VMs. For more information, including worker node VM customization options, see the Worker Node VM Number and Size section of VM Sizing for Enterprise PKS Clusters.
Note: BOSH does not support persistent disks for Windows VMs. If specifying Worker Persistent Disk Type on a Windows worker is a requirement for you, submit feedback by sending an email to email@example.com.
Under Worker Availability Zones, select one or more AZs for the Kubernetes worker nodes. Enterprise PKS deploys worker nodes equally across the AZs you select.
Under Kubelet customization - system-reserved, enter resource values that Kubelet can use to reserve resources for system daemons. For example, `memory=250Mi, cpu=150m`. For more information about system-reserved values, see the Kubernetes documentation.
Under Kubelet customization - eviction-hard, enter threshold limits that Kubelet can use to evict pods when a resource drops below its limit. Enter limits in the format EVICTION-SIGNAL=QUANTITY. For example, `memory.available=100Mi, nodefs.available=10%, nodefs.inodesFree=5%`. For more information about eviction thresholds, see the Kubernetes documentation.
WARNING: Use the Kubelet customization fields with caution. If you enter values that are invalid or that exceed the limits the system supports, Kubelet might fail to start. If Kubelet fails to start, you cannot create clusters.
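For reference, the plan fields above map onto standard Kubelet flags. A hedged sketch of the equivalent flag values, using the example settings from this section:

```
--system-reserved=memory=250Mi,cpu=150m
--eviction-hard=memory.available=100Mi,nodefs.available=10%,nodefs.inodesFree=5%
```

How Enterprise PKS passes these values to Kubelet is an implementation detail; treat this only as a guide to the expected value syntax.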
Under Kubelet customization - Windows pause image location, enter the location of the Windows pause image. The default value, `mcr.microsoft.com/k8s/core/pause:1.2.0`, configures Enterprise PKS to pull the Windows pause image from the Microsoft Docker registry.
(Optional) To add custom workloads to each cluster in this plan, enter YAML definitions using `---` as a separator. For more information, see Adding Custom Linux Workloads.
Note: Windows in Kubernetes does not support privileged containers. See Feature Restrictions in the Kubernetes documentation for additional information.
(Optional) Enable or disable one or more admission controller plugins: PodSecurityPolicy and SecurityContextDeny. See Admission Plugins for more information. Windows in Kubernetes does not support the DenyEscalatingExec admission plugin. See API in the Kubernetes documentation for additional information.
To configure networking, do the following:
(Optional) Enter values for Kubernetes Pod Network CIDR Range and Kubernetes Service Network CIDR Range.
For example, `10.220.0.0/16`. vSphere with Flannel does not support customizing the Service Network CIDR range for Windows workers. If customizing the Service Network CIDR range is a key requirement for you, submit feedback by sending an email to firstname.lastname@example.org.
Note: This setting will not set the proxy for running Kubernetes workloads or pods.
Note: Using an HTTPS connection to the proxy server is not supported. HTTP and HTTPS proxy options can only be configured with an HTTP connection to the proxy server. You cannot populate either of the proxy URL fields with an HTTPS URL. The proxy host and port can be different for HTTP and HTTPS traffic, but the proxy protocol must be HTTP.
* Any additional IP addresses or domain names that should bypass the proxy.
.example2.com, example3.com, 198.51.100.0/24, 203.0.113.0/24, 192.0.2.0/24

Note: By default, the `10.100.0.0/8` and `10.200.0.0/8` IP address ranges, `.internal`, `.svc`, `.svc.cluster.local`, `.svc.cluster`, and your Enterprise PKS FQDN are not proxied. This allows internal Enterprise PKS communication.

Do not use the `_` character in the No Proxy field. Entering an underscore character in this field can cause upgrades to fail.

Because some jobs in the VMs accept `*.` as a wildcard, while others only accept `.`, we recommend that you define a wildcard domain using both of them. For example, to denote `example.com` as a wildcard domain, add both `*.example.com` and `example.com` to the No Proxy property.
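Given the protocol rules above, a valid proxy configuration might look like the following; the host and ports are placeholders, not defaults:

```
HTTP Proxy URL:  http://proxy.example.com:3128
HTTPS Proxy URL: http://proxy.example.com:3129
No Proxy:        *.example.com, example.com, 198.51.100.0/24
```

Note that both proxy URLs use the `http://` scheme, even for the proxy that handles HTTPS traffic.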
- Under Allow outbound internet access from Kubernetes cluster vms (IaaS-dependent), ignore the Enable outbound internet access checkbox.
- Click Save.
- When prompted by Ops Manager to upload a stemcell, follow the instructions and provide your previously created vSphere stemcell 2019.7 or later for Windows Server version 2019.
- To create a Windows worker-based cluster, follow the steps in Creating Clusters.
To deploy a Windows pod, Kubelet deploys a Windows container image fetched from a Docker registry. Microsoft restricts distribution of Windows container base images and the fetched Windows container image is typically pulled from the Microsoft Docker registry.
To deploy Windows pods in an air-gapped environment you must have a Windows container image in a Docker registry accessible from your Enterprise PKS environment.
To prepare a Windows pause image for an air-gapped environment, perform the following:
- Create an accessible Windows Server 2019 machine in your environment.
- Install Docker on this Windows Server 2019 machine.
- Configure the machine's Docker daemon to allow non-redistributable artifacts to be pushed to your private registry. For information about configuring your Docker daemon, see Allow push of nondistributable artifacts in the Docker documentation.
- Open a command line on the Windows machine.
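As a sketch, the Docker daemon setting from the step above can be expressed in the daemon's `daemon.json` configuration file; `registry.example.com` is a placeholder for your private registry host:

```json
{
  "allow-nondistributable-artifacts": ["registry.example.com"]
}
```

After editing `daemon.json`, restart the Docker daemon for the setting to take effect.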
To download a Windows container image from the Microsoft Docker registry, run the following command:
```
docker pull mcr.microsoft.com/k8s/core/pause:1.2.0
```
To tag the Windows container image, run the following command:
```
docker tag mcr.microsoft.com/k8s/core/pause:1.2.0 REGISTRY-ROOT/windows/pause:1.2.0
```
Where `REGISTRY-ROOT` is your private registry's URI.
To upload the Windows container image to your accessible private registry, run the following command:
```
docker push PAUSE-IMAGE-URI
```
Where `PAUSE-IMAGE-URI` is the URI to the Windows pause image in your private registry. Your pause image URI should follow the pattern `REGISTRY-ROOT/windows/pause:TAG`, for example `REGISTRY-ROOT/windows/pause:1.2.0`.
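The pull, tag, and push steps above can be sketched end to end in a shell session; `registry.example.com` is a placeholder for your own registry root:

```shell
# Placeholder for your private registry's URI; substitute your own value.
REGISTRY_ROOT="registry.example.com"

# The pause image URI follows the REGISTRY-ROOT/windows/pause:TAG pattern.
PAUSE_IMAGE_URI="${REGISTRY_ROOT}/windows/pause:1.2.0"
echo "${PAUSE_IMAGE_URI}"

# With Docker installed and registry credentials configured, the image
# is then mirrored with (commands shown for reference):
#   docker pull mcr.microsoft.com/k8s/core/pause:1.2.0
#   docker tag mcr.microsoft.com/k8s/core/pause:1.2.0 "${PAUSE_IMAGE_URI}"
#   docker push "${PAUSE_IMAGE_URI}"
```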
To configure Enterprise PKS to fetch your accessible Windows container image when deploying Windows pods, perform the following:
- Open the Enterprise PKS tile.
- Click the Windows worker Plan that you want to configure to use your accessible private registry.
- Modify the Kubelet customization - Windows pause image location property to be your pause image URI.