
VMware Enterprise PKS | 19 AUG 2019

Check for additions and updates to these release notes.

VMware Enterprise PKS is used to create and manage on-demand Kubernetes clusters using the PKS CLI.
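A typical workflow with the PKS CLI looks like the following sketch. The API endpoint, credentials, cluster name, and plan name are placeholders; substitute values from your own environment.

```
# Log in to the PKS API (placeholder endpoint and credentials):
pks login -a api.pks.example.com -u alana -p 'PASSWORD' --ca-cert /tmp/pks-ca.crt

# Create a cluster using a plan configured in the PKS tile:
pks create-cluster my-cluster --external-hostname my-cluster.example.com --plan small

# Check provisioning status, then fetch a kubeconfig and verify the nodes:
pks cluster my-cluster
pks get-credentials my-cluster
kubectl get nodes
```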

Versions:

v1.3.8

Release Date: August 19, 2019

Release Snapshot

Component Details
PKS version v1.3.8
Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.107
Kubernetes version v1.12.8
On-Demand Broker version v0.24.0
CFCR version v0.25.14
Docker version v18.06.3-ce
NSX-T versions v2.3.1, v2.4.0.1
NCP version v2.4.0
vSphere versions for NSX-T 2.3.1 v6.7.0, v6.7 U1, v6.5 U1, v6.5 U2
vSphere versions for NSX-T 2.4.0 6.7 U1 EP06 (ESXi670-201901001), 6.5 U2 P03 (ESXi650-201811002)
Backup and Restore SDK version v1.8.0
UAA version v64.3

NOTE: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

NOTE: NSX-T v2.4 implements a new Policy API that PKS v1.3.8 does not support. If you are using NSX-T v2.4 with PKS v1.3.8, you must use the "Advanced Networking" tab in NSX Manager to create, read, update, and delete the network objects required for PKS.

Upgrade

The supported upgrade paths to PKS v1.3.8 are as follows:

  • PKS v1.3.4 or later

For general upgrade information, see Upgrading PKS. For specific instructions for upgrading to PKS v1.3.8 and NSX-T v2.4.0.1, see Upgrading PKS with NSX-T to NSX-T v2.4.0.1.

When upgrading to NSX-T 2.4:

  • Use the official VMware NSX-T Data Center 2.4 build.
  • Apply the NSX-T v2.4.0.1 hot-patch. For more information, see KB article 67499 at the VMware Knowledge Base.
  • To obtain the NSX-T v2.4.0.1 hot-patch, open a support ticket with VMware Global Support Services (GSS) for NSX-T Engineering.

New Features

PKS v1.3.8 adds the following:

  • Security Fix: Addresses CVE-2019-3794, the UAA client.write scope vulnerability (see the check sketched below).
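If you want to confirm which scopes and authorities a UAA client actually holds after patching, one hedged way is the UAA CLI (uaac). The UAA endpoint, admin client secret, and client name below are placeholders.

```
# Target the UAA on the PKS API VM (placeholder endpoint):
uaac target https://api.pks.example.com:8443 --skip-ssl-validation

# Authenticate with an admin client (placeholder secret):
uaac token client get admin -s ADMIN_CLIENT_SECRET

# Inspect a client's scopes and authorities (placeholder client name):
uaac client get some-client
```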

v1.3.7

Release Date: July 17, 2019

Release Snapshot

Component Details
PKS version v1.3.7
Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.76
Kubernetes version v1.12.8
On-Demand Broker version v0.24
CFCR version v0.25.11
Docker version v18.06.3-ce
NSX-T versions v2.3.1, v2.4.0.1
NCP version v2.4.0
vSphere versions for NSX-T 2.3.1 v6.7.0, v6.7 U1, v6.5 U1, v6.5 U2
vSphere versions for NSX-T 2.4.0 6.7 U1 EP06 (ESXi670-201901001), 6.5 U2 P03 (ESXi650-201811002)

NOTE: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

NOTE: NSX-T v2.4 implements a new Policy API that PKS v1.3.7 does not support. If you are using NSX-T v2.4 with PKS v1.3.7, you must use the "Advanced Networking" tab in NSX Manager to create, read, update, and delete the network objects required for PKS.

Upgrade

The supported upgrade paths to PKS v1.3.7 are as follows:

  • PKS v1.3.4 or later

For general upgrade information, see Upgrading PKS. For specific instructions for upgrading to PKS v1.3.7 and NSX-T v2.4.0.1, see Upgrading PKS with NSX-T to NSX-T v2.4.0.1.

When upgrading to NSX-T 2.4:

  • Use the official VMware NSX-T Data Center 2.4 build.
  • Apply the NSX-T v2.4.0.1 hot-patch. For more information, see KB article 67499 at the VMware Knowledge Base.
  • To obtain the NSX-T v2.4.0.1 hot-patch, open a support ticket with VMware Global Support Services (GSS) for NSX-T Engineering.

New Features

PKS v1.3.7 adds the following:

  • Security Fix: Updates the stemcell to v170.76, which addresses the ZombieLoad CVE.
  • Security Fix: Fixes a security issue around PKS cluster restore. Use BOSH Backup and Restore (BBR) CLI v1.5.0 or later with this version of PKS (see the sketch after this list).
  • Security Fix: Updates the fluent-bit container image in the PKS Telemetry agent.
  • Security Fix: Updates base images to the latest Xenial SHA for all sink resources.
  • Bug Fix: Enables the kubelet to write data to /var/vcap/data/kubelet, which prevents pod eviction due to insufficient resources in emptyDir mounts.
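For the cluster restore fix above, the release notes call for BBR CLI v1.5.0 or later. A minimal backup sketch follows; the BOSH Director address, credentials, CA certificate path, and service-instance deployment name are all placeholders for your environment.

```
# Back up a PKS-deployed cluster (cluster deployments take the form
# service-instance_<UUID>; all values below are placeholders):
bbr deployment \
  --target 10.0.0.5 \
  --username bbr_client \
  --password "${BBR_CLIENT_PASSWORD}" \
  --ca-cert /var/tempest/workspaces/default/root_ca_certificate \
  --deployment service-instance_aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee \
  backup
```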

v1.3.6

Release Date: April 8, 2019

Release Snapshot

Component Details
PKS version v1.3.6
Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.15
Kubernetes version v1.12.7
On-Demand Broker version v0.24
CFCR version v0.25.11
Docker version v18.06.3-ce
NSX-T versions v2.3.1, v2.4.0.1
NCP version v2.4.0
vSphere versions for NSX-T 2.3.1 v6.7.0, v6.7 U1, v6.5 U1, v6.5 U2
vSphere versions for NSX-T 2.4.0 6.7 U1 EP06 (ESXi670-201901001), 6.5 U2 P03 (ESXi650-201811002)

NOTE: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

NOTE: NSX-T v2.4 implements a new Policy API that PKS v1.3.6 does not support. If you are using NSX-T v2.4 with PKS v1.3.6, you must use the "Advanced Networking" tab in NSX Manager to create, read, update, and delete the network objects required for PKS.

Upgrade

The supported upgrade paths to PKS v1.3.6 are as follows:

  • PKS v1.3.4 or later

For general upgrade information, see Upgrading PKS. For specific instructions for upgrading to PKS v1.3.6 and NSX-T v2.4.0.1, see Upgrading PKS with NSX-T to NSX-T v2.4.0.1.

When upgrading to NSX-T 2.4:

  • Use the official VMware NSX-T Data Center 2.4 build.
  • Apply the NSX-T v2.4.0.1 hot-patch. For more information, see KB article 67499 at the VMware Knowledge Base.
  • To obtain the NSX-T v2.4.0.1 hot-patch, open a support ticket with VMware Global Support Services (GSS) for NSX-T Engineering.

New Features

PKS v1.3.6 adds the following:

  • Adds the environment_provider telemetry property.
  • Adds support for nsx-cf-cni 2.4.0.12511604.
  • Adds the remaining plans to the osb-proxy configuration.

Known Issues

The following known issues apply to v1.3.6.

  • Azure Resource Group Field in the Kubernetes Cloud Provider Is Ignored. On the Azure IaaS platform, the Resource Group field in the Kubernetes Cloud Provider section of the PKS tile is ignored. The PKS VM is deployed to the same Resource Group as the Ops Manager and BOSH VMs.

  • Upgrades from NSX-T v2.3.x to v2.4.0.1 Fail for Bare Metal Edge Nodes. If you are using a Bare Metal Edge Node, do not upgrade to the NSX-T v2.4.0.1 hot-patch.

  • Master and Worker Nodes with Small Ephemeral Disks Can Cause Upgrade Failure. PKS deploys packages to the ephemeral disk, `/var/vcap/data`, during installations and upgrades. If worker node VMs have ephemeral disks smaller than 8 GB, the disk can fill during an upgrade and cause the upgrade to fail. Cluster upgrades can present error messages such as the following: `{"time":999999999,"error":{"code":450001,"message":"Response exceeded maximum allowed length"}}`. Workaround: In the plans you use to deploy clusters, ensure that worker node ephemeral disks are set to greater than 8 GB (see the disk-check sketch after this list). For plan configuration instructions, see the Plans section of the Installing PKS topic for your IaaS. This issue should not affect new installations of PKS v1.3.x because the default ephemeral disk size in plans is larger than 8 GB.

  • PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0). When VMs have been powered down for multiple days, turning them back on and issuing a `bosh recreate` to recreate the VMs causes the pods to get stuck in a "ContainerCreating" state. Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.

  • Cluster Upgrades from v1.3.0 May Fail on Azure if Services Are Exposed. Customers deploying PKS to Azure can experience cluster failures when upgrading a cluster from v1.3.0 to v1.3.1 or v1.3.2. The error message "result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns" is a symptom of this issue, which is caused by a timeout condition on nodes that host Kubernetes pods exposed externally through a Kubernetes service. New cluster creations and cluster scale operations are not affected. Customers deploying PKS to Azure who experience this scenario are advised to contact Support for assistance until the issue is resolved in a patch release.

  • Kubelet Customization Feature Only Enabled for Plan 1. PKS v1.3.4 introduced the ability to configure the Kubelet startup parameters system-reserved and eviction-hard within a plan, but this feature is only functional in Plan 1. For more information, see the Plans section of the Installing PKS topic for your IaaS, such as Installing PKS on vSphere.
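For the ephemeral disk issue above, a quick hedged check of disk headroom on a worker node before upgrading might look like the following; the deployment name and instance index are placeholders.

```
# Inspect ephemeral disk usage on a cluster worker node
# (deployment name and instance index are placeholders):
bosh -d service-instance_aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee \
  ssh worker/0 -c 'df -h /var/vcap/data'
```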

v1.3.5

Release Date: March 28, 2019

Release Snapshot

Component Details
PKS version v1.3.5
Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.15
Kubernetes version v1.12.7
On-Demand Broker version v0.24
CFCR version v0.25.11
Docker version v18.06.3-ce
NSX-T versions* v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.2
vSphere versions v6.7.0, v6.7 U1, v6.5 U1, v6.5 U2

NOTE: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

Upgrade

The supported upgrade path to PKS v1.3.5 is PKS v1.3.4 or later.

For instructions, see Upgrading PKS.

New Features

PKS v1.3.5 adds the following:

  • Support for Kubernetes v1.12.7 (a verification sketch follows this list).
  • Fix: CVE-2019-1002101. Kubernetes v1.12.7 addresses this CVE.
  • Fix: CVE-2019-9946. Kubernetes v1.12.7 addresses this CVE.
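To confirm that an upgraded cluster picked up the patched Kubernetes version, a simple check is sketched below; the cluster name is a placeholder.

```
# Fetch a kubeconfig for the cluster (placeholder name):
pks get-credentials my-cluster

# The server and node versions should both report v1.12.7:
kubectl version --short
kubectl get nodes -o wide
```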

Known Issues

The following known issues apply to v1.3.5.

  • Master and Worker Nodes with Small Ephemeral Disks Can Cause Upgrade Failure. PKS deploys packages to the ephemeral disk, `/var/vcap/data`, during installations and upgrades. If worker node VMs have ephemeral disks smaller than 8 GB, the disk can fill during an upgrade and cause the upgrade to fail. Cluster upgrades can present error messages such as the following: `{"time":999999999,"error":{"code":450001,"message":"Response exceeded maximum allowed length"}}`. Workaround: In the plans you use to deploy clusters, ensure that worker node ephemeral disks are set to greater than 8 GB. For plan configuration instructions, see the Plans section of the Installing PKS topic for your IaaS. This issue should not affect new installations of PKS v1.3.x because the default ephemeral disk size in plans is larger than 8 GB.
  • PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0). When VMs have been powered down for multiple days, turning them back on and issuing a `bosh recreate` to recreate the VMs causes the pods to get stuck in a "ContainerCreating" state. Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.
  • Cluster Upgrades from v1.3.0 May Fail on Azure if Services Are Exposed. Customers deploying PKS to Azure can experience cluster failures when upgrading a cluster from v1.3.0 to v1.3.1 or v1.3.2. The error message "result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns" is a symptom of this issue, which is caused by a timeout condition on nodes that host Kubernetes pods exposed externally through a Kubernetes service. New cluster creations and cluster scale operations are not affected. Customers deploying PKS to Azure who experience this scenario are advised to contact Support for assistance until the issue is resolved in a patch release.
  • Kubelet Customization Feature Only Enabled for Plan 1. PKS v1.3.4 introduced the ability to configure the Kubelet startup parameters system-reserved and eviction-hard within a plan, but this feature is only functional in Plan 1. For more information, see the Plans section of the Installing PKS topic for your IaaS, such as Installing PKS on vSphere.

v1.3.4

Release Date: March 26, 2019

Release Snapshot

Component Details
PKS version v1.3.4
Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.15
Kubernetes version v1.12.6
On-Demand Broker version v0.24
CFCR version v0.25.11
Docker version v18.06.3-ce
NSX-T versions* v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.2
vSphere versions v6.7.0, v6.7 U1, v6.5 U1, v6.5 U2

NOTE: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

Upgrade

The supported upgrade paths to PKS v1.3.4 are as follows:

  • When upgrading from PKS v1.3.x: PKS v1.3.1 or later
  • When upgrading from PKS v1.2.x: PKS v1.2.8 or later
For instructions, see Upgrading PKS.

New Features

PKS v1.3.4 adds the following:

  • Custom DNS configuration for Kubernetes clusters using Network Profiles (see the sketch after this list). For more information, see DNS Configuration for Kubernetes Clusters in the Defining Network Profiles topic.
  • Support for VMware NSX Container Plug-in (NCP) v2.3.2.
  • Support for additional plans. Operators can configure up to ten sets of resource types, or plans, in the PKS tile. All plans except the first can be made available or unavailable to developers deploying clusters. Plan 1 must be configured and made available as a default for developers.
  • Kubelet customization. You can configure the Kubelet to reserve compute resources for system daemons in the Plans pane of the PKS tile. For more information, see the Plans section of the Installing PKS topic for your IaaS.
  • Fix: CVE-2019-1002100. Kubernetes v1.12.6 addresses this CVE.
  • Fix: Updated the Telemetry URL.
  • Fix: Resolved a known issue where vSphere Cloud Provider configuration could fail if credentials contained non-alphanumeric characters, for example `#`, `\`, and `"`.
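As a sketch of the custom DNS feature above, the network profile below sets a DNS server for cluster nodes. The profile name and IP address are examples, and the nodes_dns parameter follows the Defining Network Profiles topic referenced above; treat the exact parameter as illustrative.

```
# Write an example network profile with a custom node DNS server:
cat > np-custom-dns.json <<'EOF'
{
  "name": "np-custom-dns",
  "description": "Example profile with custom node DNS",
  "parameters": {
    "nodes_dns": ["10.115.1.1"]
  }
}
EOF

# Register the profile, then apply it when creating a cluster:
pks create-network-profile np-custom-dns.json
pks create-cluster my-cluster --external-hostname my-cluster.example.com \
  --plan small --network-profile np-custom-dns
```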

Known Issues

The following known issues apply to v1.3.4.

  • Master and Worker Nodes with Small Ephemeral Disks Can Cause Upgrade Failure. PKS deploys packages to the ephemeral disk, `/var/vcap/data`, during installations and upgrades. If worker node VMs have ephemeral disks smaller than 8 GB, the disk can fill during an upgrade and cause the upgrade to fail. Cluster upgrades can present error messages such as the following: `{"time":999999999,"error":{"code":450001,"message":"Response exceeded maximum allowed length"}}`. Workaround: In the plans you use to deploy clusters, ensure that worker node ephemeral disks are set to greater than 8 GB. For plan configuration instructions, see the Plans section of the Installing PKS topic for your IaaS. This issue should not affect new installations of PKS v1.3.x because the default ephemeral disk size in plans is larger than 8 GB.
  • PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0). When VMs have been powered down for multiple days, turning them back on and issuing a `bosh recreate` to recreate the VMs causes the pods to get stuck in a "ContainerCreating" state. Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.
  • Cluster Upgrades from v1.3.0 May Fail on Azure if Services Are Exposed. Customers deploying PKS to Azure can experience cluster failures when upgrading a cluster from v1.3.0 to v1.3.1 or v1.3.2. The error message "result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns" is a symptom of this issue, which is caused by a timeout condition on nodes that host Kubernetes pods exposed externally through a Kubernetes service. New cluster creations and cluster scale operations are not affected. Customers deploying PKS to Azure who experience this scenario are advised to contact Support for assistance until the issue is resolved in a patch release.
  • Kubelet Customization Feature Only Enabled for Plan 1. PKS v1.3.4 introduces the ability to configure Kubelet startup parameters system-reserved and eviction-hard within a plan. For more information, see the Plans section of the Installing PKS topic for your IaaS, such as Installing PKS on vSphere. This feature is only functional in Plan 1 for PKS v1.3.4.

v1.3.3

Release Date: February 22, 2019

Release Snapshot

Component Details
PKS version v1.3.3
Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.15
Kubernetes version v1.12.5
Docker version v18.06.3-ce
On-Demand Broker version v0.24
CFCR v35.0.0
NSX-T versions* v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.1
vSphere versions v6.7.0, v6.7 U1, v6.5 U1, v6.5 U2

NOTE: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

Upgrade

The supported upgrade paths to PKS v1.3.3 are as follows:

  • When upgrading from PKS v1.3.x: PKS v1.3.1 or v1.3.2
  • When upgrading from PKS v1.2.x: PKS v1.2.8 through v1.2.11
For more information, see Upgrading PKS and Upgrading PKS with NSX-T.

New Features

PKS v1.3.3 adds the following:

  • Fix: CVE-2019-5736. This release updates the version of Docker deployed by PKS to v18.06.3-ce, which addresses a runc vulnerability whereby a malicious image could run in privileged mode and elevate to root access on worker nodes. Docker v18.06.2-ce, deployed by PKS v1.3.2, did not contain the correctly compiled binary; Docker v18.06.3-ce includes the correct runc binary to address the CVE. One way to verify the upgrade is shown below.
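A hedged spot-check that worker nodes picked up the patched Docker engine after upgrading; the cluster name is a placeholder.

```
# Fetch credentials for the upgraded cluster (placeholder name), then list
# nodes; the CONTAINER-RUNTIME column shows the Docker engine version,
# which should report docker://18.6.3 rather than 18.6.2:
pks get-credentials my-cluster
kubectl get nodes -o wide
```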

Known Issues

The Known Issues for v1.3.2 also apply to v1.3.3.

v1.3.2

Release Date: February 13, 2019

Release Snapshot

Component Details
PKS version v1.3.2
Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.15
Kubernetes version v1.12.4
Docker version v18.06.2-ce (CFCR v0.25.8)
On-Demand Broker version v0.24
CFCR v0.25.8
NSX-T versions* v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.1
vSphere versions v6.7.0, v6.7 U1, v6.5 U1, v6.5 U2

NOTE: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

Upgrade

WARNING: PKS v1.3.1 and earlier includes a critical CVE. Follow the procedures in the PKS upgrade approach for CRITICAL CVE article at the VMware Support Knowledge Base to perform an upgrade to PKS v1.3.2.

The supported upgrade paths to PKS v1.3.2 are as follows:

  • When upgrading from PKS v1.3.x: PKS v1.3.1
  • When upgrading from PKS v1.2.x: PKS v1.2.8 or v1.2.9

For upgrade instructions, see Upgrading PKS.

New Features

There are no new features in this release.

Fixed Issues

The PKS v1.3.2 release fixes the following known issues from previous releases:

  • Fix: CVE-2019-3779. This fix addresses a vulnerability where certs signed by the Kubernetes API could be used to gain access to a PKS-deployed cluster's etcd service.
  • Fix: CVE-2019-3780. This fixes a regression bug in PKS where vCenter IaaS credentials intended for the vSphere Cloud Provider were written on worker node VM disks.
  • Fix: Clusters can now be successfully created if there are pre-existing Kubernetes clusters using the same hostname.

Known Issues

The PKS v1.3.2 release has the following known issues and other changes:

  • Docker v18.06.2-ce, which ships with this release, has a known vulnerability (CVE-2019-5736).
  • PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0). When VMs have been powered down for multiple days, turning them back on and issuing a `bosh recreate` to recreate the VMs causes the pods to get stuck in a "ContainerCreating" state. Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.
  • Deploy Fails if vSphere Master Credentials Field Has Special Characters Without Quotes. If you install PKS on vSphere and enter credentials in the vCenter Master Credentials field of the Kubernetes Cloud Provider pane of the PKS tile that contain special characters, such as pound sign (#), dollar sign ($), comma (,), exclamation point (!), or dash (-), your deployment can fail with the following error: "ServerFaultCode: Cannot complete login due to an incorrect user name or password." To temporarily resolve this issue if NSX-T is NOT deployed, place quotes around the credentials in the cloud provider configuration, for example "SomeP4$$w0rd#!", and then redeploy the PKS tile by clicking Apply Changes. If you are using NSX-T, do not use special characters in this field.
  • Cluster Upgrades from v1.3.0 May Fail on Azure if Services Are Exposed. Customers deploying PKS to Azure can experience cluster failures when upgrading a cluster from v1.3.0 to v1.3.1 or v1.3.2. The error message "result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns" is a symptom of this issue, which is caused by a timeout condition on nodes that host Kubernetes pods exposed externally through a Kubernetes service. New cluster creations and cluster scale operations are not affected. Customers deploying PKS to Azure who experience this scenario are advised to contact Support for assistance until the issue is resolved in a patch release.

v1.3.1

Release Date: February 08, 2019

Release Snapshot

Component Details
PKS version v1.3.1
Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.15
Kubernetes version v1.12.4
Docker version v18.06.1-ce (CFCR)
On-Demand Broker version v0.24
CFCR v0.25.8
NSX-T versions* v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.1
vSphere versions v6.7.0, v6.7 U1, v6.5 U1, v6.5 U2

NOTE: Ops Manager v2.3.10 and later in the v2.3 version line and Ops Manager v2.4.4 and later in the v2.4 version line do not support PKS v1.3 on Azure. Before deploying PKS v1.3 on Azure, you must install Ops Manager v2.3.9 or earlier in the 2.3 version line or Ops Manager v2.4.3 or earlier in the 2.4 version line.

Upgrade

The supported upgrade paths to PKS v1.3.1 are as follows:

  • When upgrading from PKS v1.3.x: PKS v1.3.0
  • When upgrading from PKS v1.2.x: PKS v1.2.7 or v1.2.8

For upgrade instructions, see Upgrading PKS.

New Features

The PKS v1.3.1 release adds support for the following:

  • Certificates for the Etcd instance for each Kubernetes cluster provisioned by PKS are generated with a four-year lifetime and signed by a new Etcd Certificate Authority (CA). A sketch for inspecting certificate validity appears below.
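A hedged way to inspect the validity window of the new etcd certificates from a cluster master node; the deployment name is a placeholder and the certificate path is an assumption for illustration.

```
# Print the notBefore/notAfter dates of the etcd certificate
# (deployment name is a placeholder; cert path is assumed):
bosh -d service-instance_aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee \
  ssh master/0 -c 'sudo openssl x509 -noout -dates -in /var/vcap/jobs/etcd/config/etcd.crt'
```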

Fixed Issues

The PKS v1.3.1 release fixes the following known issues from previous releases:

  • Upgrading PKS no longer fails during upgrades if there are Kubernetes clusters with duplicate hostnames.
  • Deploying PKS no longer fails if an entry in the No Proxy field contains special characters such as the dash (-) character.
  • The Kubernetes API now responds with the CA certificate that signed the Kubernetes cluster’s certificate so that customer scripts such as the get-pks-k8s-config.sh tool will function again.

Known Issues

The PKS v1.3.1 release has the following known issues and other changes:

  • PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0). When VMs have been powered down for multiple days, turning them back on and issuing a `bosh recreate` to recreate the VMs causes the pods to get stuck in a "ContainerCreating" state. Workaround: See PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.
  • Deploy Fails if vSphere Master Credentials Field Has Special Characters Without Quotes. If you install PKS on vSphere and enter credentials in the vCenter Master Credentials field of the Kubernetes Cloud Provider pane of the PKS tile that contain special characters, such as pound sign (#), dollar sign ($), comma (,), exclamation point (!), or dash (-), your deployment can fail with the following error: "ServerFaultCode: Cannot complete login due to an incorrect user name or password." To temporarily resolve this issue if NSX-T is NOT deployed, place quotes around the credentials in the cloud provider configuration, for example "SomeP4$$w0rd#!", and then redeploy the PKS tile by clicking Apply Changes.

  • Cluster Upgrades from v1.3.0 May Fail on Azure if Services Are Exposed. Customers deploying PKS to Azure can experience cluster failures when upgrading a cluster from v1.3.0 to v1.3.1 or v1.3.2. The error message "result: 1 of 2 post-start scripts failed. Failed Jobs: kubelet. Successful Jobs: bosh-dns" is a symptom of this issue, which is caused by a timeout condition on nodes that host Kubernetes pods exposed externally through a Kubernetes service. New cluster creations and cluster scale operations are not affected. Customers deploying PKS to Azure who experience this scenario are advised to contact Support for assistance until the issue is resolved in a patch release.

v1.3.0

Release Date: January 16, 2019

Release Snapshot

Component Details
PKS version v1.3.0
Ops Manager versions v2.3.1+, v2.4.0+
Stemcell version v170.15
Kubernetes version v1.12.4
Docker version v18.06.1-ce (CFCR)
On-Demand Broker version v0.24
NSX-T versions* v2.2, v2.3.0.2, v2.3.1
NCP version v2.3.1
vSphere versions v6.7.0, v6.7 U1, v6.5 U1, v6.5 U2

NSX-T Version Support

PKS v1.3 supports NSX-T v2.2 and v2.3, with the following caveats:

Upgrade

The supported upgrade paths to the PKS v1.3.0 release are from PKS v1.2.5 and later. For more information, see Upgrading PKS.

NOTE: Upgrading to the PKS v1.3.0 release causes all certificates to be automatically regenerated. The old certificate authority is still trusted and has a validity of one year, but the new certificates are signed with a new certificate authority, which is valid for four years.

New Features

The PKS v1.3.0 release adds support for the following:

  • Deployment on Azure.
  • BOSH Backup and Restore (BBR) for single-master clusters.
  • Custom and routable pod networks on NSX-T.
  • Large-size NSX-T load balancers with Bare Metal NSX-T Edge Nodes.
  • HTTP proxy for NSX-T components.
  • Ability to specify the size of the Pods IP Block subnet using a network profile.
  • Bootstrap security groups, custom floating IPs, and edge router selection using network profiles.
  • Sink resources in air-gapped environments.
  • Creating sink resources with the PKS CLI.
  • Sink resources that include both pod logs and events from the Kubernetes API, combined in a shared format that provides operators with a robust set of filtering and monitoring options.
  • Multiple NSX-T Tier-0 (T0) logical routers for use with PKS multi-tenant environments.
  • Multiple PKS foundations on the same NSX-T.
  • A smoke tests errand that uses the PKS CLI to create a Kubernetes cluster and then delete it. If the creation or deletion fails, the errand fails and the installation of the PKS tile is aborted. For more information, see the Errands section of the Installing PKS topic for your IaaS.
  • Scaling down the number of worker nodes (see the sketch after this list).
  • Defining the CIDR range for Kubernetes pods and services on Flannel networks. For more information, see the Networking section of the Installing PKS topic for your IaaS.
  • Kubernetes v1.12.4.
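As a sketch of the worker scale-down feature in the list above, the PKS CLI resize command reduces a cluster's worker count; the cluster name and node count are examples, and the target count must be allowed by the cluster's plan.

```
# Scale a cluster down to two worker nodes (placeholder values):
pks resize my-cluster --num-nodes 2

# Watch the resize action complete:
pks cluster my-cluster
```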

Fixed Issues

The PKS v1.3.0 release fixes the following known issues and security vulnerabilities:

  • The No Proxy property for vSphere now accepts wildcard domains such as "*.example.com" and "example.com".
  • Resolved an issue where NSX-T usernames and passwords containing special characters were not accepted.
  • CVE-2018-18264. This CVE allowed unauthenticated secret access to the Kubernetes Dashboard.
  • CVE-2018-15759. This CVE allowed an attacker to infer valid credentials and gain access to perform broker operations.

Known Issues

The PKS v1.3.0 release has the following known issues and other changes:

  • Upgrades from v1.2.x can fail if a customer has multiple clusters using the same external hostname. Customers who currently use the same external hostname across more than one Kubernetes cluster are impacted when upgrading from v1.2.x to v1.3.0. The external hostname value can be set using the `--external-hostname` or `-e` argument when creating a cluster, for example `pks create-cluster -e [hostname]`. PKS v1.3.0 introduced restrictions to prevent this scenario, and the upgrade fails if clusters with duplicate hostnames exist. Customers who use duplicate external hostnames should NOT upgrade to v1.3.0 at this time and should contact Support for more details. This issue ONLY affects customers who have existing clusters with duplicate external hostnames.
  • Upgrades from v1.2.x can fail if a customer uses a special character in the NO_PROXY settings for PKS on vSphere. Customers who use the HTTP Proxy feature for PKS network configuration may experience validation errors during the PKS v1.3.0 upgrade if the NO_PROXY settings contain the dash character ("-"). Customers who experience this issue should contact Support for a hotfix, which will also be applied to a future v1.3.x release.
  • PKS Flannel Network Gets Out of Sync with Docker Bridge Network (cni0). When VMs have been powered down for multiple days, turning them back on and issuing a `bosh recreate` to recreate the VMs causes the pods to get stuck in a "ContainerCreating" state. For a workaround, see PKS Flannel network gets out of sync with docker bridge network (cni0) in the Pivotal Knowledge Base.
  • Heapster is deprecated in PKS v1.3; Kubernetes has retired Heapster.