This section describes each of the properties that you can define for a GreenplumPLService
configuration in the VMware Tanzu Greenplum manifest file.
apiVersion: "greenplum.pivotal.io/v1beta1"
kind: "GreenplumPLService"
metadata:
  name: <string>
spec:
  replicas: <integer>
  cpu: <cpu-limit>
  memory: <memory-limit>
  workerSelector: {
    <label>: "<value>"
    [ ... ]
  }
You specify Greenplum PL/Container configuration properties to the Greenplum Operator via the YAML-formatted Greenplum manifest file. A sample manifest file is provided in workspace/samples/my-gp-with-pl-instance.yaml. The current version of the manifest supports configuring the cluster name, the number of PL/Container replicas, and the memory, CPU, and worker selector settings. See also Deploying PL/Container with Greenplum for information about deploying a new Greenplum cluster with PL/Container using a manifest file.
Note: As a best practice, keep the PL/Container configuration properties in the same manifest file as Greenplum Database to simplify upgrades or changes to the related service objects.
name: <string>
The name of the PL/Container service instance. You can reference the deployed resources in kubectl commands using this name.
replicas: <int>
The number of PL/Container pod replicas to create for the service.
memory: <memory-limit>
(Optional.) The memory limit for each PL/Container pod, specified as a Kubernetes memory resource quantity (for example, 4.5Gi). If omitted, the pod has no upper bound on the memory resource it can use, or it inherits the default limit if one is specified in its deployed namespace. See Assign Memory Resources to Containers and Pods in the Kubernetes documentation for more information. To use the default, omit the memory: keyword from the YAML file.
cpu: <cpu-limit>
(Optional.) The CPU limit for each PL/Container pod, specified as a Kubernetes CPU resource unit (for example, cpu: "1.2"). If omitted, the pod has no upper bound on the CPU resource it can use, or it inherits the default limit if one is specified in its deployed namespace. See Assign CPU Resources to Containers and Pods in the Kubernetes documentation for more information. To use the default, omit the cpu: keyword from the YAML file.
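For instance, the following manifest excerpt sketches how the cpu and memory limits might appear together in a GreenplumPLService spec. The limit values shown are illustrative assumptions, not recommendations; size them to your cluster's capacity:

```yaml
# Illustrative excerpt: resource limits for PL/Container pods.
# The values below are examples only.
spec:
  replicas: 2
  cpu: "1.2"
  memory: "4.5Gi"
```

Omitting either keyword leaves that resource unbounded (or governed by the namespace default, if one is configured).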
workerSelector: <map of key-value pairs>
(Optional.) One or more label-value pairs that are applied to the pod's nodeSelector attribute, constraining PL/Container pods to run on nodes that have matching labels. If a workerSelector is not desired, remove the workerSelector attribute from the manifest file.
For example, suppose you assigned the label worker=gpdb-pl4k to one or more nodes using the command:
$ kubectl label node <node_name> worker=gpdb-pl4k
With the above label present in your cluster, you would edit the Greenplum Operator manifest file to specify the same key-value pair in the workerSelector attribute. This excerpt shows the relevant portion of the manifest file:
...
workerSelector: {
worker: "gpdb-pl4k"
}
...
See the workspace/samples/my-gp-with-pl-instance.yaml file for an example manifest that configures the PL/Container resource.
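Putting the properties together, a minimal GreenplumPLService manifest might look like the following sketch. The name, replica count, limits, and label value here are illustrative assumptions rather than contents of the sample file:

```yaml
apiVersion: "greenplum.pivotal.io/v1beta1"
kind: "GreenplumPLService"
metadata:
  name: my-plcontainer        # illustrative instance name
spec:
  replicas: 2                 # number of PL/Container pod replicas
  cpu: "1.2"                  # per-pod CPU limit (example value)
  memory: "4.5Gi"             # per-pod memory limit (example value)
  workerSelector: {
    worker: "gpdb-pl4k"       # must match a label assigned via kubectl label node
  }
```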