If you use Kubernetes as your orchestration framework, you can install and deploy the DSM Consumption Operator to enable native, self-service consumption of VMware Data Services Manager from your Kubernetes environment.

Be familiar with basic concepts and terminology related to the DSM Consumption Operator.

  • Consumption Operator - The operator used to manage resources against the DSM Gateway.
  • Cloud Administrator - An administrator of a Kubernetes cluster that consumes DSM in a self-service manner from within Kubernetes clusters. The Cloud Administrator is responsible for installing and setting up the Consumption Operator. The Cloud Administrator account maps to a DSM user account within DSM. For information about the DSM user, see About Roles and Responsibilities and Configuring VMware Data Services Manager in the vSphere Client.
  • Cloud User - A developer using a Kubernetes cluster to consume database clusters from DSM in a self-service manner.
  • DSM Gateway - The gateway implementation based on Kubernetes. It is used within DSM to provide a Kubernetes API for infrastructure policies, Postgres clusters, and MySQL clusters.
  • Consumption Cluster - The Kubernetes cluster where the consumption operator and custom resources are deployed to use the DSM API for self-service.
  • Infrastructure Policy - Allows vSphere administrators to define and set limits to specific compute, storage, and network resources that DSM database workloads can use.

Supported DSM Version

The following table maps each Consumption Operator version to its supported DSM version.

Consumption Operator    DSM Version
1.0.0                   2.0.0


Follow these steps to install and configure the DSM Consumption Operator:

Step 1: Satisfy the Requirements.

Step 2: Install the DSM Consumption Operator. This task is performed by the Cloud Administrator.

Step 3: Configure a User Namespace. This task is performed by the Cloud Administrator.

Step 4: Create a Database Cluster. This task is performed by Cloud Users.

Step 1: Requirements

Before you begin installing and deploying the DSM Consumption Operator, ensure that the following requirements are met:

Step 2: Install the DSM Consumption Operator

This task is performed by the Cloud Administrator.


As a Cloud Administrator, obtain the following details from the DSM Administrator:

  • DSM username and password for the Cloud Administrator.
  • DSM provider URL.
  • TLS certificate for secure communication between the consumption operator and the DSM provider. Save this certificate in a file named 'root-ca' under the directory 'consumption/'. These names are examples; you can use other names, but make sure to use them consistently in the helm and kubectl commands during the installation. For more details, see Getting Provider VM Certificate.
  • List of infrastructure policies that are supported by the DSM provider and are allowed to be used in the given consumption cluster.
  • List of backup locations that are supported by the DSM provider and are allowed to be used in the given consumption cluster.

Getting Provider VM Certificate

If you are a DSM Administrator, you need to share the provider VM CA certificate with the Cloud Administrator. Use one of the following methods to get this certificate.

  • From Chrome Browser:

    1. Open the DSM UI portal.

    2. Click the icon to the left of the URL in the address bar.

    3. In the dropdown list, click the Connection tab > Certificate to open the Certificate Viewer window.

    4. Click the Details tab, click Export, and save the certificate with the .pem extension locally.

      Getting Provider VM Certificate from Chrome
  • From Firefox Browser:

    1. Open DSM UI portal.

    2. Click the lock icon to the left of the URL in the address bar.

    3. Click Connection Secure > More Information.

    4. On the Security window, click View Certificate.

    5. In the Miscellaneous section, click PEM (cert) to download the CA certificate for Provider VM.

      Getting Provider VM Certificate from Firefox
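Whichever method you use, it can help to confirm that the saved file parses as a PEM X.509 certificate before wiring it into the authentication secret. The following self-contained sketch generates a throwaway self-signed certificate to stand in for the real provider CA file, then inspects it; with the real file, you would run only the second command against 'consumption/root-ca'.

```shell
# Self-contained sketch: the throwaway self-signed certificate below stands in
# for the real provider CA file (for example, consumption/root-ca).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=dsm-demo" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-root-ca.pem 2>/dev/null

# If the file is valid PEM, this prints the certificate subject and validity window.
openssl x509 -in /tmp/demo-root-ca.pem -noout -subject -dates
```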


Follow this procedure to install the DSM Consumption Operator.

  1. Pull the helm chart from the registry and unpack it in a directory.

    helm pull oci://projects.registry.vmware.com/dsm-consumption-operator/dsm-consumption-operator --version 1.0.0 -d consumption/ --untar

    This action will pull the DSM consumption operator helm chart and untar the package in the consumption/ directory.

  2. Create a values_override.yaml file and update the appropriate DSM resources, including infrastructure policies, backup locations, and so on.

    In the consumption directory, copy the values.yaml and create a new values_override.yaml file. Use the following as an example:

    imagePullSecret: registry-creds
    replicas: 1
    image:
      name: projects.registry.vmware.com/dsm-consumption-operator/consumption-operator
      tag: 1.0.0
    dsm:
      authSecretName: dsm-auth-creds
      # allowedInfrastructurePolicies is a mandatory field that needs to be filled with allowed infrastructure policies for the given consumption cluster
      allowedInfrastructurePolicies:
        - infra-policy-01
      # allowedBackupLocations is a mandatory field that holds a list of backup locations that can be used by database clusters created in this consumption cluster
      allowedBackupLocations:
        - default-backup-storage
    # consumptionClusterName is an optional name that you can provide to identify the Kubernetes cluster where the operator is deployed
    consumptionClusterName: "vcd-org-k8s-cluster"
    # psp field allows you to deploy the operator on pod security policies-enabled Kubernetes cluster.
    # Set psp.required to true and provide the ClusterRole corresponding to the restricted policy.
    psp:
      required: false
      role: ""

    Note: If you are using TKG, switch psp.required to true and use psp role, for example: psp:vmware-system-restricted. For more information on TKG and PodSecurityPolicies, see Using Pod Security Policies with Tanzu Kubernetes Clusters.

  3. Create an operator namespace.

    kubectl create namespace dsm-consumption-operator-system
  4. Create a docker registry secret named registry-creds.

    This secret is needed to pull the image from the registry where the consumption operator image exists.

    The secret name should match the value in the field imagePullSecret in the values_override.yaml you created before.

    If you are directly using the default VMware Harbor distribution registry, you don't need authentication. You can simply create a registry secret using the following command, ignoring the username and password values:

    kubectl -n dsm-consumption-operator-system create secret docker-registry registry-creds \
      --docker-server=https://projects.registry.vmware.com \
      --docker-username=ignore \
      --docker-password=ignore
    If you are using your own internal registry, where the consumption operator image exists, you need to provide those credentials here:

    kubectl -n dsm-consumption-operator-system create secret docker-registry registry-creds \
      --docker-server=<DOCKER_REGISTRY> \
      --docker-username=<REGISTRY_USERNAME> \
      --docker-password=<REGISTRY_PASSWORD>
  5. Create an authentication secret that includes all the information needed to connect to the DSM provider.

    The following example shows how to create the secret:

    kubectl -n dsm-consumption-operator-system create secret generic dsm-auth-creds \
     --from-file=root_ca=consumption/root-ca \
     --from-literal=dsm_user=<CLOUD_ADMIN_USERNAME> \
     --from-literal=dsm_password=<CLOUD_ADMIN_PASSWORD>

    Get all these values from the DSM administrator, who is managing the DSM provider instance.

    Note: Make sure that this secret name matches the value in the field 'dsm.authSecretName' in the 'values_override.yaml' file.
  6. Install the operator. Make sure that you already have the operator helm chart pulled and a values_override.yaml file in the consumption/ directory.

    helm install dsm-consumption-operator consumption/dsm-consumption-operator -f consumption/values_override.yaml --namespace dsm-consumption-operator-system
  7. As a Cloud Administrator, check that the operator pod is up and running in the operator namespace:

    kubectl get pods -n dsm-consumption-operator-system
    NAME                                                              READY   STATUS      RESTARTS   AGE
    dsm-consumption-operator-controller-manager-7c69b5cbdc-s5jfl      1/1     Running     0          3h41m

Step 3: Configure a User Namespace

This task is performed by the Cloud Administrator.

Once the consumption operator is deployed successfully, the Cloud Administrator can set up Kubernetes namespaces for the Cloud Users so that they can start deploying databases on DSM.

The Cloud Administrator can also have namespace-specific enforcements to allow or disallow the use of InfrastructurePolicies and BackupLocations in different namespaces.

To achieve this policy enforcement, the consumption operator introduces two custom resources: InfrastructurePolicyBinding and BackupLocationBinding. They are simple objects with no spec field. The Cloud Administrator only needs to make sure that the name of each binding object matches the InfrastructurePolicy or BackupLocation that the namespace is allowed to use.

Create a file dev-team-ns.yaml with the namespace and binding definitions, then run the following command:

kubectl apply -f dev-team-ns.yaml

This action creates a namespace called dev-team, along with an InfrastructurePolicyBinding and a BackupLocationBinding.

To check the status of these bindings, you can run kubectl get on those resources and check the status column.

Use the following as example content for dev-team-ns.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: dev-team
---
apiVersion: infrastructure.dataservices.vmware.com/v1alpha1
kind: InfrastructurePolicyBinding
metadata:
  name: infra-policy-01
  namespace: dev-team
---
apiVersion: databases.dataservices.vmware.com/v1alpha1
kind: BackupLocationBinding
metadata:
  name: default-backup-storage
  namespace: dev-team

As a Cloud Administrator, you can configure multiple namespaces with different infrastructure policies and backup locations in the same consumption cluster.
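For example, a second namespace bound to a different policy follows the same pattern. The names below are illustrative: 'qa-team' and 'infra-policy-02' are placeholders, and any policy you reference must appear in the allowedInfrastructurePolicies list for the consumption cluster.

```yaml
# Illustrative sketch: a second namespace restricted to a different
# infrastructure policy. "qa-team" and "infra-policy-02" are example names.
apiVersion: v1
kind: Namespace
metadata:
  name: qa-team
---
apiVersion: infrastructure.dataservices.vmware.com/v1alpha1
kind: InfrastructurePolicyBinding
metadata:
  name: infra-policy-02
  namespace: qa-team
```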

You can also set up multiple consumption clusters to connect to a single DSM provider.

You can use the same cluster name in different namespaces or different Kubernetes clusters. However, in DSM 2.0, the same namespace cannot contain a PostgresCluster and a MySQLCluster with the same name.

Step 4: Create a Database Cluster

This task is performed by Cloud Users.

  1. View available infrastructure policies by running the following command:

    kubectl get infrastructurepolicybinding -n dev-team

    You can also check the status field of each InfrastructurePolicyBinding to find the values of vmClass, storagePolicy, and so on.

  2. View available backup locations by running the following command:

    kubectl get backuplocationbinding -n dev-team

  3. Create a Postgres or MySQL database cluster.

    For Postgres, save the following content in a file called pg-dev-cluster.yaml and run kubectl apply -f pg-dev-cluster.yaml.

    apiVersion: databases.dataservices.vmware.com/v1alpha1
    kind: PostgresCluster
    metadata:
      name: pg-dev-cluster
      namespace: dev-team
    spec:
      replicas: 1
      version: "14"
      vmClass:
        name: medium
      storageSpace: 20Gi
      infrastructurePolicy:
        name: infra-policy-01
      storagePolicyName: dsm-test-1
      backupConfig:
        backupRetentionDays: 91
        schedules:
          - name: full-weekly
            type: full
            schedule: "0 0 * * 0"
          - name: incremental-daily
            type: incremental
            schedule: "0 0 * * *"
      backupLocation:
        name: default-backup-storage

    For MySQL, save the following content in a file called mysql-dev-cluster.yaml and run kubectl apply -f mysql-dev-cluster.yaml.

    apiVersion: databases.dataservices.vmware.com/v1alpha1
    kind: MySQLCluster
    metadata:
      name: mysql-dev-cluster
      namespace: dev-team
    spec:
      members: 1
      version: "8.0.32"
      vmClass:
        name: medium
      storageSpace: 25Gi
      backupConfig:
        backupRetentionDays: 91
        schedules:
          - name: full-30mins
            type: full
            schedule: "*/30 * * * *"
      infrastructurePolicy:
        name: infra-policy-01
      storagePolicyName: dsm-test-1
      backupLocation:
        name: default-backup-storage

    For more information on the PostgresCluster and MySQLCluster API fields, see VMware Data Services Manager API Reference.
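The schedule strings in the examples above use standard five-field cron syntax: minute, hour, day of month, month, and day of week. A quick shell sketch shows how the weekly schedule "0 0 * * 0" breaks down:

```shell
# Break the weekly backup schedule "0 0 * * 0" into its five cron fields.
set -f                 # disable globbing so '*' stays a literal field value
set -- 0 0 '*' '*' 0
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
# minute=0 hour=0 day-of-month=* month=* day-of-week=0
```

That is, a full backup at midnight every Sunday; "0 0 * * *" runs daily at midnight, and "*/30 * * * *" runs every 30 minutes.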

Step 5: Connect to the Database Cluster

  1. Once the cluster is in the ready state, use its status field to get the connection details. The password is stored in a secret in the same namespace, with the name referenced in status.passwordRef.name below.

       status:
         dbname: pg-dev-cluster
         host: <host-IP>
         passwordRef:
           name: pg-dev-cluster
         port: 5432
         username: pgadmin

    To get the password, run:

    kubectl get secrets/pg-dev-cluster --template={{.data.password}} | base64 -d

    On macOS, use base64 -D instead of base64 -d.
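    The pipe through base64 is needed because Kubernetes stores Secret data base64-encoded. A self-contained sketch of the round trip, using a made-up example password:

```shell
# Encode a sample password the way Kubernetes stores Secret data, then decode
# it the way the kubectl pipeline above does. "s3cr3t-pw" is a made-up value.
encoded=$(printf 's3cr3t-pw' | base64)
printf '%s' "$encoded" | base64 -d    # prints s3cr3t-pw
```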
  2. Test the connection to your newly created database cluster using psql. Enter the password when prompted.

     psql -h <host-IP> -p 5432 -U pgadmin -d pg-dev-cluster
     Password for user pgadmin:
     SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
     Type "help" for help.