Here are instructions for configuring file storage for VMware Tanzu Application Service for VMs (TAS for VMs) based on your IaaS and installation method. See the section that applies to your use case.
To minimize system downtime, VMware recommends using highly resilient and redundant external filestores for your TAS for VMs file storage. For more factors to consider when selecting file storage, see Configure File Storage in Configuring TAS for VMs for Upgrades.
After initial installation, do not change file storage configuration without first migrating existing files to the new provider.
To use the TAS for VMs internal filestore:
Select Internal WebDAV.
Click Save.
This section describes how to configure file storage for AWS.
If you followed the procedure in Preparing to Deploy Tanzu Operations Manager on AWS, you created the necessary resources for external S3-compatible file storage.
Important Some blobstores, for example, Oracle Cloud Infrastructure Object Storage, do not support S3 Signature v4 Streaming. To use blobstores without S3 Signature v4 Streaming support with VMware Tanzu Application Service for VMs, deselect the Signature v4 streaming check box.
For more information, see AWS-S3 Signature v4 Streaming.
For production-level Operations Manager deployments on AWS, VMware recommends selecting External S3-compatible filestore. For instructions, see External S3-Compatible Filestore.
You can also configure Fog blobstores to use AWS IAM instance profiles. For instructions, see Fog with AWS IAM Instance Profiles.
For more information about production-level Operations Manager deployments on AWS, see AWS Reference Architecture.
Select External S3-compatible filestore.
For URL endpoint, enter the `https://` URL endpoint for your region. For example, `https://s3.us-west-2.amazonaws.com/`.
For Access key, enter the access key of the `pcf-user` you created when you configured AWS for Operations Manager.
For Secret key, enter the secret key of the `pcf-user` you created when configuring AWS for Operations Manager.
(Optional) If your TAS for VMs deployment is on AWS, you can select the S3 AWS with instance profile check box instead of entering the Access key and Secret key of the `pcf-user` you created when you configured AWS for Operations Manager. If you select the S3 AWS with instance profile check box and also enter an Access key and Secret key, the instance profile overrules the access key and secret key.
From the S3 signature version drop-down menu, select V4 signature. For more information about S3 signatures, see the AWS documentation.
For Region, enter the AWS region in which your S3 buckets are located. For example, `us-west-2`.
To encrypt the contents of your S3 filestore, select the Allow server-side encryption check box. This option is only available for AWS S3.
(Optional) If you selected the Allow server-side encryption check box, you can also configure a KMS key in KMS key ID. TAS for VMs uses this KMS key to encrypt files uploaded to the blobstore. If you do not provide a KMS Key ID, TAS for VMs uses the default AWS key. For more information, see the AWS documentation.
Deselect the Path-style S3 URLs (deprecated) check box. When this check box is deselected, the S3 bucket is accessed using the virtual-hosted model instead of the path-based model. The deprecated path-based model is removed from AWS as of September 30, 2020. For more information about S3 path deprecation, see the AWS News Blog.
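The two addressing models differ only in where the bucket name appears in the URL. The following sketch illustrates the difference; the bucket, region, and object names are placeholders:

```python
# Sketch: S3 virtual-hosted vs. deprecated path-style addressing.
# Bucket, region, and key names here are illustrative placeholders.

def virtual_hosted_url(bucket: str, region: str, key: str) -> str:
    # Virtual-hosted model: the bucket name is part of the hostname.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

def path_style_url(bucket: str, region: str, key: str) -> str:
    # Deprecated path-based model: the bucket name is the first path segment.
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"

print(virtual_hosted_url("pcf-droplets-bucket", "us-west-2", "droplet.tgz"))
# https://pcf-droplets-bucket.s3.us-west-2.amazonaws.com/droplet.tgz
print(path_style_url("pcf-droplets-bucket", "us-west-2", "droplet.tgz"))
# https://s3.us-west-2.amazonaws.com/pcf-droplets-bucket/droplet.tgz
```

With the check box deselected, requests use the first form, which is the only form AWS supports for new buckets.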
Enter names for your S3 buckets:
| Tanzu Operations Manager Field | Value | Description |
| --- | --- | --- |
| Buildpacks bucket name | `pcf-buildpacks-bucket` | This S3 bucket stores app buildpacks. |
| Droplets bucket name | `pcf-droplets-bucket` | This S3 bucket stores app droplets. VMware recommends that you use a unique bucket name for droplets, but you can also use the same name as above. |
| Packages bucket name | `pcf-packages-bucket` | This S3 bucket stores app packages. VMware recommends that you use a unique bucket name for packages, but you can also use the same name as above. |
| Resources bucket name | `pcf-resources-bucket` | This S3 bucket stores app resources. VMware recommends that you use a unique bucket name for app resources, but you can also use the same name as above. |
Configure these check boxes depending on whether your S3 buckets are versioned:
For Backup region, enter the name of the AWS region in which your backup S3 buckets are located. For example, `us-west-2`. These are the buckets used to back up and restore the contents of your S3 filestore.
(Optional) Enter names for your backup S3 buckets:
| Tanzu Operations Manager Field | Value | Description |
| --- | --- | --- |
| Backup buildpacks bucket name | `buildpacks-backup-bucket` | This S3 bucket is used to back up and restore your buildpacks bucket. This bucket name must be different from the buckets you named above. |
| Backup droplets bucket name | `droplets-backup-bucket` | This S3 bucket is used to back up and restore your droplets bucket. VMware recommends that you use a unique bucket name for droplet backups, but you can also use the same name as above. |
| Backup packages bucket name | `packages-backup-bucket` | This S3 bucket is used to back up and restore your packages bucket. VMware recommends that you use a unique bucket name for package backups, but you can also use the same name as above. |
Click Save.
Note For more information about AWS S3 signatures, see the AWS documentation.
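Before or after saving, you can sanity-check the bucket names, region, and credentials with the AWS CLI. This is an optional verification step, not part of the Tanzu Operations Manager procedure; the bucket names and region below are the example values used above:

```shell
# head-bucket exits non-zero if the bucket does not exist or the
# configured credentials lack access to it (illustrative values).
aws s3api head-bucket --bucket pcf-buildpacks-bucket --region us-west-2
aws s3api head-bucket --bucket pcf-droplets-bucket --region us-west-2

# Confirm whether versioning is enabled on a bucket before configuring
# the versioned-bucket check boxes:
aws s3api get-bucket-versioning --bucket pcf-droplets-bucket --region us-west-2
```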
To configure Fog blobstores to use AWS IAM instance profiles:
Configure an additional `cloud-controller` IAM role with the following policy to give access to the S3 buckets you plan to use:
```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [ "s3:*" ],
    "Resource": [
      "arn:aws:s3:::YOUR-AWS-BUILDPACK-BUCKET",
      "arn:aws:s3:::YOUR-AWS-BUILDPACK-BUCKET/*",
      "arn:aws:s3:::YOUR-AWS-DROPLET-BUCKET",
      "arn:aws:s3:::YOUR-AWS-DROPLET-BUCKET/*",
      "arn:aws:s3:::YOUR-AWS-PACKAGE-BUCKET",
      "arn:aws:s3:::YOUR-AWS-PACKAGE-BUCKET/*",
      "arn:aws:s3:::YOUR-AWS-RESOURCE-BUCKET",
      "arn:aws:s3:::YOUR-AWS-RESOURCE-BUCKET/*"
    ]
  }]
}
```
Replace `YOUR-AWS-BUILDPACK-BUCKET`, `YOUR-AWS-DROPLET-BUCKET`, `YOUR-AWS-PACKAGE-BUCKET`, and `YOUR-AWS-RESOURCE-BUCKET` with the names of your AWS buckets. Do not use periods (`.`) in your AWS bucket names.
If you use the AWS console, an IAM role is automatically assigned to an IAM instance profile with the same name, `cloud-controller`. If you do not use the AWS console, you must create an IAM instance profile with a single assigned IAM role. For more information, see Step 4: Create an IAM Instance Profile for Your Amazon EC2 Instances in the AWS documentation.
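Outside the AWS console, the role and instance profile can be created with the AWS CLI. The following is a sketch; the policy file names are placeholders for a standard EC2 trust policy and the S3 policy shown above:

```shell
# Create the role with an EC2 trust policy (file name is illustrative).
aws iam create-role --role-name cloud-controller \
    --assume-role-policy-document file://ec2-trust-policy.json

# Attach the S3 access policy shown above as an inline policy.
aws iam put-role-policy --role-name cloud-controller \
    --policy-name cloud-controller-s3 \
    --policy-document file://s3-policy.json

# Create the instance profile and assign the single role to it.
aws iam create-instance-profile --instance-profile-name cloud-controller
aws iam add-role-to-instance-profile \
    --instance-profile-name cloud-controller \
    --role-name cloud-controller
```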
In your BOSH cloud config, create a VM extension that adds the IAM instance profile you created to the VMs that use the extension:

```yaml
vm_extensions:
- cloud_properties:
    iam_instance_profile: cloud-controller
  name: cloud-controller-iam
```
You can also create a VM extension using the Tanzu Operations Manager API. For more information, see Create or Update a VM Extension in Managing Custom VM Extensions.
In your TAS for VMs deployment manifest, use the `cloud-controller-iam` VM extension you created for the instance groups containing `cloud_controller`, `cloud_controller_worker`, and `cloud_controller_clock`, as in the following example:
```yaml
instance_groups:
...
- name: api
  ...
  vm_extensions:
  - cloud-controller-iam
...
- name: cc-worker
  ...
  vm_extensions:
  - cloud-controller-iam
...
- name: scheduler
  ...
  vm_extensions:
  - cloud-controller-iam
```
Insert the following configuration into your deployment manifest under `properties.cc`:
```yaml
cc:
  buildpacks:
    blobstore_type: fog
    buildpack_directory_key: YOUR-AWS-BUILDPACK-BUCKET
    fog_connection: &fog_connection
      provider: AWS
      region: us-east-1
      use_iam_profile: true
  droplets:
    blobstore_type: fog
    droplet_directory_key: YOUR-AWS-DROPLET-BUCKET
    fog_connection: *fog_connection
  packages:
    blobstore_type: fog
    app_package_directory_key: YOUR-AWS-PACKAGE-BUCKET
    fog_connection: *fog_connection
  resource_pool:
    blobstore_type: fog
    resource_directory_key: YOUR-AWS-RESOURCE-BUCKET
    fog_connection: *fog_connection
```
Replace `YOUR-AWS-BUILDPACK-BUCKET`, `YOUR-AWS-DROPLET-BUCKET`, `YOUR-AWS-PACKAGE-BUCKET`, and `YOUR-AWS-RESOURCE-BUCKET` with the names of your AWS buckets. Do not use periods (`.`) in your AWS bucket names.
(Optional) Provide other configuration with the `fog_connection` hash, which is passed through to the Fog gem.
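For example, pointing Fog at a non-AWS S3-compatible endpoint typically means adding an endpoint override and path-style flag to the same hash. The extra keys below are Fog gem connection options, not TAS for VMs properties; verify them against the Fog documentation for the gem version in your deployment:

```yaml
fog_connection: &fog_connection
  provider: AWS
  region: us-east-1
  use_iam_profile: true
  # Illustrative pass-through options for an S3-compatible store
  # (check your Fog version for the supported keys):
  endpoint: https://s3.example.internal   # hypothetical custom endpoint
  path_style: true                        # use path-style requests
```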
This section describes how to configure file storage for GCP. Follow the procedure that corresponds to your installation method:
For production-level Operations Manager deployments on GCP, VMware recommends selecting External Google Cloud Storage. For more information about production-level Operations Manager deployments on GCP, see GCP Reference Architecture.
This section describes how to configure file storage for GCP if you installed Operations Manager manually.
TAS for VMs can use Google Cloud Storage (GCS) as its external filestore by using either a GCP interoperable storage access key or your GCS service account. To configure file storage for GCP, follow one of these procedures:
Use an access key and secret key. For more information, see External Google Cloud Storage with access key and secret key below.
Use a service account. For more information, see External Google Cloud Storage with service account below.
To configure file storage for GCP using an access key and secret key:
Select External GCS with access key and secret key.
Enter values for Access key and Secret key. You can obtain these values from the Interoperability tab of the Cloud Storage Settings page in the GCP Console.
Enter the names of the storage buckets you created in Step 7: Create storage buckets in Preparing to Deploy Tanzu Operations Manager on GCP Manually:
- `PREFIX-PCF-buildpacks`
- `PREFIX-PCF-droplets`
- `PREFIX-PCF-packages`
- `PREFIX-PCF-resources`

`PREFIX` is a prefix of your choice, required to make the bucket name unique.

Click Save.
To configure file storage for GCP using a service account:
You can either use the same service account that you created for Tanzu Operations Manager, or create a separate service account for TAS for VMs file storage. To create a separate service account for TAS for VMs file storage, follow the procedure in Step 1: Set Up IAM Service Accounts in Preparing to Deploy Tanzu Operations Manager on GCP Manually, but only select the Storage > Storage Admin role.
Select External GCS with service account.
For GCP project ID, enter the Project ID on your GCP Console that you want to use for your TAS for VMs file storage.
For GCP service account email, enter the email address associated with your GCP service account.
For GCP service account key, enter in JSON format the account key that you use to access the specified GCP project.
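The service account key file downloaded from the GCP Console is a JSON document; paste its full contents. It has roughly this shape (all values redacted, and the service account name is illustrative):

```json
{
  "type": "service_account",
  "project_id": "YOUR-GCP-PROJECT-ID",
  "private_key_id": "REDACTED",
  "private_key": "-----BEGIN PRIVATE KEY-----\nREDACTED\n-----END PRIVATE KEY-----\n",
  "client_email": "tas-blobstore@YOUR-GCP-PROJECT-ID.iam.gserviceaccount.com",
  "client_id": "REDACTED",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```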
Enter the names of the storage buckets you created in Step 7: Create storage buckets in Preparing to Deploy Tanzu Operations Manager on GCP Manually:
- `PREFIX-PCF-buildpacks`
- `PREFIX-PCF-droplets`
- `PREFIX-PCF-packages`
- `PREFIX-PCF-resources`
- `PREFIX-PCF-backup`

`PREFIX` is a prefix of your choice, required to make the bucket name unique.

Click Save.
This section describes how to configure file storage for GCP if you installed Operations Manager with Terraform.
TAS for VMs can use Google Cloud Storage (GCS) as its external filestore by using either a GCP interoperable storage access key or your GCS service account. To configure file storage for GCP, follow one of these procedures:
Use an access key and secret key. For more information, see External Google Cloud Storage with access key and secret key below.
Use a service account. For more information, see External Google Cloud Storage with service account below.
To configure file storage for GCP using an access key and secret key:
Select External GCS with access key and secret key.
Enter values for Access key and Secret key. You can obtain these values from the Interoperability tab of the Cloud Storage Settings page in the GCP Console.
Enter the names of the storage buckets you created in GCP Service Account Key for Blobstore in Deploying Tanzu Operations Manager on GCP Using Terraform:
- `buildpacks_bucket` from your Terraform output.
- `droplets_bucket` from your Terraform output.
- `resources_bucket` from your Terraform output.
- `packages_bucket` from your Terraform output.

Click Save.
To configure file storage for GCP using a service account:
Note You can use the same service account that you created for Tanzu Operations Manager, or you can create a separate service account for TAS for VMs file storage. To create a separate service account for TAS for VMs file storage, follow the procedure in Step 1: Set up IAM service accounts in Preparing to Deploy Tanzu Operations Manager on GCP manually, but only select the Storage > Storage Admin role.
Select External GCS with service account.
For GCP project ID, enter the Project ID on your GCP Console that you want to use for your TAS for VMs file storage.
For GCP service account email, enter the email address associated with your GCP service account.
For GCP service account key, enter in JSON format the account key that you use to access the specified GCP project.
Enter the names of the storage buckets you created in GCP Service Account Key for Blobstore in Deploying Tanzu Operations Manager on GCP using Terraform:
- `buildpacks_bucket` from your Terraform output.
- `droplets_bucket` from your Terraform output.
- `resources_bucket` from your Terraform output.
- `packages_bucket` from your Terraform output.
- `backup_bucket` from your Terraform output.

Click Save.
This section describes how to configure file storage for Azure.
For production-level Operations Manager deployments on Azure, VMware recommends selecting External Azure Storage. For more information about production-level Operations Manager deployments on Azure, see Azure Reference Architecture.
For more factors to consider when selecting file storage, see Configure File Storage in Configuring TAS for VMs for Upgrades.
To use external Azure file storage for your TAS for VMs filestore:

Select External Azure storage.
To create a new storage account for the TAS for VMs filestore:
To create new storage containers in the storage account you created in the previous step:
Note: BBR requires that you configure soft delete in your Azure storage account before you configure backup and restore for your Azure blobstores in Tanzu Operations Manager. You must set a reasonable retention policy to minimize storage costs. For more information about configuring soft delete in your Azure storage account, see the Azure documentation.
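One way to turn on blob soft delete for an existing storage account is with the Azure CLI. The account name and retention period below are illustrative placeholders:

```shell
# Enable blob soft delete with a 7-day retention window (illustrative values).
az storage blob service-properties delete-policy update \
    --account-name YOUR-STORAGE-ACCOUNT \
    --enable true \
    --days-retained 7

# Confirm the resulting policy:
az storage blob service-properties delete-policy show \
    --account-name YOUR-STORAGE-ACCOUNT
```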
In TAS for VMs, enter the name of the storage account you created for Account name.
In the Access key field, enter one of the access keys provided for the storage account. You can find these keys under Access keys for your storage account in the Azure Portal.
For Environment, enter the name of the Azure Cloud environment that contains your storage. This value defaults to AzureCloud, but other options include AzureChinaCloud, AzureUSGovernment, and AzureGermanCloud.
For Buildpacks container name, enter the container name for storing your app buildpacks.
For Droplets container name, enter the container name for your app droplet storage. VMware recommends that you use a unique container name, but you can use the same container name as the previous step.
For Packages container name, enter the container name for packages. VMware recommends that you use a unique container name, but you can use the same container name as the previous step.
For Resources container name, enter the container name for resources. VMware recommends that you use a unique container name, but you can use the same container name as the previous step.
(Optional) To allow backup and restore for your Azure blobstores in TAS for VMs, select the Allow backup and restore check box.
Note: You must configure all listed storage containers to use soft deletes.
(Optional) To configure TAS for VMs to restore your containers to a different Azure storage account than the account where you take backups:
Click Save.
Note To configure backup and restore for a TAS for VMs installation that uses an S3-compatible blobstore, see Enabling External Blobstore Backups.
For production-level Operations Manager deployments on OpenStack, VMware recommends selecting External S3-compatible filestore. For more information about production-level Operations Manager deployments on OpenStack, see OpenStack Reference Architecture.
For more factors to consider when selecting file storage, see Configure File Storage in Configuring TAS for VMs for Upgrades.
To use an external S3-compatible filestore for TAS for VMs file storage:

Select External S3-compatible filestore.
For URL endpoint, enter the `https://` URL endpoint for your region. For example, `https://s3.us-west-2.amazonaws.com/`.
For Access key, enter the access key of the `pcf-user` you created when you configured AWS for Operations Manager.
For Secret key, enter the secret key of the `pcf-user` you created when configuring AWS for Operations Manager.
From the S3 signature version drop-down menu, select V4 signature. For more information about S3 signatures, see the AWS documentation.
For Region, enter the AWS region in which your S3 buckets are located. For example, `us-west-2`.
To encrypt the contents of your S3 filestore, select the Allow server-side encryption check box. This option is only available for AWS S3.
(Optional) If you selected the Allow server-side encryption check box, you can also configure a KMS key in KMS key ID. TAS for VMs uses this KMS key to encrypt files uploaded to the blobstore. If you do not provide a KMS Key ID, TAS for VMs uses the default AWS key. For more information, see the AWS documentation.
Deselect the Path-style S3 URLs (deprecated) check box. When this check box is deselected, the S3 bucket is accessed using the virtual-hosted model instead of the path-based model. The deprecated path-based model is removed from AWS as of September 30, 2020. For more information about S3 path deprecation, see the AWS News Blog.
Enter names for your S3 buckets:
| Tanzu Operations Manager Field | Value | Description |
| --- | --- | --- |
| Buildpacks bucket name | `pcf-buildpacks-bucket` | This S3 bucket stores app buildpacks. |
| Droplets bucket name | `pcf-droplets-bucket` | This S3 bucket stores app droplets. VMware recommends that you use a unique bucket name for droplets, but you can also use the same name as above. |
| Packages bucket name | `pcf-packages-bucket` | This S3 bucket stores app packages. VMware recommends that you use a unique bucket name for packages, but you can also use the same name as above. |
| Resources bucket name | `pcf-resources-bucket` | This S3 bucket stores app resources. VMware recommends that you use a unique bucket name for app resources, but you can also use the same name as above. |
Configure these check boxes depending on whether your S3 buckets are versioned:
For Backup region, enter the name of the AWS region in which your backup S3 buckets are located. For example, `us-west-2`. These are the buckets used to back up and restore the contents of your S3 filestore.
(Optional) Enter names for your backup S3 buckets:
| Tanzu Operations Manager Field | Value | Description |
| --- | --- | --- |
| Backup buildpacks bucket name | `buildpacks-backup-bucket` | This S3 bucket is used to back up and restore your buildpacks bucket. This bucket name must be different from the buckets you named above. |
| Backup droplets bucket name | `droplets-backup-bucket` | This S3 bucket is used to back up and restore your droplets bucket. VMware recommends that you use a unique bucket name for droplet backups, but you can also use the same name as above. |
| Backup packages bucket name | `packages-backup-bucket` | This S3 bucket is used to back up and restore your packages bucket. VMware recommends that you use a unique bucket name for package backups, but you can also use the same name as above. |
Click Save.
Note For more information about AWS S3 signatures, see the AWS documentation.
For production-level Operations Manager deployments on vSphere, VMware recommends selecting External S3-compatible filestore. For more information about production-level Operations Manager deployments on vSphere, see vSphere Reference Architecture.
For more factors to consider when selecting file storage, see Configure File Storage in Configuring TAS for VMs for Upgrades.
To use an external S3-compatible filestore for TAS for VMs file storage:

Select External S3-compatible filestore.
For URL endpoint, enter the `https://` URL endpoint for your region. For example, `https://s3.us-west-2.amazonaws.com/`.
For Access key, enter the access key of the `pcf-user` you created when you configured AWS for Operations Manager.
For Secret key, enter the secret key of the `pcf-user` you created when configuring AWS for Operations Manager.
From the S3 signature version drop-down menu, select V4 signature. For more information about S3 signatures, see the AWS documentation.
For Region, enter the AWS region in which your S3 buckets are located. For example, `us-west-2`.
To encrypt the contents of your S3 filestore, select the Allow server-side encryption check box. This option is only available for AWS S3.
(Optional) If you selected the Allow server-side encryption check box, you can also configure a KMS key in KMS key ID. TAS for VMs uses this KMS key to encrypt files uploaded to the blobstore. If you do not provide a KMS Key ID, TAS for VMs uses the default AWS key. For more information, see the AWS documentation.
Deselect the Path-style S3 URLs (deprecated) check box. When this check box is deselected, the S3 bucket is accessed using the virtual-hosted model instead of the path-based model. The deprecated path-based model is removed from AWS as of September 30, 2020. For more information about S3 path deprecation, see the AWS News Blog.
Enter names for your S3 buckets:
| Tanzu Operations Manager Field | Value | Description |
| --- | --- | --- |
| Buildpacks bucket name | `pcf-buildpacks-bucket` | This S3 bucket stores app buildpacks. |
| Droplets bucket name | `pcf-droplets-bucket` | This S3 bucket stores app droplets. VMware recommends that you use a unique bucket name for droplets, but you can also use the same name as above. |
| Packages bucket name | `pcf-packages-bucket` | This S3 bucket stores app packages. VMware recommends that you use a unique bucket name for packages, but you can also use the same name as above. |
| Resources bucket name | `pcf-resources-bucket` | This S3 bucket stores app resources. VMware recommends that you use a unique bucket name for app resources, but you can also use the same name as above. |
Configure these check boxes depending on whether your S3 buckets are versioned:
For Backup region, enter the name of the AWS region in which your backup S3 buckets are located. For example, `us-west-2`. These are the buckets used to back up and restore the contents of your S3 filestore.
(Optional) Enter names for your backup S3 buckets:
| Tanzu Operations Manager Field | Value | Description |
| --- | --- | --- |
| Backup buildpacks bucket name | `buildpacks-backup-bucket` | This S3 bucket is used to back up and restore your buildpacks bucket. This bucket name must be different from the buckets you named above. |
| Backup droplets bucket name | `droplets-backup-bucket` | This S3 bucket is used to back up and restore your droplets bucket. VMware recommends that you use a unique bucket name for droplet backups, but you can also use the same name as above. |
| Backup packages bucket name | `packages-backup-bucket` | This S3 bucket is used to back up and restore your packages bucket. VMware recommends that you use a unique bucket name for package backups, but you can also use the same name as above. |
Click Save.
Note For more information about AWS S3 signatures, see the AWS documentation.