This topic describes how to configure VMware Tanzu Application Service for VMs (TAS for VMs).
Before you begin this procedure, you must:
Ensure that you have successfully completed the steps to prepare your environment for Operations Manager and install and configure the BOSH Director. For more information, see the VMware Tanzu Operations Manager configuration topic for your IaaS.
If you plan to install the Operations Manager IPsec add-on, you must do so before installing any other tiles. VMware recommends installing IPsec immediately after Tanzu Operations Manager and before installing the TAS for VMs Runtime tile. For more information, see Installing IPsec for VMware Tanzu.
Before you can configure TAS for VMs, you must add the TAS for VMs tile to your Tanzu Operations Manager Installation Dashboard.
To add the TAS for VMs tile to your Tanzu Operations Manager Installation Dashboard:
If you have not already downloaded TAS for VMs, log in to Broadcom Support and click VMware Tanzu Application Service for VMs.
From the Releases drop-down menu, select the release to install and choose one of the following:
Click VMware Tanzu Application Service for VMs to download the TAS for VMs .pivotal file.
Click Small Footprint TAS for VMs to download the Small Footprint TAS for VMs .pivotal file. For more information, see Getting Started with Small Footprint TAS for VMs.
Go to the Tanzu Operations Manager Installation Dashboard.
Click Import a Product to add your tile to Operations Manager. For more information, see Adding and Deleting Products.
Click the VMware Tanzu Application Service for VMs tile in the Installation Dashboard.
Select Assign AZs and Networks. These are the Availability Zones (AZs) that you create when configuring BOSH Director.
Note: For Azure environments, this configuration pane is Assign Networks and does not include AZ configuration.
Under Place singleton jobs, select an AZ. Tanzu Operations Manager runs any job with a single instance in this AZ.
Under Balance other jobs, select all AZs. Tanzu Operations Manager balances instances of jobs with more than one instance across the AZs that you specify.
For production deployments, VMware recommends at least three AZs for a highly available installation of TAS for VMs.
From the Network drop-down menu, select the network where you want to run TAS for VMs.
Click Save.
In the Domains pane, you configure a wildcard DNS record for both the apps domain and system domain.
To configure the Domains pane:
Select Domains.
Enter the name of your system domain in the System domain field. The system domain defines your target when you push apps to TAS for VMs, such as your load balancer. TAS for VMs assigns system components such as UAA and Apps Manager to subdomains under this domain.
Enter the name of your apps domain in the Apps domain field. The apps domain is the default domain that apps use for their hostnames. TAS for VMs hosts each app at subdomains under this domain. You can use the Cloud Foundry Command Line Interface (cf CLI) to add or delete subdomains assigned to individual apps.
Click Save.
For additional guidance based on your installation method, see the table below:
| Installation Method | Guidance |
|---|---|
| Manual | Enter the domains you created when preparing your environment for Operations Manager. |
| Terraform | Enter the values for sys_domain and apps_domain from the Terraform output. |
VMware recommends that you use the same domain name but different subdomain names for your system and app domains. Doing so allows you to use a single wildcard certificate for the domain while preventing apps from creating routes that overlap with system routes.
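Following that recommendation, a single parent domain with two subdomains might look like the sketch below. The domain names are illustrative, not defaults:

```shell
# Hypothetical shared-parent-domain layout:
#   System domain: sys.example.com   (wildcard DNS record: *.sys.example.com)
#   Apps domain:   apps.example.com  (wildcard DNS record: *.apps.example.com)
# A single wildcard certificate covering *.sys.example.com and
# *.apps.example.com then secures both domains.
# The cf CLI targets the API endpoint under the system domain:
cf api https://api.sys.example.com
```

Because the system and apps domains differ at the subdomain level, an app named `uaa` cannot create a route that shadows the platform's `uaa.sys.example.com` system route.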
In the Networking pane, you configure security and routing services for your IaaS.
To configure the Networking pane:
For Gorouter IPs, optionally enter one or more static IP addresses for the Gorouter.
Note: If you choose to assign specific IP addresses in the Gorouter IPs field, ensure that these IP addresses are in the subnet that you configured for TAS for VMs in Tanzu Operations Manager.
For SSH Proxy IPs and TCP router IPs, optionally enter static IP addresses. The SSH proxy listens on port 2222.
Note: If you configure mutual TLS app identity verification, app containers accept incoming communication only from the Gorouter. This deactivates TCP routing.
Under Certificates and private keys for the Gorouter, you must provide at least one certificate and private key pair for the Gorouter. The Gorouter is configured to receive TLS communication by default. You can configure multiple certificates for the Gorouter.
Note: When providing custom certificates, enter them in this order: wildcard, intermediate, CA. For more information, see the DigiCert documentation.
Note If you configured your Tanzu Operations Manager front end without a certificate, you can use this new certificate to finish configuring Tanzu Operations Manager. To configure your Tanzu Operations Manager front end certificate, see the Tanzu Operations Manager documentation.
Note Ensure that you add any certificates that you generate in this pane to your infrastructure load balancer.
Deselecting the Use HTTP/2 protocol check box introduces potential breaking changes for app routes.
The Gorouter forwards or strips x-forwarded-client-cert (XFCC) HTTP headers based on where TLS is terminated for the first time in your deployment. The table below indicates which option to choose based on your deployment configuration:
| TLS terminated for the first time at | Gorouter behavior |
|---|---|
| Infrastructure load balancer | The Gorouter forwards the XFCC header when it is included in the request. |
| Gorouter | The Gorouter strips the XFCC header if it is included in the request and forwards the client certificate received in the TLS handshake in a new XFCC header. |
Caution: If you select the The Gorouter does not request client certificates option in the Gorouter behavior for client certificate validation field, the XFCC header cannot be delivered to apps.
Requests to the platform fail upon upgrade if your load balancer is configured with client certificates and the Gorouter is not configured with the appropriate CA. To mitigate this issue, select The Gorouter does not request client certificates.
The default cipher suites are ECDHE-RSA-AES128-GCM-SHA256:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384.
Important: Specify cipher suites that are supported by the versions configured under Select the range of TLS versions supported by the Gorouter. For example, TLS v1.3 does not support configuring cipher suites. If you select TLSv1.3 only, you cannot configure cipher suites for the Gorouter.
AWS Classic Load Balancers do not support the TAS for VMs default cipher suites. For more information about configuring your AWS load balancers and Gorouter, see TLS Cipher suite support by AWS load balancers in Securing traffic into TAS for VMs.
The default value is 80.
Enter 0 or no value for no limit. The default value is 500.
The Gorouter emits httpStartStop event metrics for each app request. If your deployment uses App Metrics, you can also find this information in your App Metrics deployment. For more information, see the App Metrics documentation.
The default value is 900. To accommodate larger uploads over connections with high latency, increase the value.
Caution: If you select this option, GET request query parameters in Gorouter access logs may contain sensitive information.
| IaaS | Guidance |
|---|---|
| AWS | Because AWS ELB has a default timeout of 60 seconds, VMware recommends configuring a value greater than 60. |
| Azure | By default, the Azure load balancer times out at 240 seconds without sending a TCP RST to clients. To force the load balancer to send the TCP RST, VMware recommends configuring a value lower than 240. |
| GCP | GCP has a default timeout of 600 seconds. For GCP HTTP load balancers, VMware recommends configuring a value greater than 600. For GCP TCP load balancers, VMware recommends configuring a value less than 600 to force the load balancer to send a TCP RST. |
| Other | Configure a value greater than that of the back end idle timeout for your load balancer. |
Note: Do not set a front end idle timeout lower than six seconds.
The default value is 900.
By default, Gorouter access logs include these headers: Referer, User-Agent, X-Forwarded-For, X-Forwarded-Proto, and X-Vcap-Request-ID.
Enter 443. For AWS environments that are not using an Application Load Balancer, enter 4443.
Note: The NSX-T integration only works for new installations of TAS for VMs. If your TAS for VMs tile is already deployed and configured to use Silk as its CNI, you cannot reconfigure your deployment to use NSX-T.
The default MTU value is 1454. Some configurations may require a smaller MTU value. For example, networks that use GRE tunnels may require a smaller MTU. For Azure environments, enter 1400 or lower to prevent Azure from fragmenting the packets. For more information, see the Azure documentation.
The default overlay network IP range is 10.255.0.0/16. The overlay network IP range you configure must not conflict with any other IP addresses in your network. Editing this property might cause container-to-container (C2C) networking downtime and security implications the next time you redeploy TAS for VMs.
For Overlay subnet prefix length per Diego Cell, enter the prefix length of the subnet assigned to each Diego Cell. For example, for a /24 subnet, enter 24. The default value is 24. The minimum value you can configure is 2. The maximum value you can configure is 30. Configuring a smaller subnet allows more Diego Cells per TAS for VMs deployment, but fewer apps per Diego Cell. Conversely, configuring a larger subnet allows more apps per Diego Cell, but fewer Diego Cells per TAS for VMs deployment. The values that you configure in the Overlay subnet and Overlay subnet prefix length per Diego Cell fields determine the exact number of Diego Cells allowed per TAS for VMs deployment and apps allowed per Diego Cell. Editing this property might cause C2C networking downtime and security implications the next time you redeploy TAS for VMs.
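The capacity trade-off described above can be sketched with shell arithmetic. This assumes the default 10.255.0.0/16 overlay and the default per-Cell prefix length of 24; the "minus 2" for reserved addresses is an approximation, not an exact platform figure:

```shell
# Capacity math for the overlay network (illustrative approximation).
overlay_prefix=16   # default overlay network: 10.255.0.0/16
cell_prefix=24      # default Overlay subnet prefix length per Diego Cell

# Diego Cells per deployment: number of /24 subnets that fit in the /16.
cells=$(( 2 ** (cell_prefix - overlay_prefix) ))

# Approximate container addresses per Cell (total minus reserved addresses).
apps_per_cell=$(( 2 ** (32 - cell_prefix) - 2 ))

echo "cells=${cells} apps_per_cell=${apps_per_cell}"
```

With the defaults, this yields 256 Diego Cells with roughly 254 container addresses each; raising the prefix length to 26 would quadruple the Cell count while quartering the per-Cell address space.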
The default value is 4789.
The default value is 1. For more information, see Policies in container-to-container networking and Restricting app access to internal TAS for VMs Components.
The default value is 100. Deactivating the Enforce Silk network policy check box allows all app containers to access any other app container with no restrictions.
The default value is 120. You may need to increase this value if your deployment experiences timeout issues related to container-to-container networking.
TCP routing is deactivated by default. You can enable this feature if your DNS sends TCP traffic through a load balancer rather than directly to a TCP router. To enable TCP routing:
Note: If you have mutual TLS app identity verification enabled, app containers accept incoming communication only from the Gorouter. This deactivates TCP routing.
Note: If you configured AWS for Operations Manager manually, enter 1024-1123, which corresponds to the rules you created for pcf-tcp-elb.
| IaaS | Instructions |
|---|---|
| GCP | Specify the name of a GCP TCP load balancer in the LOAD BALANCER field of the TCP Router job in the Resource Config pane. You configure this later on in TAS for VMs. For more information, see Configure Resources. |
| AWS | Specify the name of a TCP ELB in the LOAD BALANCER field of the TCP Router job in the Resource Config pane. You configure this later on in TAS for VMs. For more information, see Configure Resources. |
| Azure | Specify the name of an Azure load balancer in the LOAD BALANCER field of the TCP Router job in the Resource Config pane. You configure this later on in TAS for VMs. For more information, see Configure Resources. |
| OpenStack and vSphere | |
The default cookie name is JSESSIONID. Some apps require a different cookie name. For example, Spring WebFlux requires SESSION for the cookie name. Gorouter uses these cookies to support session affinity, or sticky sessions. For more information, see Session Affinity in HTTP Routing.
In the App Containers pane, you configure microservice frameworks, private Docker registries, and other services that support your apps at the container level.
To configure the App Containers pane:
Select App Containers.
The Allow custom buildpacks check box governs the ability to pass a custom buildpack URL to the -b
option of the cf push
command. This check box is selected by default, letting developers use custom buildpacks when deploying apps. To disallow the use of custom buildpacks, deselect the Allow custom buildpacks check box. For more information about custom buildpacks, see Buildpacks.
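When the check box is selected, a developer can point cf push at a custom buildpack by URL. The app name below is hypothetical; the buildpack URL is one example of a publicly available Cloud Foundry buildpack repository:

```shell
# Push an app with a custom buildpack supplied as a Git URL
# (app name is illustrative; any reachable buildpack repo works).
cf push my-app -b https://github.com/cloudfoundry/staticfile-buildpack.git
```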
The Allow SSH access to app containers check box controls SSH access to app instances. To allow SSH access across your deployment, select the Allow SSH access to app containers check box. To prevent all SSH access, deselect the Allow SSH access to app containers check box. For more information about SSH access permissions at the space and app level, see App SSH overview.
Important To allow SSH traffic, ensure that port 2222
is open on your load balancer.
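A quick way to confirm that the load balancer passes this port is a TCP reachability check. The hostname below is hypothetical; substitute the SSH endpoint for your system domain:

```shell
# Verify that port 2222 is reachable through the load balancer
# (replace the hostname with your deployment's SSH proxy endpoint).
nc -vz ssh.sys.example.com 2222
```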
You can give SSH access to an app only if an admin assigns you a Space Developer role in the space where the app runs. For more information, see Manage App Space Roles in Managing User Roles with Apps Manager.
To allow SSH access for new apps by default in spaces that allow SSH, select the Allow SSH access when an app is created check box. If you deselect this check box, developers can still allow SSH access after pushing their apps by running cf enable-ssh APP-NAME
, where APP-NAME
is the name of the app.
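For example, a Space Developer can check and then allow SSH for a pushed app. The app name is illustrative:

```shell
# Check whether SSH is allowed for the app, then allow it
# (app name is hypothetical).
cf ssh-enabled my-app
cf enable-ssh my-app
```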
(Optional) To give apps that generally stay within their CPU entitlements priority access to extra CPU during CPU bursts, select the Allow CPU burst optimization check box. This check box is selected by default. When this check box is selected, apps that generally stay within their CPU entitlements have higher-priority access to extra CPU than apps that generally exceed their CPU entitlements. When this check box is deselected, all apps have equal access to extra CPU during CPU bursts. Whether you select or deselect this check box, all apps are always guaranteed the amount of CPU that is within their CPU entitlements.
Under Gorouter app identity verification, choose how the Gorouter verifies app identity to allow encryption and prevent misrouting:
This feature does not work if The Gorouter does not request client certificates is selected under Gorouter behavior for client certificate validation in the Networking pane.
For more information, see Preventing misrouting in HTTP Routing.
To configure TAS for VMs to run app instances in Docker containers, enter a comma-separated list of IP address ranges for your private Docker registries in the Docker registry allow list field. For more information, see Using Docker Registries.
Under Diego Cell disk cleanup scheduling, select one of the following options:
The Allow containerd delegation check box governs whether Garden delegates container create and destroy operations to the containerd tool. This check box is selected by default. To disallow Garden delegating container create and destroy operations to containerd, deselect the Allow containerd delegation check box. For more information about the containerd tool, see the containerd website.
For Maximum in-flight container starts, enter the maximum number of started instances you want to allow across all Diego Cells in your deployment. Entering 0
sets no limit. The default value is 200
. For more information, see Set a maximum number of started containers in Configuring TAS for VMs for Upgrades.
Under NFSv3 volume services, select Allow or Do not allow. Deploying NFS volume services allows app developers to bind existing NFS volumes to their apps for shared file access. For more information, see Enabling Volume Services.
In a new installation of TAS for VMs, the NFSv3 volume services check box is selected by default. When you upgrade from an earlier version of TAS for VMs, the NFSv3 volume services check box is automatically configured the same way it was configured in the earlier version.
(Optional) To configure LDAP for NFSv3 volume services:
The default port is 389. For example, a deployment at cloud.example.com typically uses ou=Users,dc=example,dc=com as its LDAP user search base.
Note: UAA can only parse one certificate entered into this field. If you enter multiple certificates, UAA only uses the first one you entered and ignores the rest. You only need to include one root certificate or self-signed certificate.
(Optional) To deploy SMB volume services, select the Allow SMB volume services check box. Deploying SMB volume services allows developers to bind existing SMB shares to their apps. For more information, see Enabling Volume Services.
If you deploy SMB volume services, you must set the SMB Broker Errand to On in the Errands pane.
(Optional) To force all SMB shares to mount with the noserverino
mount option, select the Force noserverino mount option for SMB mounts check box.
(Optional) To modify the amount of time that health checks wait to receive a healthy response from an app before the app is declared unhealthy, enter the number of seconds you want the timeout period to last in Default health check timeout. If the health check does not receive a healthy response from a newly-started app within the configured timeout period, then the app is declared unhealthy. The default value is 60
. The maximum value you can configure is 600
. If you decrease the default health check timeout below its current value, existing apps with startup times greater than the new value might fail to start up.
(Optional) To limit the number of log lines an app instance can generate per second, select Enable under App log rate limit (deprecated) and enter an integer for Maximum app log lines per second. At minimum, VMware recommends using the default limit of 100
. This feature is deactivated by default.
Note This method of limiting log output from applications is deprecated. VMware recommends using the log rate limiting quota feature which provides more granular control over application log output.
Click Save.
In the App Developer Controls pane, you configure restrictions and default settings for your apps.
To configure the App Developer Controls pane:
Select App Developer Controls.
For Maximum staged droplet size, enter in MB the maximum allowed size of a staged app droplet.
For Maximum package size, enter in MB the maximum allowed total size of all files in an app.
For Default app memory, enter in MB the default amount of memory you want to allocate to a newly-pushed app if no value is specified with cf push
.
For Default app memory quota per org, enter in MB the default memory limit you want to define for all apps in an org. The limit you configure in this field only applies to the first installation of TAS for VMs. After the initial installation, operators can change the default value through the cf CLI.
For Maximum disk quota per app in MB, enter in MB the maximum amount of disk allowed per app.
If you allow developers to push large apps, TAS for VMs might have trouble placing them on Diego Cells. Additionally, in the event of a system upgrade or an outage that causes a rolling deploy, larger apps might not successfully redeploy if there is insufficient disk capacity. Monitor your deployment to ensure that your Diego Cells have sufficient disk to run your apps.
For Default disk quota per app, enter in MB the amount of disk you want to allocate by default to a newly-pushed app if no value is specified with cf push
.
For Default log rate limit per app, enter in bytes per second the default amount of log output that a newly-pushed app is allowed to generate when no limit is specified at push.
For Default service instance quota per org, enter the default service instance limit you want to define for all apps in an org. The limit you configure in this field only applies to the first installation of TAS for VMs. After the initial installation, operators can change the default value through the cf CLI.
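After the initial installation, an operator can adjust these defaults with the cf CLI. The quota name and values below are illustrative; the command is named cf update-quota in cf CLI v6 and cf update-org-quota in v7 and later:

```shell
# Raise the memory and service instance limits on an org quota
# (quota name and values are hypothetical; list quotas first to
# find the name used by your deployment).
cf org-quotas
cf update-org-quota default -m 100G -s 200
```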
For Staging timeout, enter in seconds the amount of time that the Cloud Controller waits for an app to stage before the server times out.
For Internal domains, enter one or more domains that apps use for internal DNS service discovery. If you specify a domain using cf push -d
, other TAS for VMs apps can reach the pushed app at APP-NAME.INTERNAL-DOMAIN
. This value defaults to apps.internal
.
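For instance, app-to-app traffic over the internal domain can be wired up as follows. The app names and port are hypothetical, and the add-network-policy syntax shown is the cf CLI v7+ form:

```shell
# Give the backend app a route on the internal domain, then allow the
# frontend to reach it directly over the container network.
cf map-route backend apps.internal --hostname backend
cf add-network-policy frontend backend --protocol tcp --port 8080
```

After this, the frontend can resolve backend.apps.internal and connect on port 8080 without traffic leaving the container network.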
(Optional) To allow developers to manage their own network policies for their apps, select the Allow space developers to manage network policies check box.
Click Save.
Setting appropriate ASGs is critical for a secure deployment. To acknowledge that once the TAS for VMs deployment completes, it is your responsibility to set the appropriate ASGs:
Select App Security Groups.
For You are responsible for setting the appropriate ASGs after TAS for VMs finishes deploying, enter X.
Click Save.
For more information about ASGs, see App Security Groups. For more information about setting ASGs, see Restricting App Access to Internal TAS for VMs Components.
In the Authentication and Enterprise SSO pane, you configure your user store access.
To configure the Authentication and Enterprise SSO pane:
Select Authentication and Enterprise SSO.
To authenticate user sign-ons, your deployment can use one of three types of user database: the UAA server’s internal user store, an external SAML identity provider, or an external LDAP server. To configure the user database that your deployment uses to authenticate users, select one of the following options under User authentication mechanism:
Click Save.
In the UAA pane, you configure the User Account and Authentication (UAA) server.
To configure the UAA pane:
Select UAA.
Under UAA database location, select one of these options:
Note For GCP installations, VMware recommends using an external database on Google Cloud SQL.
(Optional) If you selected External database, complete the following fields:
Note The CA certificate field only works if your external database hostname matches a name specified in the certificate. This is not true with GCP CloudSQL.
(Optional) Under JWT issuer URI, enter the URI that UAA uses as the issuer when generating tokens.
Under SAML service provider credentials, enter a certificate and private key to be used by UAA as a SAML service provider for signing outgoing SAML authentication requests. You can provide an existing certificate and private key from your trusted Certificate Authority or generate a certificate. The domain *.login.SYSTEM-DOMAIN
must be associated with the certificate, where SYSTEM-DOMAIN
is the System domain you configured in the Domains pane.
Note: The Operations Manager Single Sign-On Service and Operations Manager Spring Cloud Services tiles require the *.login.SYSTEM-DOMAIN domain.
If the private key specified under SAML service provider credentials is password-protected, enter the password under Private key password.
(Optional) To override the default value, enter a custom SAML Entity ID in the SAML Entity ID override field. By default, the SAML Entity ID is http://login.SYSTEM-DOMAIN
, where SYSTEM-DOMAIN
is the System domain you configured in the Domains pane.
For Signature algorithm, choose an algorithm from the drop-down menu to use for signed requests and assertions. The default value is SHA256.
(Optional) In the Apps Manager access token lifetime, cf CLI access token lifetime, and cf CLI refresh token lifetime fields, change the lifetimes of tokens granted for Apps Manager and cf CLI login access and refresh. Most deployments use the defaults.
(Optional) In the Global login session maximum timeout and Global login session idle timeout fields, change the maximum number of seconds before a global login times out. These fields apply to:
(Optional) To customize the text prompts used for username and password from the cf CLI and Apps Manager login popup, enter values for Username label and Password label.
(Optional) The Proxy IPs regular expression field contains a pipe-separated set of regular expressions that UAA considers to be reverse proxy IP addresses. UAA respects the x-forwarded-for
and x-forwarded-proto
headers coming from IP addresses that match these regular expressions. To configure UAA to respond properly to Gorouter or HAProxy requests coming from a public IP address, append a regular expression or regular expressions to match the public IP address.
(Optional) To require URL encoding for UAA client basic authentication credentials, deactivate the Allow basic authentication credentials for UAA clients check box. This check box is activated by default. This represents the default behavior of UAA prior to UAA v74.0.0. URL encoding is defined by RFC6749. For more information, see RFC6749.
Note To require URL encoding for certain UAA clients without deactivating compatibility mode, use the `X-CF-ENCODED-CREDENTIALS=true` HTTP header.
Important: If you deactivate the Allow basic authentication credentials for UAA clients check box, URL encoding is required for all UAA client apps in your deployment. To avoid breaking changes, ensure that all client apps support URL encoding before you deactivate the check box.
(Optional) If you are using Single Sign-On for VMware Tanzu Application Service and you want to honor the CORS policy for custom identity zones, deactivate the Enforce system zone CORS policy across all identity zones check box. This check box is activated by default. If you use Single Sign-On, UAA creates custom identity zones. If you leave this check box activated, UAA ignores the CORS policy for custom identity zones and applies the system default identity zone CORS policy to all zones.
Important: If you deactivate the Enforce system zone CORS policy across all identity zones check box, apps that are integrated with Single Sign-On might experience downtime because the default CORS policy of the custom identity zones is more restrictive. To prevent downtime, you must explicitly set the CORS policy of the custom identity zones according to the needs of your apps. For more information, see the Single Sign-On documentation.
(Optional) To override the default UAA internal user password policies, see Configuring UAA password policy.
Click Save.
In the CredHub pane, you configure the CredHub server.
To configure the CredHub pane:
Select CredHub.
Under CredHub database location, select the location of your CredHub database. TAS for VMs includes this CredHub database for services to store their service instance credentials.
External database: If you select this option, configure these fields:
The default port is 3306.
If you deploy TAS for VMs on AWS using Terraform, you can locate these values in your Terraform output:
rds_address
rds_port
rds_username
rds_password
(Optional) Under KMS plug-in providers, specify one or more Key Management Service (KMS) providers:
Under Internal encryption provider keys, specify one or more keys to use for encrypting and decrypting the values stored in the CredHub database.
For information about using additional keys for key rotation, see Rotating Runtime CredHub Encryption Keys.
(Optional) To configure CredHub to use an HSM, configure these fields:
The default value is 1792.
If your deployment uses any Operations Manager services that support storing service instance credentials in CredHub and you want to activate this feature, select the Secure service instance credentials check box. For more information about using CredHub for securing service instance credentials, see Securing Service Instance Credentials with Runtime CredHub.
Click Save.
Go to the Resource Config pane.
Under the Job column of the CredHub row, ensure that the number of instances is set to 2
. This is the minimum instance count required for high availability.
To configure CredHub so it detects a proxy, complete the following fields:
http_proxy: The proxy to use for all HTTP requests across VMs.
https_proxy: The proxy to use for all HTTPS requests across VMs.
Click Save.
In the Databases pane, you can configure TAS for VMs to use an internal MySQL database provided with Operations Manager, or you can configure an external database provider for the databases required by TAS for VMs.
If you are performing an upgrade, do not modify your existing internal database configuration, or you might lose data. You must migrate your existing data before changing the configuration. For additional upgrade information, see Upgrading Operations Manager.
For GCP installations, VMware recommends selecting External and using Google Cloud SQL. Only use internal MySQL for non-production or test installations on GCP.
To configure internal databases for your deployment:
Under System databases location, select Internal MySQL clusters.
Click Save.
To configure high availability for your internal MySQL databases, see Configure Internal MySQL.
To configure external databases for your deployment:
Ensure that you have a database instance with the following databases created:
account
app_usage_service
autoscale
ccdb
credhub
diego
locket
networkpolicyserver
nfsvolume
notifications
routing
silk
uaa
Note: The steps to create external databases vary depending on your database type. For an example procedure, see Creating Databases for TAS for VMs.
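For a MySQL-compatible server, the database list above can be created in one pass. This sketch only prints the SQL, so you can review it or pipe it to your own client; the connection details in the comment are placeholders:

```shell
# Emit CREATE DATABASE statements for every database TAS for VMs requires.
# Review the output, or pipe it to your client, for example:
#   ./create-tas-dbs.sh | mysql -h HOSTNAME -u admin -p
for db in account app_usage_service autoscale ccdb credhub diego locket \
          networkpolicyserver nfsvolume notifications routing silk uaa; do
  echo "CREATE DATABASE IF NOT EXISTS ${db};"
done
```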
In the TAS for VMs tile, select Databases.
Under System databases location, select External database server.
Important If you configure external databases, you cannot configure an internal database in the UAA pane.
For Hostname, enter the hostname of your external database server.
The Require hostname validation check box is selected by default. When this check box is selected and you configure your external databases to communicate over TLS, TAS for VMs verifies the hostname of the external database during communication between TAS for VMs and the external database.
Caution: If your deployment uses a GCP or Azure external database for TAS for VMs that is configured to use TLS, you must deselect the Require hostname validation check box. For more information, see Deactivate Hostname Validation for External Databases on GCP and Azure in Upgrade Preparation Checklist for Tanzu Operations Manager v2.10.
Important The Require hostname validation check box does not affect communication between TAS for VMs components and external CredHub databases. To configure hostname validation for the CredHub external database, see Configure CredHub.
For TCP port, enter the port of your external database server. If you are using GCP CloudSQL as your external database server, enter 3306
.
Each component that requires a relational database has two corresponding fields: one for the database username, and one for the database password. For each set of fields, specify a unique username that can access this specific database on the database server and a password for the provided username.
Note: Ensure that the networkpolicyserver database user has the ALL PRIVILEGES permission.
(Optional) To configure your external databases to communicate over TLS, enter a CA certificate in CA certificate.
Note: TAS for VMs does not currently support TLS communication for databases that do not include a matching hostname in their server certificate, such as Azure and GCP, unless you deselect the Require hostname validation check box and select the Skip hostname verification check box in the CredHub pane of the TAS for VMs tile. For more information, see the GCP documentation. To configure the Skip hostname verification check box, see Configure CredHub.
Click Save.
In the Internal MySQL pane, you configure the internal MySQL clusters for TAS for VMs. Only configure this section if you selected Internal MySQL clusters in the Databases pane.
To configure the Internal MySQL pane:
Select Internal MySQL.
For Replication canary time period, enter in seconds how frequently the canary checks for replication failure. The default value is 30
. Leave the default of 30 seconds or modify the value based on the needs of your deployment. Lower numbers cause the canary to run more frequently, which means that the canary reacts more quickly to replication failure but adds load to the database.
For Replication canary read delay, enter in seconds how long the canary waits before verifying that data is replicating across each MySQL node. The default value is 20
. Leave the default of 20 seconds or modify the value based on the needs of your deployment. Clusters under heavy load can experience a small replication lag as write-sets are committed across the nodes.
For Email address, enter the email address to which the MySQL service sends alerts when the cluster experiences a replication issue or when a node is not allowed to auto-rejoin the cluster.
The Allow command history check box is selected by default. When this check box is selected, command line history files can be created on MySQL nodes. To prohibit command line history files from being created on the MySQL nodes, deselect this check box.
To allow admin and read-only admin users to connect from any remote host, select the Allow remote admin access check box. When this check box is deselected, admins must bosh ssh into each MySQL VM to connect as the MySQL super user.
Network configuration and ASG restrictions might still limit a client's ability to establish a connection with the databases.
For Cluster probe timeout, enter in seconds the maximum amount of time that a new node searches for existing cluster nodes. The default value is 10.
For Maximum connections, enter the maximum number of concurrent connections allowed to the database. The default value is 3500.
Under Server activity logging, select one of the following options:
To log audit events, select Allow. The MySQL service logs two types of events: connect events, which track who connects to the system, and query events, which track which queries are processed.
Important Internal MySQL audit logs are not forwarded to the syslog server because they can contain personally identifying information (PII) and secrets.
You can use the download-logs script to retrieve the logs, which each MySQL cluster node VM stores in /var/vcap/store/mysql_audit_logs/. For more information, see Script to download MySQL logs for TAS for VMs or Tile HA Clusters in the VMware Tanzu Knowledge Base.
To prevent the MySQL service from logging audit events, select Do not allow.
For Load balancer healthy threshold, enter in seconds the amount of time to wait until reporting that the MySQL Proxy instance has started. This allows an external load balancer time to register the instance as healthy. The default value is 0.
For Load balancer unhealthy threshold, enter in seconds the amount of time that the MySQL Proxy continues to accept connections before shutting down. During this period, the health check reports the MySQL Proxy instance as unhealthy to cause load balancers to fail over to other proxies. You must enter a value greater than or equal to the maximum time it takes your load balancer to consider a proxy instance unhealthy, given repeated failed health checks. The default value is 30.
To allow MySQL Proxy to listen on port 3336, select the Connect to inactive MySQL node check box. When you run MySQL in HA mode, this feature allows you to connect to a MySQL node that is not currently serving traffic, so that you can run auditing and reporting queries without affecting performance.
To configure MySQL Interruptor to prevent MySQL nodes with inconsistent data from writing to the MySQL database, select the Prevent node auto re-join check box.
Click Save.
For more information about how to monitor the node health of your MySQL Proxy instances, see Using the MySQL Proxy.
When you use MySQL High Availability (HA) clusters (Galera), either in TAS for VMs or in the MySQL tile, many different logs are useful when investigating customer issues. The download-logs script gathers the complete set of logs for TAS for VMs or MySQL tile HA (Galera) clusters.
Get the latest download-logs tool from the MySQL tile downloads on Tanzu Network.
Confirm that the download-logs script is executable, then run it with no arguments to see the help text:
$ chmod +x download-logs
$ ./download-logs
BOSH_DEPLOYMENT, BOSH_ENVIRONMENT, BOSH_CLIENT_SECRET, BOSH_CLIENT, and BOSH_CA_CERT are required environment variables
Usage:
-o (Required) The output directory
-X (Optional) Include audit and binary logs
This tool requires the bosh v2 cli and the following environment variables to be set:
BOSH_ENVIRONMENT
BOSH_CLIENT_SECRET
BOSH_CLIENT
BOSH_CA_CERT
BOSH_DEPLOYMENT
Optionally if you require communicating with your BOSH director through a gateway, you must set:
BOSH_GW_PRIVATE_KEY
BOSH_GW_USER
BOSH_GW_HOST
Get the environment variable information from the Director credentials under BOSH Commandline Credentials.
Note These will be at https://OPS-MANAGERFQDN/api/v0/deployed/director/credentials/bosh_commandline_credentials.
Get the deployment name using the command bosh deployments --column=name.
Set the variables for the BOSH CLI and the deployment. For example, when running download-logs for this TAS deployment, the output looks like the example below:
$ export BOSH_CLIENT=ops_manager BOSH_CLIENT_SECRET=<secret> BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate BOSH_ENVIRONMENT=<bosh_director_ip>
$ export BOSH_DEPLOYMENT=<deployment_name>
Run download-logs on the Director VM. For example:
$ ./download-logs -o /tmp -X
Retrieving deployment and vm info...
Downloading deployment logs...
Using environment '10.193.78.11' as user 'admin' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)
Using deployment 'service-instance_ab436820-4d83-4ee6-b0f8-6606e749e405'
Task 71673 | 20:20:00 | Fetching logs for mysql/9abc28fa-fc09-4e32-9a25-3b6c979e4614 (0): Finding and packing log files (00:00:03)
Task 71673 | 20:20:01 | Fetching logs for mysql/c0bace44-6fb0-489b-b101-bf8de085766c (1): Finding and packing log files (00:00:04)
Task 71673 | 20:20:02 | Fetching group of logs: Packing log files together
Task 71673 Started Wed Jun 12 20:19:57 UTC 2019
Task 71673 Finished Wed Jun 12 20:20:02 UTC 2019
Task 71673 Duration 00:00:05
Task 71673 done
Downloading resource 'aca9b560-0618-4b9a-b127-aa3ec3cc9cce' to '/tmp/tmp.vjOqu015YS/service-instance_ab436820-4d83-4ee6-b0f8-6606e749e405-20190612-202004-901615727.tgz'...
0.00%
Succeeded
Downloading logs for: mysql/9abc28fa-fc09-4e32-9a25-3b6c979e4614
Downloading logs for: mysql/c0bace44-6fb0-489b-b101-bf8de085766c
Specify a passphrase of 6-8 words long. Do not use a private passphrase, you will need to share this passphrase with anyone who will decrypt this archive.
gpg: gpg-agent is not available in this session
Encrypted logs saved at /tmp/2019-06-12-20-19-54-mysql-logs.tar.gz.gpg
Move the resulting FILENAME.tar.gz.gpg file to a location with access to the support portal, upload it to the support request (SR), and provide support with the passphrase used.
(Optional) If you wish to extract the archive, you can use the following commands:
$ sudo gpg -d /tmp/FILENAME.tar.gz.gpg > /tmp/mysql-logs.tar.gz
$ sudo tar -zxvf /tmp/mysql-logs.tar.gz
Important Some blobstores, for example, Oracle Cloud Infrastructure Object Storage, do not support S3 Signature v4 Streaming. To use blobstores without S3 Signature v4 Streaming support with VMware Tanzu Application Service for VMs, deselect the Signature v4 streaming check box.
For more information, see AWS-S3 Signature v4 Streaming.
In File Storage, you determine where you want to place the file storage for your Cloud Controller.
To configure the File Storage pane:
Select File Storage.
For Maximum valid packages per app, enter the maximum number of recent valid packages that your app can store, not including the package for the current droplet. VMware recommends using the default value of 2. However, you can lower the value if you have strict storage requirements and want to use less disk space.
For Maximum staged droplets per app, enter the maximum number of recent staged droplets that your app can store, not including the current droplet. VMware recommends using the default value of 2. However, you can lower the value if you have strict storage requirements and want to use less disk space.
Under File storage backup level, select what you want to back up from your blobstores:
For more information about the advantages and disadvantages of excluding droplets and packages, see File storage backup level.
To configure Cloud Controller filesystem, see Configuring file storage for TAS for VMs.
In the System Logging pane, you can configure system logging in TAS for VMs to forward log messages from TAS for VMs component VMs to an external service. VMware recommends forwarding logs to an external service for use in troubleshooting. If you do not fill these fields, platform logs are not forwarded but remain available on the component VMs and for download through Tanzu Operations Manager.
This procedure explains how to configure system logging for TAS for VMs component VMs. To forward logs from Operations Manager tiles to an external service, you must also configure system logging in each tile. For more information about configuring system logging, see the documentation for the given tiles.
To configure the System Logging pane:
Select System Logging.
For Syslog server address, enter the hostname or IP address of the syslog server.
For Syslog server port, enter the port of the syslog server. The default port for a syslog server is 514.
Note The host must be reachable from the TAS for VMs network and accept UDP or TCP connections. Ensure the syslog server listens on external interfaces.
For Transport protocol, select a transport protocol for log forwarding.
(Optional) For Environment identifier, enter a custom label, such as the name of your foundation, to include in the structured data of forwarded syslog messages with the parameter name environment.
For TLS encryption, select one of the following options:
(Optional) To include security events in the log stream, activate the Log Cloud Controller security events checkbox. When this checkbox is activated, TAS for VMs logs all API requests in the Common Event Format (CEF), including the endpoint, user, source IP address, and request result.
(Optional) To transmit logs over TCP, activate the Use TCP for file forwarding local transport checkbox. This prevents log truncation, but might cause performance issues.
The Do not forward debug logs check box is activated by default. To forward DEBUG syslog messages to an external service, deactivate the check box.
Note Some TAS for VMs components generate a high volume of DEBUG syslog messages. Activating the Do not forward debug logs check box prevents TAS for VMs components from forwarding DEBUG syslog messages to external services. However, TAS for VMs still writes the messages to the local disk.
For Custom rsyslog configuration, enter a custom syslog rule. For more information about adding custom syslog rules, see Customizing platform log forwarding.
Configure how TAS for VMs emits app logs and app metrics for ingestion in your deployment. The options include:
Option | Configuration procedure
---|---
Use existing Firehose app log and metrics integrations |
Preserve existing Firehose integrations for app metrics, but use an alternate method for app log ingestion | Caution Do not use this option if your deployment depends on partner log integrations.
Deactivate all Firehose integrations and use alternate methods for both app log and app metric ingestion | Caution Do not use this option if your deployment depends on any of these:
Field Descriptions:
The following table provides more details on field values:
Field Name | Description |
---|---|
Enable V1 Firehose | Activated by default. When this checkbox is activated, logs and metrics flow to the Loggregator V1 Firehose. |
Enable V2 Firehose | Activated by default. When this checkbox is activated, logs and metrics flow to the Loggregator V2 Firehose. |
Send default Loggregator drain metadata | Activated by default. When this checkbox is activated, TAS for VMs sends all metadata in app and aggregate syslog drains. Deactivating this checkbox can reduce logging to external databases by up to 50 percent. |
Do not forward app logs to the Firehose | Deactivated by default. When this checkbox is activated, TAS for VMs prevents the Firehose from emitting app logs, but still allows the Firehose to emit app metrics. Deactivating logs in Firehose helps reduce the load on TAS for VMs by allowing you to scale down Doppler and Traffic Controller VMs. |
Aggregate syslog drain destinations | Aggregate drains forward all app logs on your foundation to the endpoints that you provide in this field. To configure this field, enter a comma-separated list of syslog endpoints for aggregate log drains. Specify the endpoints in the format: syslog://HOSTNAME:PORT . To use TLS for sending logs, specify syslog-tls://HOSTNAME:PORT or https://HOSTNAME:PORT . |
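The endpoint list in Aggregate syslog drain destinations is comma-separated, with one of the three schemes above per entry. As an informal sanity check before pasting a list into the field (this script is not part of the product, and the hostnames are made up), you can confirm that each entry parses with an allowed scheme and an explicit port:

```python
from urllib.parse import urlsplit

ALLOWED_SCHEMES = {"syslog", "syslog-tls", "https"}

def check_drains(value):
    """Validate a comma-separated aggregate drain list; return normalized endpoints."""
    endpoints = []
    for entry in value.split(","):
        parts = urlsplit(entry.strip())
        if parts.scheme not in ALLOWED_SCHEMES:
            raise ValueError(f"unsupported scheme: {entry!r}")
        if parts.port is None:
            raise ValueError(f"missing port: {entry!r}")
        endpoints.append(f"{parts.scheme}://{parts.hostname}:{parts.port}")
    return endpoints

# Hypothetical endpoints, for illustration only:
print(check_drains("syslog-tls://logs.example.com:6514, syslog://10.0.1.5:514"))
```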
(Optional) For System metrics scrape interval, the default value is 1m, which configures TAS for VMs to send BOSH system metrics to your logging endpoint once per minute. To configure TAS for VMs to send metrics more or less frequently, modify the value in this field. For example, enter 2m to send metrics every two minutes, or 10s to send metrics every ten seconds. VMware recommends configuring a minimum interval of five seconds, or 5s.
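The scrape interval uses simple duration strings such as 1m or 10s. The helper below is an illustrative sketch (not part of the product) that converts such values to seconds, which makes it easy to compare a candidate interval against the recommended 5s minimum:

```python
import re

def interval_seconds(value: str) -> int:
    """Convert a simple duration string like '2m' or '10s' to seconds."""
    match = re.fullmatch(r"(\d+)([sm])", value)
    if not match:
        raise ValueError(f"unrecognized interval: {value!r}")
    number, unit = int(match.group(1)), match.group(2)
    return number * 60 if unit == "m" else number

print(interval_seconds("1m"))   # default: once per minute -> 60
print(interval_seconds("10s"))  # every ten seconds -> 10
```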
Click Save.
To configure Tanzu Operations Manager for system logging, see Settings page in Using the Tanzu Operations Manager interface.
In the Custom Branding pane, you can customize the appearance of the TAS for VMs login portal and Apps Manager. In the Apps Manager pane, you can configure the functionality of Apps Manager, as well as how it displays the pages for your apps in the Marketplace.
To configure the Custom Branding and Apps Manager panes:
Select Custom Branding.
Configure the fields in the Custom Branding pane. For more information about the Custom Branding configuration settings, see Custom-branding Apps Manager.
Click Save.
Select Apps Manager.
(Optional) To allow inviting new users to Apps Manager, select the Allow invitations to Apps Manager check box. Space Managers can invite new users for a given space, Org Managers can invite new users for a given org, and Admins can invite new users across all orgs and spaces. For more information, see Invite new users in Managing User Roles with Apps Manager.
(Optional) To allow users to download documentation and cf CLI packages in air-gapped environments, select the Allow access to offline tools check box. When you select this check box, the cf CLI installer is included in the Apps Manager BOSH release and updates each time you apply changes in Tanzu Operations Manager.
Under Included cf CLI packages, select either cf CLI v7 or cf CLI v8.
(Optional) To configure a custom name for your product page in the Marketplace, enter a name in Product name.
(Optional) To configure a custom name for the Marketplace page, enter a name in Marketplace name.
(Optional) To configure the Marketplace link in Apps Manager to go to your own marketplace, enter the URL for your marketplace in Marketplace URL.
(Optional) To configure secondary navigation links in the Apps Manager and Marketplace pages, enter link text and URLs under Secondary navigation links. You may configure up to 10 links in the Apps Manager secondary navigation.
(Optional) To display the prices for your services plans in the Marketplace, select the Display service plan prices in Marketplace check box.
(Optional) For Marketplace currencies, enter the currency codes and symbols you want to appear in the Marketplace as a JSON string. The default string is { "usd": "$", "eur": "€" }.
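The field value is a plain JSON object mapping lowercase currency codes to display symbols. If you build a custom map, you can confirm it parses before pasting it into the field; this is a convenience sketch, not an official validator, and the gbp entry is a made-up addition for illustration:

```python
import json

value = '{ "usd": "$", "eur": "€", "gbp": "£" }'  # "gbp" added for illustration only

currencies = json.loads(value)
# Every key and value must be a string for the Marketplace to render it.
assert all(isinstance(k, str) and isinstance(v, str) for k, v in currencies.items())
print(currencies["usd"])
```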
(Optional) For Apps Manager buildpack, enter the name of the buildpack you want to use to deploy the Apps Manager app. The buildpack you specify must be for static content. The default buildpack is staticfile_buildpack. If you do not specify a buildpack, TAS for VMs uses the detection process to determine a single buildpack to use. For more information about the detection process, see Buildpack detection in How Buildpacks Work.
(Optional) For Search Server buildpack, enter the name of the buildpack you want to use to deploy the Search Server app. The buildpack you specify must be a node-based buildpack. The default buildpack is nodejs_buildpack. If you do not specify a buildpack, TAS for VMs uses the detection process to determine a single buildpack to use. For more information about the detection process, see Buildpack detection in How Buildpacks Work.
(Optional) For Invitations buildpack, enter the name of the buildpack you want to use to deploy the Invitations app. The buildpack you specify must be a node-based buildpack. The default buildpack is nodejs_buildpack. If you do not specify a buildpack, TAS for VMs uses the detection process to determine a single buildpack to use. For more information about the detection process, see Buildpack detection in How Buildpacks Work.
The Apps Manager memory usage field configures the memory limit with which to deploy the Apps Manager app. If the app fails to start and shows an out of memory error, increase the memory limit by modifying this field. Enter a value in MB. To use the default value of 128, leave this field blank.
The Search Server memory usage field configures the memory limit with which to deploy the Search Server app. If the app fails to start and shows an out of memory error, increase the memory limit by modifying this field. Enter a value in MB. To use the default value of 256, leave this field blank.
The Invitations memory usage field configures the memory limit with which to deploy the Invitations app. If the app fails to start and shows an out of memory error, increase the memory limit by modifying this field. Enter a value in MB. To use the default value of 256, leave this field blank.
(Optional) If Apps Manager usage degrades Cloud Controller response times, you can configure Apps Manager polling interval to reduce the load on the Cloud Controller and ensure that Apps Manager remains available while you troubleshoot the Cloud Controller. VMware recommends that you only modify this field for as long as you are troubleshooting the Cloud Controller. If you do not modify this field back to its default value of 30 after you finish troubleshooting, it can degrade Apps Manager performance. You can modify this field in one of the following ways:
To lengthen the polling interval, enter a value greater than 30. If you enter a value between 0 and 30, the field automatically reverts to 30.
To stop polling, enter 0. This stops Apps Manager from refreshing data automatically, but users can update displayed data by reloading Apps Manager manually.
(Optional) If configuring the Apps Manager polling interval field is not sufficient, you can further reduce the load on the Cloud Controller by also modifying App details polling interval. This field controls the rate at which Apps Manager polls for data when a user views the Overview page of an app. VMware recommends that you only modify this field for as long as you are troubleshooting the Cloud Controller. If you do not modify this field back to its default value of 10 after you finish troubleshooting, it can degrade Apps Manager performance. You can modify this field in one of the following ways:
To lengthen the polling interval, enter a value greater than 10. If you enter a value between 0 and 10, the field automatically reverts to 10.
To stop polling, enter 0. This stops Apps Manager from refreshing data automatically, but users can update displayed data by reloading Apps Manager manually.
To configure multi-foundation support for Apps Manager, enter a JSON object containing all the foundations you want Apps Manager to manage in Multi-foundation configuration (beta). Configuring multi-foundation support allows you to manage orgs, spaces, apps, and service instances from multiple TAS for VMs foundations in a single Apps Manager interface. For more information, see Configuring multi-foundation support in Apps Manager.
For Redirect URIs, enter a comma-separated list of the URI for each additional foundation you configured in Multi-foundation configuration (beta).
Click Save.
In the Email Notifications pane, you can allow users to register their own Apps Manager accounts. TAS for VMs uses SMTP to email invitations and confirmations to Apps Manager users. If you do not need this service, leave this pane blank and deactivate the Notifications and Notifications UI errands in the Errands pane.
To configure the Email Notifications pane:
Select Email Notifications.
For From email, enter the email address from which email notifications are sent.
For SMTP server address, enter the SMTP address of the server that sends email notifications.
For SMTP server port, enter the port of the SMTP server that sends email notifications. For GCP, you must use port 2525. Ports 25 and 587 are not allowed on GCP Compute Engine.
For SMTP server credentials, enter the user name and password for the SMTP server that sends email notifications.
(Optional) To configure your SMTP server to automatically create a secure TLS connection when sending email notifications, select the Use StartTLS protocol check box.
After you verify your authentication requirements with your email administrator, select None, Plain, or CRAMMD5 from the SMTP authentication mechanism drop-down menu. If you have no SMTP authentication requirements, select None.
If you selected CRAMMD5 as your authentication mechanism, enter a secret in the SMTP CRAMMD5 secret field.
Click Save.
Important If you do not configure the SMTP settings in the Email Notifications pane, an administrator for Apps Manager must create orgs and users through the cf CLI. For more information, see Creating and managing users with the cf CLI.
In the App Autoscaler pane, you configure the App Autoscaler service. To use App Autoscaler, you must create an instance of the service and bind it to an app. To create an instance of App Autoscaler and bind it to an app, see Set up App Autoscaler in Scaling an App Using App Autoscaler.
To configure the App Autoscaler pane:
Select App Autoscaler.
For App Autoscaler instance count, enter the number of instances of the App Autoscaler service you want to deploy. The default value is 3. For high availability, set this number to 3 or higher. Larger environments might require more instances than the default number. VMware recommends one App Autoscaler instance for every 10 apps using App Autoscaler.
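The sizing guidance above (at least 3 instances for high availability, plus one instance per 10 autoscaled apps) can be expressed as a quick calculation. The function name is mine, not part of the tile:

```python
from math import ceil

def recommended_autoscaler_instances(autoscaled_apps: int) -> int:
    """At least 3 for HA, and roughly 1 instance per 10 apps using App Autoscaler."""
    return max(3, ceil(autoscaled_apps / 10))

print(recommended_autoscaler_instances(8))    # small foundation: HA minimum of 3
print(recommended_autoscaler_instances(120))  # larger foundation: 12
```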
For App Autoscaler API instance count, enter the number of instances of the App Autoscaler API you want to deploy. The default value is 1. Larger environments might require more instances than the default number.
For Metric collection interval, enter how many seconds of data collection you want App Autoscaler to evaluate when making scaling decisions. The minimum interval is 60 seconds, and the maximum interval is 3600 seconds. The default value is 120. Increase this number if the metrics you use in your scaling rules are emitted less frequently than the existing metric collection interval.
For Scaling interval, enter in seconds how frequently App Autoscaler evaluates an app for scaling. The minimum interval is 15 seconds, and the maximum interval is 120 seconds. The default value is 35.
To configure verbose logging for App Autoscaler, select the Allow verbose logging check box. Verbose logging is disallowed by default. Selecting the Allow verbose logging check box allows you to see more detailed logs. Verbose logs show specific reasons why App Autoscaler scaled the app, including information about instance limits and the status of App Autoscaler. For more information about App Autoscaler logs, see Manage App Autoscaler Notifications in Scaling an App Using App Autoscaler.
If you do not want the Autoscaler API to reuse HTTP connections, select the Disallow API connection pooling check box. This might be necessary if your front end idle timeout for the Gorouter is set to a low value, such as 1. For more information, see Configuring front end idle timeout for Gorouter.
To allow App Autoscaler to email event notifications to space developers, select the Send email notifications check box. For more information about managing email notifications, see Manage App Autoscaler notifications in Scaling an App Using App Autoscaler.
Click Save.
In the Cloud Controller pane, you configure the Cloud Controller.
To configure the Cloud Controller pane:
Click Cloud Controller.
Enter your Cloud Controller database encryption key if all of the following are true:
Cloud Foundry API rate limiting prevents API consumers from overwhelming the platform API servers. Limits are imposed on a per-user or per-client basis and reset on an hourly interval. Under Cloud Foundry API rate limiting, select one of the following options:
To deactivate rate limiting, select Disable.
To activate rate limiting, select Enable, then configure the limits. For Maximum authenticated request rate, enter the number of requests each authenticated user or client can make per hour. The default value is 2000. For Maximum unauthenticated request rate, enter the number of requests unauthenticated clients can make per hour. The default value is 100.
(Optional) For Database connection validation timeout, enter in seconds the database connection validation timeout period you want to configure. The default value is 3600. To configure Cloud Controller to make an additional query to the database whenever connections are checked out from the pool, enter -1. Configuring -1 in this field has performance implications.
(Optional) For Database read timeout, enter in seconds the database read timeout period you want to configure. The default value is 3600.
(Optional) For Cloud Controller monit health check timeout, enter in seconds the amount of time to wait before an HTTP request from the Cloud Controller monit health check is closed. The default value is 6.
(Optional) For Age of audit events pruned from Cloud Controller database, enter in days the age at which audit events are pruned from the Cloud Controller database. The default value is 31.
(Optional) For Age of completed tasks pruned from Cloud Controller database, enter in days the age at which completed tasks are pruned from the Cloud Controller database. The default value is 31.
(Optional) Rotate the Cloud Controller database (CCDB) encryption key using the Encryption key ledger field. For more information, see Rotating the Cloud Controller database encryption key.
For Available Stacks, configure the set of stacks you want to make available to app developers on the platform:
Select the stacks to expose to developers. cflinuxfs4 is the latest stack, based on Ubuntu Jammy Jellyfish (22.04), and is recommended. cflinuxfs3, an older stack based on Ubuntu Bionic Beaver (18.04), remains supported. Operators have the following configuration options:
cflinuxfs4 only
cflinuxfs3 and cflinuxfs4
Select a Default stack. This stack is used whenever an app is pushed with no stack specified. The available options are cflinuxfs3 and cflinuxfs4. tanzu-jammy is not supported as a default stack. If you select a stack list that does not include cflinuxfs3, then cflinuxfs4 is used as the default stack.
(Optional) Number of local workers per Cloud Controller VM. Defaults to 2. See Scaling Local Workers for more information.
(Optional) Enable Prometheus metrics for Cloud Controller. Defaults to disabled. For more information, see Cloud Controller metrics.
(Optional) Enable StatsD metrics for Cloud Controller. Defaults to enabled. For more information, see Cloud Controller metrics.
(Optional) Enable Cloud Controller web server to utilize multiple CPU cores. This allows the Cloud Controller API VMs to use multiple CPU cores by running the Ruby Puma web server instead of Thin. This feature is still in beta and uses a different format (Prometheus) for primary metrics. Consult your Broadcom support team before turning it on. For more information about these properties, see Scaling Puma Web Server.
Select the number of Puma workers. This is the number of child processes to run in Puma clustered mode. This number should not exceed the number of CPU cores available to the Cloud Controller API VM. You can find the number of CPU cores on the Resource Config page in the Tanzu Operations Manager UI.
Maximum number of threads per Puma worker. For a Cloud Controller running Puma to handle as many concurrent requests as an API server running Thin, VMware recommends that the number of workers multiplied by the number of threads is greater than or equal to 20. The default is 10 threads.
Maximum database connections per Puma worker. The total number of connections equals the number of workers multiplied by the maximum database connections per worker. Be aware that setting this value incorrectly can create too many connections for your database. For more information, see Scaling Puma Web Server.
Important If you reduce the overall number of Cloud Controller API VMs, consider increasing the Number of local workers per Cloud Controller VM property. For more information, see Scaling Local Workers.
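The Puma guidance above reduces to two checks: workers times threads should be at least 20 to match Thin's concurrency, and workers times connections-per-worker is the total database connection budget. A small sketch (the function name and the per-worker connection figure are mine, chosen for illustration):

```python
def puma_sizing(workers: int, threads_per_worker: int, db_conns_per_worker: int):
    """Return (total request concurrency, total DB connections) for a Puma config."""
    concurrency = workers * threads_per_worker
    total_db_connections = workers * db_conns_per_worker
    if concurrency < 20:
        raise ValueError("workers x threads should be >= 20 to match Thin throughput")
    return concurrency, total_db_connections

# Example: 2 workers (a 2-core VM) at the default 10 threads each
print(puma_sizing(workers=2, threads_per_worker=10, db_conns_per_worker=25))
```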
Click Save.
In the Smoke Tests pane, you configure where smoke tests are run. In this location, the Smoke Tests errand pushes an app to an Operations Manager org to run basic functionality tests against the TAS for VMs tile after an installation or update. In the Errands pane, you can choose whether to run the Smoke Tests errand.
To configure the Smoke Tests pane:
Select Smoke Tests.
In Smoke tests location, select one of the following options to configure where TAS for VMs pushes an app to run smoke tests:
Temporary space within the system org: Creates a temporary space in the system org for running smoke tests and deletes the space afterwards.
Specified org and space: Pushes the app to the org and space that you specify.
Click Save.
The Advanced Features pane includes new capabilities that might have certain constraints. Although these features are fully supported, VMware recommends caution when using them in production environments.
If you intend to deploy Diego Cells only through one or more isolation segment deployments, use this option to remove all Diego Cells from the TAS for VMs deployment. You might wish to do this to completely separate updates to Diego Cells from updates to the rest of the TAS for VMs deployment.
Important At least one isolation segment must deploy non-isolated Diego cell VMs so that the TAS for VMs installation has the shared Diego cells that are necessary to host system components that run as apps. Do not deploy app-based system components or run smoke-test errands until the non-isolated Diego cells in these isolation segment deployments are present.
If your apps do not use the full allocation of disk space and memory set in the Resource Config tab, you might want to use this feature. These fields control the amount of disk and memory resources to overcommit to each Diego Cell VM.
For example, you might want to use the overcommit if your apps use a small amount of disk and memory capacity compared to the amounts set in the Resource Config settings for Diego Cell.
Due to the risk of app failure and the deployment-specific nature of disk and memory use, VMware has no recommendation for how much, if any, memory or disk space to overcommit.
To use overcommit:
Select Advanced Features.
In the Diego Cell memory capacity field, enter in MB the total desired amount of Diego Cell memory. See the Diego Cell row in the Resource Config tab for the current Diego Cell memory capacity settings that this field overrides.
In the Diego Cell disk capacity field, enter in MB the total desired amount of Diego Cell disk capacity. See the Diego Cell row in the Resource Config tab for the current Diego Cell disk capacity settings that this field overrides.
Click Save.
Entries made to each of these two fields set the total amount of resources allocated, not the overage.
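Because the fields set totals rather than overage, compute the field value as the physical capacity multiplied by your chosen overcommit factor. The figures below are hypothetical, since VMware makes no recommendation for how much to overcommit:

```python
physical_memory_mb = 32768   # assumed per-cell memory from the Resource Config tab
overcommit_factor = 1.25     # a 25% overcommit, chosen purely for illustration

# Value to enter in the "Diego Cell memory capacity" field (a total, not an overage):
field_value_mb = int(physical_memory_mb * overcommit_factor)
print(field_value_mb)
```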
If your apps require a longer period of time to finish in-flight jobs and gracefully shut down, you can increase the graceful shutdown period. By default, this graceful shutdown period is set to 10 seconds.
When TAS for VMs requests a shutdown of an app, the processes in the container have a period of time to gracefully shut down before the processes are forcefully terminated. For more information, see Shutdown in App Container Lifecycle.
If you significantly increase the value of the graceful shutdown period, platform upgrades and updates might become slower. This is because each Diego Cell uses the graceful shutdown period when it is cleaning up evacuated app instances and waits for each app to gracefully shut down.
VMware recommends using isolation segments to separate apps that have different shutdown requirements to ensure Diego Cell update times are reliable. For more information, see Installing Isolation Segment.
To avoid unexpected behavior, you must ensure that App graceful shutdown period has the same value in all environments that have deployed apps.
To increase the app graceful shutdown period:
Select Advanced Features.
In the App graceful shutdown period field, enter the number of seconds that you want the platform to wait for an app instance to exit after it is signaled to gracefully shut down. The default and minimum value is 10.
Click Save.
Some private networks require extra configuration so that internal file storage (WebDAV) can communicate with other TAS for VMs processes.
The Non-RFC-1918 private network allow list field is provided for deployments that use a non-RFC 1918 private network. This is typically a private network other than 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.
Most TAS for VMs deployments do not require any modifications to this field.
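If you are unsure whether your network falls inside the RFC 1918 ranges, a quick check against the three blocks settles it. This sketch uses Python's standard `ipaddress` module; the helper name is an illustrative assumption.

```python
import ipaddress

# The three private ranges defined by RFC 1918.
RFC_1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def needs_allow_rule(cidr: str) -> bool:
    """Return True if the network is outside all RFC 1918 blocks,
    meaning it would need an entry in the allow list field."""
    net = ipaddress.ip_network(cidr)
    return not any(net.subnet_of(block) for block in RFC_1918_BLOCKS)

print(needs_allow_rule("10.4.0.0/16"))    # False: already RFC 1918
print(needs_allow_rule("172.99.0.0/24"))  # True: needs an allow rule
```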
To add your private network to the allow list:
Select Advanced Features.
Append a new allow rule to the existing contents of the Non-RFC-1918 private network allow list field. Include the word allow, the network CIDR range to allow, and a semicolon (;) at the end. For example:
allow 172.99.0.0/24;
Click Save.
The cf CLI connection timeout field allows you to override the default five-second timeout of the Cloud Foundry Command Line Interface (cf CLI) used within your TAS for VMs deployment. This timeout affects the cf CLI command used to push TAS for VMs errand apps such as Notifications, App Autoscaler, and Apps Manager.
Set the value of this field to a higher value, in seconds, if you are experiencing domain name resolution timeouts when pushing errands in TAS for VMs.
To modify the value of the cf CLI connection timeout:
Select Advanced Features.
Enter a value, in seconds, in the cf CLI connection timeout field.
Click Save.
You can allow TLS communication for clients of the internal system database. This feature is in beta, and the Allow TLS for internal MySQL database (beta) check box is deselected by default. For more information about the internal system database, see Managing internal databases.
To allow TLS communication for clients of the internal system database:
Select Advanced Features.
Select the Allow TLS for internal MySQL database (beta) check box.
Click Save.
You can configure the maximum number of concurrent database connections that Diego and container networking components can have.
To configure the maximum number of concurrent database connections:
Select Advanced Features.
Enter a value in each field beginning with Maximum number of open connections… for a given component. The placeholder values for each field are the default values.
Click Save.
When there are not enough connections available, such as during a time of heavy load, components may experience degraded performance and sometimes failure. To resolve or prevent this, you can increase and fine-tune database connection limits for the component.
Decreasing the value of this field for a component might affect the amount of time it takes for it to respond to requests.
You can disallow rolling app deployments. For more information, see Rolling app deployments.
To disallow rolling app deployments:
Select Advanced Features.
Select the Disallow rolling app deployments check box.
Click Save.
By default, Log Cache keeps 100,000 envelopes per source. An envelope wraps an event and adds metadata. For sources that produce more than 100,000 envelopes, this default might not retain enough history to cover the time period of a historical query.
To set the maximum number of envelopes stored per source above the default:
Select Advanced Features.
Enter a value in the Maximum number of envelopes stored in Log Cache per source field.
Click Save.
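A rough way to size this limit is to divide the per-source cap by the source's envelope emission rate, which gives the approximate duration of history retained. The emission rate below is an assumed figure for illustration; actual rates vary widely per deployment.

```python
# Rough sizing sketch: how much history does the per-source cap buy?
# The emission rate is an assumption for illustration, not a measured value.
envelopes_per_source = 100_000   # Log Cache default cap per source
emission_rate = 250              # assumed envelopes/second for a chatty source

history_seconds = envelopes_per_source / emission_rate
print(f"~{history_seconds / 60:.1f} minutes of history")
# → ~6.7 minutes of history
```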
By default, Usage Service deletes granular data after 365 days.
To configure this retention period:
Select Advanced Features.
In the Usage Service data retention period field, enter the number of days of granular data you want to retain.
Click Save.
To avoid performance or data migration issues, VMware recommends that you do not retain data for longer than 365 days. Configuring this field does not affect monthly summary records.
For more information, see Usage data retention in Reporting App, Task, and Service Instance Usage.
Note: The Metric Registrar is included with TAS for VMs, and you configure it in the TAS for VMs tile. You do not install and configure Metric Registrar as a separate product tile.
In the Metric Registrar pane, you configure the Metric Registrar. The Metric Registrar allows TAS for VMs to convert structured logs into metrics. It also scrapes metrics endpoints and forwards the metrics to Loggregator.
If you configure the Metric Registrar, VMware recommends that you also run the Metric Registrar smoke test errand. For more information, see Configure smoke tests.
By default, the Metric Registrar is deployed.
To deactivate the Metric Registrar:
Select Metric Registrar.
Deselect the Deploy Metric Registrar check box.
Click Save.
The scraping interval defines how often the Metric Registrar polls custom metric endpoints. The default is 35 seconds.
To edit the Metric Registrar scraping interval:
Select Metric Registrar.
Edit the Endpoint scraping interval field.
Click Save.
To prevent the Metric Registrar from consuming the value of a metric or event tag, you can add the tag to the Blocked tags field. For example, if you tag your metrics with a customer_id, you may want to add customer_id to the list of blocked tags.
By default, the following tags are blocked to prevent interference with other products, such as App Metrics, that use and rely on them:
deployment
job
index
id
To prevent the Metric Registrar from consuming the value of a metric or event tag:
Select Metric Registrar.
Add the desired tags to the Blocked tags field in a comma-separated list.
Click Save.
The App instance metrics limit per scraping interval field defines how many metrics are emitted per app instance per interval. If the number of metrics an app instance generates within a scraping interval exceeds the configured limit, no metrics are emitted. Each individual Prometheus value is counted as one metric, including multiple values within the same metric family. By default, there is no limit.
To configure a metrics limit per scraping interval for app instances:
Select Metric Registrar.
Edit the App instance metrics limit per scraping interval field.
Click Save.
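Because every individual value in the Prometheus text exposition counts toward the limit, a quick way to estimate an app's metric count is to count the non-comment sample lines in its exposition output. This sketch is a simplified illustration of that counting rule, not the Metric Registrar's actual parser.

```python
def count_prometheus_values(exposition: str) -> int:
    """Count individual sample values in Prometheus text exposition,
    skipping # HELP / # TYPE comments and blank lines. Each labeled
    value counts once, even within the same metric family."""
    return sum(
        1 for line in exposition.splitlines()
        if line.strip() and not line.startswith("#")
    )

sample = """\
# HELP http_requests_total Total HTTP requests.
# TYPE http_requests_total counter
http_requests_total{code="200"} 1027
http_requests_total{code="500"} 3
"""
print(count_prometheus_values(sample))  # → 2: two values in one metric family
```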
Errands are scripts that Tanzu Operations Manager runs automatically when it installs or uninstalls a product, such as a new version of TAS for VMs. There are two types of errands: post-deploy errands run after the product is installed, and pre-delete errands run before the product is uninstalled.
By default, Tanzu Operations Manager always runs all errands.
In the Errands pane, you can change these run rules. For each errand, you can select On to always run it or Off to never run it.
For more information about how Tanzu Operations Manager manages errands, see Managing Errands in Tanzu Operations Manager.
Note: Several errands, such as App Autoscaler and Notifications, deploy apps that provide services for your deployment. When one of these apps is running, selecting Off for the corresponding errand on a subsequent installation does not stop the app.
The Smoke Test Errand runs basic functionality tests against your deployment.
Caution If you deactivated both the V1 and V2 Firehoses in the System Logging pane of the TAS for VMs tile, you must also deactivate the Smoke Test Errand. Otherwise, TAS for VMs fails to deploy.
The Usage Service Errand deploys the Usage Service app on which Apps Manager depends.
The Offline Docs Errand deploys an offline documentation app that can be used in air-gapped environments.
The Apps Manager Errand deploys Apps Manager, a dashboard for managing apps, services, orgs, users, and spaces. Until you deploy Apps Manager, you must perform these functions through the cf CLI. After you first deploy Apps Manager, VMware recommends setting this errand to Off for subsequent TAS for VMs deployments. For more information about Apps Manager, see Getting Started with Apps Manager.
The Notifications Errand deploys an API for sending email notifications to your TAS for VMs platform users.
Important The Notifications app requires you to configure a user name and password for your SMTP server, even if you select None from the SMTP authentication mechanism drop-down menu in the Email Notifications pane of the TAS for VMs tile. To configure a user name and password for your SMTP server, see Configure email notifications.
The Notifications UI Errand deploys a dashboard through which users can manage their notification subscriptions.
The App Autoscaler Errand pushes the App Autoscaler app, which allows you to configure your apps to automatically scale in response to changes in their usage load. For more information, see Scaling an App Using App Autoscaler.
The App Autoscaler Smoke Test Errand runs smoke tests against App Autoscaler.
The NFS Broker Errand pushes the NFS Broker app, which supports NFS volume services for TAS for VMs. For more information, see Enable NFS Volume Services in Enabling Volume Services.
The Metric Registrar Smoke Test Errand verifies that the Metric Registrar can access custom metrics that an app emits and convert them into Loggregator metrics.
The Metric Registrar Smoke Test errand runs only if the Metric Registrar is deployed. For more information about configuring the Metric Registrar, see Metric Registrar and Custom App Metrics.
The SMB Broker Application Errand pushes the SMB Broker app, which supports SMB volume services for TAS for VMs. For more information, see Enable SMB volume services in Enabling Volume Services.
The Rotate CC Database Key errand re-encrypts sensitive data in the Cloud Controller database with the key that is currently marked as Primary in the Cloud Controller pane of the TAS for VMs tile. You configure this key in the Encryption key ledger field of the Cloud Controller pane.
In the Resource Config pane, you must associate load balancers with the VMs in your deployment to allow the VMs to receive traffic. For more information, see Configure Load Balancing for TAS for VMs.
Note The Resource Config pane has fewer VMs if you are installing Small Footprint TAS for VMs. For more information, see Getting Started with Small Footprint TAS for VMs.
Note Small Footprint TAS for VMs does not default to a highly available configuration. It defaults to the minimum configuration. To make Small Footprint TAS for VMs highly available, scale the Compute, Router, and Database VMs to 3 instances and scale the Control VM to 2 instances.
TAS for VMs defaults to a highly available resource configuration. However, you might need to follow additional procedures to make your deployment highly available. For more information, see High availability in TAS for VMs and Scaling TAS for VMs.
If you do not want a highly available resource configuration, you must scale down your instances manually by navigating to the Resource Config section and using the drop-down menus under Instances for each job.
By default, TAS for VMs also uses an internal filestore and internal databases. If you configure TAS for VMs to use external resources, you can deactivate the corresponding system provided resources in Tanzu Operations Manager to reduce costs and administrative overhead.
To deactivate specific VMs in Tanzu Operations Manager:
Select Resource Config.
If you configured TAS for VMs to use an external S3-compatible filestore, enter 0 in Instances in the File Storage field.
If you selected External when configuring the UAA, System, and CredHub databases, enter 0 in Instances for each of the internal database jobs.
If you deactivated TCP routing, enter 0 in Instances in the TCP Router field.
Click Save.
This step is only required if your Tanzu Operations Manager deployment does not already have the stemcell version that TAS for VMs requires. For more information about importing stemcells, see Importing and managing Stemcells.
To download the stemcell version TAS for VMs requires:
Go to the Stemcell product page on Broadcom Support. You may need to log in.
Download the appropriate stemcell version for your IaaS.
Go to the Tanzu Operations Manager Installation Dashboard.
Click Stemcell Library.
Click Import Stemcell to import the stemcell .tgz file that you downloaded in a previous step.
When you are prompted, stage your stemcell by selecting the Tanzu Operations Manager product check box.
Click Apply Stemcell to Products.
To complete your installation of TAS for VMs:
Click the Installation Dashboard link to return to the Tanzu Operations Manager Installation Dashboard.
Click Review Pending Changes.
Click Apply Changes. The install process generally takes a minimum of 90 minutes to complete. When the installation process completes successfully, the Changes Applied window appears.
Click Close or Return to Installation Dashboard.