Install Tanzu RabbitMQ for Kubernetes using the Carvel toolchain. Follow the procedures in this topic to complete the Tanzu RabbitMQ for Kubernetes installation.
Important: If you plan to deploy Tanzu RabbitMQ for Kubernetes to a Kubernetes cluster on an air-gapped network, you must first follow the instructions in the kapp-controller docs to relocate the package rabbitmq-kubernetes.packages.broadcom.com/tanzu-rabbitmq-package-repo to your internal registry. In the following instructions, use the URL of your registry wherever rabbitmq-kubernetes.packages.broadcom.com is referenced.
Follow these steps to complete the installation:
At this point, your Tanzu RabbitMQ for Kubernetes installation should be complete. Optionally, you can now complete the following post-installation tasks:
As per the RabbitMQ clustering guide, the number of nodes in the cluster should be an odd number so that a tie cannot occur, with a minimum of 3 nodes.
For quorum queues, you must ensure that a majority of the nodes are running at any point in time. For example:
Scenario 1: If at most one node can fail at a time, a three-node cluster is sufficient: with one node down, the remaining two nodes still form a majority (2 of 3).
Scenario 2: If two nodes can fail at the same time, then a five-node cluster is required, because the remaining three nodes still form a majority (3 of 5).
Each Kubernetes node must be in a different availability zone (in a cloud) or rack to minimize the risk of multiple nodes failing at the same time.
For high availability, it is essential to spread the brokers evenly across nodes so that no Kubernetes node hosts more than one broker. This is also known as using maxSkew: 1. For more information, refer to the Kubernetes documentation demonstrating it in use.
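As a sketch, a maxSkew: 1 topology spread constraint can be applied through the Cluster Operator's StatefulSet override. The cluster name and label selector below are hypothetical; adjust them to your environment:

```yaml
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: my-tanzu-rabbit   # hypothetical name
spec:
  replicas: 3
  override:
    statefulSet:
      spec:
        template:
          spec:
            topologySpreadConstraints:
            - maxSkew: 1                               # at most one extra broker per node
              topologyKey: kubernetes.io/hostname      # spread across Kubernetes nodes
              whenUnsatisfiable: DoNotSchedule
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: my-tanzu-rabbit
```

Using topologyKey: topology.kubernetes.io/zone instead spreads brokers across availability zones rather than individual nodes.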
The minimum number of cores per broker is 2:

spec:
  resources:
    requests:
      cpu: 2
    limits:
      cpu: 2

Depending on the workload, additional cores might be required, because additional CPUs allow more queues to run in parallel.
For more information, refer to the RabbitMQ Deployment Guidelines.
Complete the following steps to access the login credentials and docker images for your release.
Before you install VMware Tanzu RabbitMQ for Kubernetes, ensure you have:

- kubectl installed. You can get kubectl for your system by downloading it here.
- The kapp-controller and secretgen-controller controllers installed on your Kubernetes cluster. After installation, you can verify that Tanzu Cluster Essentials is installed correctly by checking that the relevant pods are running. Run this command:

$ kubectl get all -n kapp-controller && kubectl get all -n secretgen-controller

You should see that the STATUS is Running for the kapp-controller and secretgen-controller pods.
You must export an imagePullSecret using secretgen-controller so that the components of the package can pull from the registry containing the Tanzu RabbitMQ package.
To do this, complete the following steps:
Create a Secret object on your cluster with type: kubernetes.io/dockerconfigjson, if it does not already exist. This secret contains the authenticated credentials for the registry that holds the Tanzu RabbitMQ package. It is highly recommended that these credentials provide only read-only access to the registry.
The following is an example of the yaml manifest that you can use to create the Secret object. Remember, the Secret type must be set to kubernetes.io/dockerconfigjson.
---
apiVersion: v1
kind: Secret
metadata:
  name: tanzu-rabbitmq-registry-creds # could be any name
  namespace: secrets-ns # could be any namespace
type: kubernetes.io/dockerconfigjson # needs to be this type
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "rabbitmq-kubernetes.packages.broadcom.com": { # update to your own registry url if the package is placed in another location
          "username": "user...",
          "password": "password...",
          "auth": ""
        }
      }
    }
Deploy the Secret object by running the following command. Replace <secret_filename> with the name of the Secret object yaml file that you created above.
kubectl apply -f <secret_filename>.yml
Alternatively, you can create and deploy the Secret object directly with kubectl:
kubectl create secret docker-registry tanzu-rabbitmq-registry-creds -n secrets-ns --docker-server "rabbitmq-kubernetes.packages.broadcom.com" --docker-username "user." --docker-password "password..."
Create a SecretExport object on your cluster if it does not already exist. The SecretExport object makes the imagePullSecret available to any namespace where Tanzu RabbitMQ is being installed.
The following is an example of the yaml manifest that you can use to create the SecretExport object.
---
apiVersion: secretgen.carvel.dev/v1alpha1
kind: SecretExport
metadata:
  name: tanzu-rabbitmq-registry-creds # must match source secret name
  namespace: secrets-ns # must match source secret namespace
spec:
  toNamespaces:
  - "*" # star means export is available for all namespaces
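If you prefer not to export the secret to every namespace, a SecretExport can list specific target namespaces instead of "*". The namespace names below are hypothetical:

```yaml
spec:
  toNamespaces:
  - rabbitmq-system   # namespace where the operators run
  - my-namespace      # namespace where RabbitmqClusters are created
```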
Deploy the SecretExport object by running the following command. Replace <secretexport_filename> with the name of the SecretExport object yaml file that you created above.
kubectl apply -f <secretexport_filename>.yml
You must install the Tanzu RabbitMQ PackageRepository to provide the versioned Tanzu RabbitMQ packages to your cluster. To do this, complete the following steps:
Create a PackageRepository object yaml file with the following contents. Replace BUNDLE_VERSION below with the Tanzu RabbitMQ release version, and replace TAG below with the variant that you are installing. The available TAG variants are:

- -arm64 for ARM64
- -arm64-fips for ARM64 with FIPS enabled
- -fips for AMD64 with FIPS enabled

There is no TAG for AMD64; the image for AMD64 is rabbitmq-kubernetes.packages.broadcom.com/tanzu-rabbitmq-package-repo:${BUNDLE_VERSION}
Finally, replace rabbitmq-kubernetes.packages.broadcom.com with your own registry url if the package is placed in another location.
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
  name: tanzu-rabbitmq-repo
spec:
  fetch:
    imgpkgBundle:
      image: rabbitmq-kubernetes.packages.broadcom.com/tanzu-rabbitmq-package-repo:${BUNDLE_VERSION}{-TAG}
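For example, assuming you are installing release version 1.3.0 of the ARM64 variant (version used here for illustration only), the image reference would be:

```yaml
      image: rabbitmq-kubernetes.packages.broadcom.com/tanzu-rabbitmq-package-repo:1.3.0-arm64
```

For plain AMD64 the tag suffix is omitted entirely, leaving just the version: ...tanzu-rabbitmq-package-repo:1.3.0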
Deploy the PackageRepository object by running the following command. Replace <PackageRepository_object_filename> with the name of the PackageRepository yaml file that you used when you created the object.
kapp deploy -a tanzu-rabbitmq-repo -f <PackageRepository_object_filename>.yml -y
After the PackageRepository object is deployed, verify that the expected packages are visible to your client by running the following command:
$ kubectl get packages
NAME                                               PACKAGEMETADATA NAME                     VERSION     AGE
auditlogger.rabbitmq.tanzu.vmware.com.0.4.0        auditlogger.rabbitmq.tanzu.vmware.com    0.4.0       42s
cert-manager.rabbitmq.tanzu.vmware.com.1.5.3+rmq   cert-manager.rabbitmq.tanzu.vmware.com   1.5.3+rmq   42s
rabbitmq.tanzu.vmware.com.1.2.0                    rabbitmq.tanzu.vmware.com                1.2.0       42s
rabbitmq.tanzu.vmware.com.1.2.1                    rabbitmq.tanzu.vmware.com                1.2.1       42s
rabbitmq.tanzu.vmware.com.1.2.2                    rabbitmq.tanzu.vmware.com                1.2.2       42s
rabbitmq.tanzu.vmware.com.1.3.0                    rabbitmq.tanzu.vmware.com                1.3.0       42s
You need a ServiceAccount during the Tanzu RabbitMQ installation to create the resources provided by Tanzu RabbitMQ. This service account creates cluster-scoped objects such as CustomResourceDefinitions, so it must have the correct permissions to create objects in any namespace.
For a list of the permissions that are required by this service account, and the yaml syntax that you need to create it, see Service Account Permissions.
Take note of the name of this service account; you need it in later steps of the installation.
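As a minimal sketch only — the Service Account Permissions page linked above lists the exact, least-privilege rules — a service account bound to a broad cluster role could look like the following. The names and the use of cluster-admin here are placeholders, not a recommendation:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tanzu            # hypothetical name; referenced later as ${SERVICE_ACCOUNT}
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tanzu-rabbitmq-install
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin    # overly broad; replace with the documented least-privilege role
subjects:
- kind: ServiceAccount
  name: tanzu
  namespace: default
```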
cert-manager v1.5.0 or higher is required for Tanzu RabbitMQ for Kubernetes. It must be installed on your cluster before you can install the Tanzu RabbitMQ package later.
Important: If cert-manager is already installed on your Kubernetes cluster, you can skip this section. To check whether cert-manager is installed, search for installed cert-manager api-resources. For example:
$ kubectl api-resources | grep 'cert-manager'
challenges                         acme.cert-manager.io/v1   true    Challenge
orders                             acme.cert-manager.io/v1   true    Order
certificaterequests   cr,crs       cert-manager.io/v1        true    CertificateRequest
certificates          cert,certs   cert-manager.io/v1        true    Certificate
clusterissuers                     cert-manager.io/v1        false   ClusterIssuer
issuers                            cert-manager.io/v1        true    Issuer
Install cert-manager using one of the options below depending on the type of network your cluster is running in.
If your cluster has access to the public internet, install cert-manager from its GitHub release by running the following command:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.5.3/cert-manager.yaml
cert-manager is included as an ancillary package in the Tanzu RabbitMQ for Kubernetes PackageRepository, therefore it is available even on airgapped networks. To install cert-manager from this PackageRepository within your network, complete the following steps:
Create a PackageInstall object for cert-manager with the following yaml manifest. Replace ${SERVICE_ACCOUNT} with the name of the ServiceAccount that you created previously in Create a ServiceAccount.
---
apiVersion: v1
kind: Secret
metadata:
  name: cert-manager-overlay
stringData:
  add-placeholder-secret.yml: |
    apiVersion: v1
    kind: Secret
    metadata:
      name: tanzu-rabbitmq-registry-creds
      annotations:
        secretgen.carvel.dev/image-pull-secret: ""
    type: kubernetes.io/dockerconfigjson
    data:
      .dockerconfigjson: e30K
  add-placeholder-namespace.yml: |
    #@ load("@ytt:data", "data")
    #@ load("@ytt:overlay", "overlay")
    #@overlay/match by=overlay.subset({"kind":"Secret","metadata": {"name":"tanzu-rabbitmq-registry-creds"}}),expects="1+"
    ---
    metadata:
      #@overlay/match missing_ok=True
      namespace: #@ data.values.namespace
  add-image-pull-secret.yml: |
    #@ load("@ytt:overlay", "overlay")
    #@overlay/match by=overlay.subset({"kind":"Deployment"}),expects="1+"
    ---
    spec:
      template:
        spec:
          #@overlay/match missing_ok=True
          imagePullSecrets:
          - name: tanzu-rabbitmq-registry-creds
---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: cert-manager-rabbitmq
  annotations:
    ext.packaging.carvel.dev/ytt-paths-from-secret-name.0: cert-manager-overlay
spec:
  serviceAccountName: ${SERVICE_ACCOUNT} # Replace with service account name
  packageRef:
    refName: cert-manager.rabbitmq.tanzu.vmware.com
    versionSelection:
      constraints: 1.5.3+rmq
Deploy cert-manager by running the following command. Replace <certmanagerpackageinstall_filename> with the name of the cert-manager PackageInstall object yaml file that you used when you created the object.
kapp deploy -a cert-manager-rabbitmq -f <certmanagerpackageinstall_filename>.yml -y
You can now install the Tanzu RabbitMQ package to install the Tanzu RabbitMQ Cluster Operator, Message Topology Operator, and Standby Replication Operator on your cluster. To do this, complete the following steps:
Create the PackageInstall object using the following manifest. In your PackageInstall object yaml file, replace ${SERVICE_ACCOUNT} with the name of your ServiceAccount and ${BUNDLE_VERSION} with the version of Tanzu RabbitMQ you are installing.
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: tanzu-rabbitmq
spec:
  serviceAccountName: ${SERVICE_ACCOUNT} # Replace with service account name
  packageRef:
    refName: rabbitmq.tanzu.vmware.com
    versionSelection:
      constraints: ${BUNDLE_VERSION} # Replace with release version
Deploy the PackageInstall object by running the following command. Replace <PackageInstall_object_filename> with the name of the PackageInstall yaml file that you used when you created the object.
kapp deploy -a tanzu-rabbitmq -f <PackageInstall_object_filename>.yml -y
By default, the operators are installed in the rabbitmq-system namespace. If you want to change this, pass your own namespace name to the PackageInstall as a value via a secret, as outlined below. This installs the operators in the provided namespace; however, the operators still watch all namespaces for RabbitmqCluster objects to reconcile.
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: tanzu-rabbitmq
spec:
  serviceAccountName: ${SERVICE_ACCOUNT} # Replace with service account name
  packageRef:
    refName: rabbitmq.tanzu.vmware.com
    versionSelection:
      constraints: ${BUNDLE_VERSION} # Replace with release version
  values:
  - secretRef:
      name: tanzu-rabbitmq-values
---
apiVersion: v1
kind: Secret
metadata:
  name: tanzu-rabbitmq-values
stringData:
  values.yml: |
    ---
    namespace: ${NAMESPACE} # Replace with the target install namespace
At this point you may choose to set up observability on your cluster. To do so, you can follow the instructions in the Cluster Operator repo or the RabbitMQ website.
Once this installation is complete, you can create your RabbitMQ objects with the Carvel tooling. Because the RabbitMQ container image also requires authentication, you must provide the same imagePullSecrets as earlier.
If you are creating RabbitmqClusters in the same namespace where the operators were installed, simply add the existing secret:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: my-tanzu-rabbit
  namespace: rabbitmq-system
spec:
  replicas: 1
  imagePullSecrets:
  - name: tanzu-rabbitmq-registry-creds
Important: If you are creating clusters in a different namespace, you must create a placeholder Secret, and direct the RabbitmqCluster to use it:
---
apiVersion: v1
kind: Secret
metadata:
  name: tanzu-rabbitmq-registry-creds
  namespace: my-namespace
  annotations:
    secretgen.carvel.dev/image-pull-secret: ""
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: e30K
---
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: my-tanzu-rabbit
  namespace: my-namespace
spec:
  replicas: 1
  imagePullSecrets:
  - name: tanzu-rabbitmq-registry-creds
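As an aside, the placeholder value e30K used in these manifests is simply the base64 encoding of an empty JSON object {} followed by a newline, which you can verify with:

```shell
# Encode an empty JSON object; the result matches the e30K placeholder above.
printf '{}\n' | base64
```

secretgen-controller later overwrites this empty payload with the real registry credentials.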
Because we created a SecretExport object earlier, this placeholder secret is populated automatically by secretgen-controller, and the RabbitmqCluster can authenticate to the registry using this Secret.
If the RabbitmqClusters that are managed by the Messaging Topology and Standby Replication Operators are configured to serve management over HTTPS with self-signed certificates, these operators must trust the Certificate Authority (CA) that signed the TLS certificates used by the RabbitmqClusters.
One or more trusted certificates must be mounted as volumes into the trust store of the operator Pods, in the /etc/ssl/certs/ directory.
To set up the operators to trust the Certificate Authority (CA) that signed the TLS certificates used by the RabbitmqClusters, complete the following steps:
Create a Kubernetes Secret containing the certificate of the CA that signed the RabbitMQ server's certificate:
kubectl -n rabbitmq-system create secret generic rabbitmq-ca --from-file=ca.crt=$CA_PATH
Apply a PackageInstall extension that mounts the Secret to both the Messaging Topology and Standby Replication Operators in the correct path. You can refer to the following yaml as an example.
---
apiVersion: v1
kind: Secret
metadata:
  name: resource-overlay
  namespace: # same namespace as the rabbitmq PackageInstall
stringData:
  overlay.yaml: |
    #@ load("@ytt:overlay", "overlay")
    #@overlay/match by=overlay.subset({"kind":"Deployment", "metadata": {"name": "messaging-topology-operator"}}),expects=1
    ---
    spec:
      template:
        spec:
          containers:
          #@overlay/match by=overlay.subset({"name": "manager"}),expects=1
          - volumeMounts:
            #@overlay/append
            - mountPath: /etc/ssl/certs/rabbitmq-ca.crt
              name: rabbitmq-ca
              subPath: ca.crt
          volumes:
          #@overlay/append
          - name: rabbitmq-ca
            secret:
              defaultMode: 420
              secretName: rabbitmq-ca
    #@overlay/match by=overlay.subset({"kind":"Deployment", "metadata": {"name": "standby-replication-operator"}}),expects=1
    ---
    spec:
      template:
        spec:
          containers:
          #@overlay/match by=overlay.subset({"name": "manager"}),expects=1
          - volumeMounts:
            #@overlay/append
            - mountPath: /etc/ssl/certs/rabbitmq-ca.crt
              name: rabbitmq-ca
              subPath: ca.crt
          volumes:
          #@overlay/append
          - name: rabbitmq-ca
            secret:
              defaultMode: 420
              secretName: rabbitmq-ca
Apply the PackageInstall extension above by running:
kubectl apply -f resource-overlay.yaml
Update the annotations of the PackageInstall that you used to install the operators to reference the resource-overlay
secret. You can refer to the following yaml as an example.
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: tanzu-rabbitmq
  annotations:
    ext.packaging.carvel.dev/ytt-paths-from-secret-name.0: resource-overlay
spec:
  serviceAccountName: tanzu
  packageRef:
    refName: rabbitmq.tanzu.vmware.com
    versionSelection:
      constraints: 1.2.1
Update the PackageInstall by running the following command. Replace <PackageInstall_object_filename> with the name of the PackageInstall yaml file:
kapp deploy -a tanzu-rabbitmq -f <PackageInstall_object_filename>.yml -y
The operators are now set up to trust the Certificate Authority (CA) that signed the TLS certificates used by the RabbitmqClusters.
Starting in VMware Tanzu RabbitMQ for Kubernetes version 1.4.1, a new audit-logger package is included. You can optionally install this package to audit events that are triggered by RabbitMQ clusters deployed on the Kubernetes cluster.
Prerequisite: The audit-logger package is based on the rabbitmq_event_exchange plugin. This plugin must be deployed on your RabbitMQ cluster before you can use the audit-logger package. For example, the event exchange plugin can be deployed on your RabbitMQ cluster like this:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: my-tanzu-rabbit
  namespace: my-namespace
spec:
  replicas: 1
  imagePullSecrets:
  - name: tanzu-rabbitmq-registry-creds
  rabbitmq:
    additionalPlugins:
    - rabbitmq_event_exchange
Refer to the event exchange plugin documentation for more details on the events that are produced and can be logged.
audit-logger is included in the Tanzu RabbitMQ for Kubernetes PackageRepository. To install audit-logger from this PackageRepository within your network, complete the following steps:
Because the audit-logger container image requires authentication, you must provide the same imagePullSecrets as earlier in the installation namespace.
Create the PackageInstall object using the following manifest. In your PackageInstall object yaml file, replace ${SERVICE_ACCOUNT} with the name of your ServiceAccount and ${AUDIT_LOGGER_VERSION} with the audit-logger version. The current audit-logger version is 0.4.0.
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: audit-logger
spec:
  serviceAccountName: ${SERVICE_ACCOUNT} # Replace with service account name
  packageRef:
    refName: auditlogger.rabbitmq.tanzu.vmware.com
    versionSelection:
      constraints: ${AUDIT_LOGGER_VERSION} # Replace with audit-logger version
Deploy the PackageInstall object by running the following command. Replace <PackageInstall_object_filename> with the name of the PackageInstall yaml file that you used when you created the object.

kapp deploy -a rabbitmq-audit-logger -f <PackageInstall_object_filename>.yml -y
By default, the audit-logger package is installed in the rabbitmq-system namespace. If you want to change the installation namespace, specify a tanzu-audit-logger-values secret in the PackageInstall object like this:
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: audit-logger
spec:
  serviceAccountName: ${SERVICE_ACCOUNT} # Replace with service account name
  packageRef:
    refName: auditlogger.rabbitmq.tanzu.vmware.com
    versionSelection:
      constraints: ${AUDIT_LOGGER_VERSION} # Replace with audit-logger version
  values:
  - secretRef:
      name: tanzu-audit-logger-values
tanzu-audit-logger-values is a secret containing two values: namespace and audityml. namespace is the namespace where the audit-logger package is installed. audityml is a property of the secret that includes the configuration options; it is passed to the audit-logger package during installation. If audityml is not specified, audit-logger uses default values.
To configure the audit-logger using the tanzu-audit-logger-values secret, complete the following steps. Note: these instructions use values-properties.yml as the file name for example purposes; you can choose your own file name. The following is just an example of the audit-logger configuration in the values-properties.yml file; the full list of configuration options is in Configuration Options for audit-logger.
In the values-properties.yml file, include the audit-logger configuration:

namespace: rabbitmq-system
audityml: "eventFilter: \"queue.*\" \ntargets: \n kubernetesServiceDiscovery:\n - namespace: \"rabbitmq-system\""
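For readability, the escaped audityml string above unfolds to the following YAML once the \n and \" escapes are applied (indentation normalized):

```yaml
eventFilter: "queue.*"
targets:
  kubernetesServiceDiscovery:
  - namespace: "rabbitmq-system"
```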
kubectl create secret generic tanzu-audit-logger-values --from-file=values.yml=values-properties.yml
The secret tanzu-audit-logger-values contains a values.yml field with the namespace and audityml values.
Check the logs of the audit-logger pod to view the event logs that are produced by the audit-logger:

kubectl logs audit-logger-56c456c89b-jlv6f -n rabbitmq-system
The complete list of configuration options for audit-logger is:
# Defines the filter of events to audit on the RabbitMQ cluster.
# The full list of events are defined in the plugin: https://github.com/rabbitmq/rabbitmq-server/tree/main/deps/rabbitmq_event_exchange#events
# It is possible to use asterisks to match groups of events, e.g. queue.* for all queue events.
eventFilter: "*.*"

# Name of the queue to declare in order to consume audit logs.
# If this queue already exists, it will be used to consume events.
queue: rabbitmq-audit-logger-internal

# List of targets from which to consume audit logs.
targets:
  # List of static endpoints from which to consume audit events.
  static:
  # URI of the RabbitMQ instance to connect to.
  - uri: "amqp://guest:guest@localhost:5672/"
  # Configuration to dynamically discover RabbitmqClusters from the Kubernetes API, and collect logs from them automatically,
  # when the audit logger is running as a Pod in a Kubernetes Cluster.
  # Note that the Pod must be bound to a ServiceAccount with permissions to access RabbitmqClusters and Secrets at a
  # Cluster scope.
  kubernetesServiceDiscovery:
  # Namespace of RabbitmqCluster objects to collect logs from. If unset, the audit logger will collect logs from
  # all RabbitmqCluster objects on the Kubernetes cluster, and will need cluster-scope RBAC permissions to function.
  - namespace: ""

# Configuration for the output of collected audit logs.
output:
  console:
    # Whether to log audited events to stdout.
    enabled: true
    # Format log events are written to stdout.
    # Possible values: human, json
    format: json
  file:
    # Whether to log audited events to disk.
    enabled: false
    # Path of the logfile to be written to disk. If it does not exist, it will be created.
    path: /var/log/rabbitmq-audit/audit.log
    # Format log events are written to disk.
    # Possible values: human, json
    format: json

retry:
  # The interval to wait before attempting to reconnect to RabbitMQ in the event of a connection failure.
  # Must be a valid time string, valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h"
  reconnection: 5s
  # The interval to wait before attempting to consume audit events from RabbitMQ in the event of a AMQP channel error.
  # Must be a valid time string, valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h"
  consume: 2s
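For instance, a sketch of a minimal configuration that audits only queue events and also writes JSON logs to disk could look like this (values are illustrative, not defaults):

```yaml
eventFilter: "queue.*"                      # only queue events, e.g. queue.created, queue.deleted
targets:
  kubernetesServiceDiscovery:
  - namespace: "rabbitmq-system"            # watch a single namespace
output:
  console:
    enabled: true
    format: human                           # easier to read interactively
  file:
    enabled: true                           # also persist events
    path: /var/log/rabbitmq-audit/audit.log
    format: json
```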
Follow these instructions to set up and configure Warm Standby Replication.
To uninstall Tanzu RabbitMQ for Kubernetes, delete the Tanzu RabbitMQ package, which in turn deletes the Tanzu RabbitMQ Cluster Operator, Message Topology Operator, and Standby Replication Operator from your cluster. Run the following command. Replace <object> with the name of the kapp application under which the PackageInstall object was deployed, which is tanzu-rabbitmq in this documentation.

kapp delete -a <object>