etcd is a distributed key-value store designed to securely store data across a cluster. etcd is widely used in production on account of its reliability, fault-tolerance and ease of use.
Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/etcd
```

Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository.
This chart bootstraps an etcd deployment on a Kubernetes cluster using the Helm package manager.
Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.
To install the chart with the release name `my-release`:

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/etcd
```

Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
These commands deploy etcd on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.
Tip: List all releases using `helm list`.
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the `resources` value (check the parameter table). Setting requests is essential for production workloads, and they should be adapted to your specific use case.

To make this process easier, the chart contains the `resourcesPreset` value, which automatically sets the `resources` section according to different presets. Check these presets in the bitnami/common chart. However, using `resourcesPreset` in production workloads is discouraged, as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
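For example, a minimal `values.yaml` sketch (the sizes below are placeholders, not recommendations; adapt them to your workload):

```yaml
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```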
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.
The Bitnami etcd chart can be used to bootstrap an etcd cluster, easy to scale and with available features to implement disaster recovery. It uses static discovery configured via environment variables to bootstrap the etcd cluster. Based on the number of initial replicas, and using the A records added to the DNS configuration by the headless service, the chart can calculate every advertised peer URL.
The chart makes use of some extra elements offered by Kubernetes to ensure the bootstrapping is successful:
Learn more about etcd discovery, Pod Management Policies and recording “not ready” pods.
Here is an example of the environment configuration bootstrapping an etcd cluster with 3 replicas:
| Member | Variable | Value |
|--------|----------|-------|
| 0 | ETCD_NAME | etcd-0 |
| 0 | ETCD_INITIAL_ADVERTISE_PEER_URLS | http://etcd-0.etcd-headless.default.svc.cluster.local:2380 |
| 1 | ETCD_NAME | etcd-1 |
| 1 | ETCD_INITIAL_ADVERTISE_PEER_URLS | http://etcd-1.etcd-headless.default.svc.cluster.local:2380 |
| 2 | ETCD_NAME | etcd-2 |
| 2 | ETCD_INITIAL_ADVERTISE_PEER_URLS | http://etcd-2.etcd-headless.default.svc.cluster.local:2380 |
| * | ETCD_INITIAL_CLUSTER_STATE | new |
| * | ETCD_INITIAL_CLUSTER_TOKEN | etcd-cluster-k8s |
| * | ETCD_INITIAL_CLUSTER | etcd-0=http://etcd-0.etcd-headless.default.svc.cluster.local:2380,etcd-1=http://etcd-1.etcd-headless.default.svc.cluster.local:2380,etcd-2=http://etcd-2.etcd-headless.default.svc.cluster.local:2380 |
The probes (readiness & liveness) are delayed 60 seconds by default, to give the etcd replicas time to start and find each other. After that period, the etcdctl endpoint health command is used to periodically perform health checks on every replica.
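If your replicas need more (or less) time to become healthy, these initial delays can be tuned through the chart values; a minimal sketch (the parameter names match the tables below, the values are placeholders):

```yaml
livenessProbe:
  initialDelaySeconds: 120
readinessProbe:
  initialDelaySeconds: 120
```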
The Bitnami etcd chart uses etcd reconfiguration operations to add/remove members of the cluster during scaling.

When scaling down, a “pre-stop” lifecycle hook is used to ensure that the `etcdctl member remove` command is executed. The hook stores the output of this command in the persistent volume attached to the etcd pod. This hook is also executed when the pod is manually removed using the `kubectl delete pod` command or rescheduled by Kubernetes for any reason. This implies that the cluster can be scaled up/down without human intervention.
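For instance, assuming a release named `my-release`, the cluster could be scaled from three to five members with either of the commands below (a sketch; the actual StatefulSet name depends on your installation):

```console
# Scale the StatefulSet directly
kubectl scale statefulset my-release --replicas=5

# Or do it through Helm
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/etcd --set replicaCount=5
```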
Here is an example to explain how this works:
If, for whatever reason, the “pre-stop” hook fails at removing the member, the initialization logic is able to detect that something went wrong by checking the `etcdctl member remove` command output that was stored in the persistent volume. It then uses the `etcdctl member update` command to add back the member. In this case, the cluster isn’t automatically scaled down/up while the pod is recovered. Therefore, when other members attempt to connect to the pod, it may cause warnings or errors like the ones below:

```console
E | rafthttp: failed to dial XXXXXXXX on stream Message (peer XXXXXXXX failed to find local node YYYYYYYYY)
I | rafthttp: peer XXXXXXXX became inactive (message send to peer failed)
W | rafthttp: health check for peer XXXXXXXX could not connect: dial tcp A.B.C.D:2380: i/o timeout
```
Learn more about etcd runtime configuration and how to safely drain a Kubernetes node.
When updating the etcd StatefulSet (such as when upgrading the chart version via the helm upgrade command), every pod must be replaced following the StatefulSet update strategy.
The chart uses a “RollingUpdate” strategy by default and with default Kubernetes values. In other words, it updates each Pod, one at a time, in the same order as Pod termination (from the largest ordinal to the smallest). It will wait until an updated Pod is “Running” and “Ready” prior to updating its predecessor.
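For example, if you prefer to replace pods manually during maintenance windows, the strategy can be switched to `OnDelete`; a minimal values sketch using the `updateStrategy.type` parameter from the tables below:

```yaml
updateStrategy:
  type: OnDelete
```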
Learn more about StatefulSet update strategies.
If, for whatever reason, more than (N-1)/2 members of the cluster fail and the “pre-stop” hooks also fail at removing them from the cluster, the cluster disastrously fails, irrevocably losing quorum. Once quorum is lost, the cluster cannot reach consensus and therefore cannot continue accepting updates. Under this circumstance, the only possible solution is usually to restore the cluster from a snapshot.
IMPORTANT: All members should restore using the same snapshot.
The Bitnami etcd chart solves this problem by optionally offering a Kubernetes cron job that periodically snapshots the keyspace and stores it in a RWX volume. In case the cluster disastrously fails, the pods will automatically try to restore it using the last available snapshot.
Learn how to enable this disaster recovery feature.
The chart also sets by default a “soft” Pod AntiAffinity to reduce the risk of the cluster failing disastrously.
Learn more about etcd recovery, Kubernetes cron jobs and pod affinity and anti-affinity.
The etcd chart can be configured with Role-based access control and TLS encryption to improve its security.
In order to enable Role-Based Access Control for etcd, set the following parameters:
```console
auth.rbac.create=true
auth.rbac.rootPassword=ETCD_ROOT_PASSWORD
```
These parameters create a `root` user with an associated `root` role with access to everything. The remaining users will use the `guest` role and won’t have permissions to do anything.
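Once RBAC is enabled, clients must authenticate to operate on the keyspace. As an illustration (a sketch; assumes the default client port and that `ETCD_ROOT_PASSWORD` holds the configured root password):

```console
etcdctl --user root:$ETCD_ROOT_PASSWORD endpoint health
etcdctl --user root:$ETCD_ROOT_PASSWORD put /message Hello
```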
In order to enable secure transport between peer nodes, deploy the Helm chart with these options:

```console
auth.peer.secureTransport=true
auth.peer.useAutoTLS=true
```
In order to enable secure transport between client and server, create a secret containing the certificate and key files and the CA used to sign the client certificates. In this case, create the secret and then deploy the chart with these options:
```console
auth.client.secureTransport=true
auth.client.enableAuthentication=true
auth.client.existingSecret=etcd-client-certs
```
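As a sketch, the secret referenced by `auth.client.existingSecret` could be created like this (the file names are illustrative; they must match `auth.client.certFilename`, `auth.client.certKeyFilename` and `auth.client.caFilename`):

```console
kubectl create secret generic etcd-client-certs \
  --from-file=ca.crt=./ca.crt \
  --from-file=cert.pem=./cert.pem \
  --from-file=key.pem=./key.pem
```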
Learn more about the etcd security model and how to generate self-signed certificates for etcd.
The Bitnami etcd Helm chart supports automatic disaster recovery by periodically snapshotting the keyspace. If the cluster permanently loses more than (N-1)/2 members, it tries to recover the cluster from a previous snapshot.
Enable this feature with the following parameters:
```console
persistence.enabled=true
disasterRecovery.enabled=true
disasterRecovery.pvc.size=2Gi
disasterRecovery.pvc.storageClassName=nfs
```
If the `startFromSnapshot.*` parameters are used at the same time as the `disasterRecovery.*` parameters, the PVC provided via the `startFromSnapshot.existingClaim` parameter will be used to store the periodical snapshots.
NOTE: The disaster recovery feature requires volumes with ReadWriteMany access mode.
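For instance, a minimal values sketch combining periodic snapshots with snapshot-based initialization (the claim name and snapshot filename are placeholders):

```yaml
persistence:
  enabled: true
disasterRecovery:
  enabled: true
startFromSnapshot:
  enabled: true
  existingClaim: my-claim          # placeholder PVC (must support ReadWriteMany)
  snapshotFilename: my-snapshot.db # placeholder snapshot file
```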
Two different approaches are available to back up and restore this Helm Chart:
This method involves the following steps:
NOTE: Under this approach, it is important to create the new deployment on the destination cluster using the same credentials as the original deployment on the source cluster.
This method involves copying the persistent data volumes for the etcd nodes and reusing them in a new deployment with Velero, an open source Kubernetes backup/restore tool. This method is only suitable when:
This method involves the following steps:
etcd exposes metrics that can be scraped by Prometheus. Metrics can be scraped from within the cluster, for example by adding the Prometheus scrape annotations to the etcd pods:

```yaml
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics/cluster"
  prometheus.io/port: "9000"
```
If metrics are to be scraped from outside the cluster, the Kubernetes API proxy can be utilized to access the endpoint.
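As a sketch of the proxy approach (the pod name, namespace and port are assumptions; etcd serves its metrics on the client port unless `metrics.useSeparateEndpoint` is enabled):

```console
# Open a local proxy to the Kubernetes API server
kubectl proxy --port=8001

# Fetch the metrics of pod my-release-0 in the default namespace through the proxy
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-release-0:2379/proxy/metrics
```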
In order to use custom configuration parameters, two options are available:

- Environment variables: use the `extraEnvVars` property to pass custom environment variables to the etcd containers. Alternatively, you can use a ConfigMap or a Secret with the environment variables using the `extraEnvVarsCM` or the `extraEnvVarsSecret` properties.

```yaml
extraEnvVars:
  - name: ETCD_AUTO_COMPACTION_RETENTION
    value: "0"
  - name: ETCD_HEARTBEAT_INTERVAL
    value: "150"
```

- `etcd.conf.yml`: the etcd chart allows mounting a custom `etcd.conf.yml` file as a ConfigMap. In order to do so, you can use the `configuration` property. Alternatively, you can use an existing ConfigMap via the `existingConfigmap` parameter.

Since etcd keeps an exact history of its keyspace, this history should be periodically compacted to avoid performance degradation and eventual storage space exhaustion. Compacting the keyspace history drops all information about keys superseded prior to a given keyspace revision. The space used by these keys then becomes available for additional writes to the keyspace.
Auto compaction is controlled by two parameters:

- `autoCompactionMode`: auto compaction mode, by default `periodic`. Valid values: `periodic`, `revision`.
- `autoCompactionRetention`: auto compaction retention for the MVCC key-value store, in hours, by default `0` (disabled).

You can enable auto compaction by using the following parameters:

```console
autoCompactionMode=periodic
autoCompactionRetention=10m
```
If you have a need for additional containers to run within the same pod as the etcd app (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.

```yaml
sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
Similarly, you can add extra init containers using the `initContainers` parameter.

```yaml
initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
There are cases where you may want to deploy extra objects, such as a ConfigMap containing your app’s configuration, or an extra deployment running a microservice used by your app. To cover this case, the chart allows adding the full specification of other objects using the `extraDeploy` parameter.
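A minimal sketch of this parameter (the ConfigMap name and contents are illustrative):

```yaml
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-app-config # illustrative name
    data:
      app.properties: |
        key=value
```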
This chart allows you to set your custom affinity using the `affinity` parameter. Find more information about Pod affinity in the Kubernetes documentation.

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the `podAffinityPreset`, `podAntiAffinityPreset`, or `nodeAffinityPreset` parameters.
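For example, to spread replicas across nodes with a hard anti-affinity while softly preferring certain zones (the label key and values are placeholders):

```yaml
podAntiAffinityPreset: hard
nodeAffinityPreset:
  type: soft
  key: topology.kubernetes.io/zone
  values:
    - us-east-1a
    - us-east-1b
```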
The Bitnami etcd image stores the etcd data at the `/bitnami/etcd` path of the container. Persistent Volume Claims are used to keep the data across restarts of the StatefulSet’s pods.

The chart mounts a Persistent Volume at this location. The volume is created using dynamic volume provisioning by default. An existing PersistentVolumeClaim can also be defined for this purpose.
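A minimal persistence sketch (the storage class is a placeholder and must exist in your cluster):

```yaml
persistence:
  enabled: true
  storageClass: my-storage-class # placeholder
  size: 16Gi
```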
If you encounter errors when working with persistent volumes, refer to our troubleshooting guide for persistent volumes.
| Name | Description | Value |
|------|-------------|-------|
| `global.imageRegistry` | Global Docker image registry | `""` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
| `global.defaultStorageClass` | Global default StorageClass for Persistent Volume(s) | `""` |
| `global.storageClass` | DEPRECATED: use `global.defaultStorageClass` instead | `""` |
| `global.compatibility.openshift.adaptSecurityContext` | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | `auto` |
| Name | Description | Value |
|------|-------------|-------|
| `kubeVersion` | Force target Kubernetes version (using Helm capabilities if not set) | `""` |
| `nameOverride` | String to partially override common.names.fullname template (will maintain the release name) | `""` |
| `fullnameOverride` | String to fully override common.names.fullname template | `""` |
| `commonLabels` | Labels to add to all deployed objects | `{}` |
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` |
| `clusterDomain` | Default Kubernetes cluster domain | `cluster.local` |
| `extraDeploy` | Array of extra objects to deploy with the release | `[]` |
| `diagnosticMode.enabled` | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | `false` |
| `diagnosticMode.command` | Command to override all containers in the deployment | `["sleep"]` |
| `diagnosticMode.args` | Args to override all containers in the deployment | `["infinity"]` |
| Name | Description | Value |
|------|-------------|-------|
| `image.registry` | etcd image registry | `REGISTRY_NAME` |
| `image.repository` | etcd image name | `REPOSITORY_NAME/etcd` |
| `image.digest` | etcd image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag | `""` |
| `image.pullPolicy` | etcd image pull policy | `IfNotPresent` |
| `image.pullSecrets` | etcd image pull secrets | `[]` |
| `image.debug` | Enable image debug mode | `false` |
| `auth.rbac.create` | Switch to enable RBAC authentication | `true` |
| `auth.rbac.allowNoneAuthentication` | Allow to use etcd without configuring RBAC authentication | `true` |
| `auth.rbac.rootPassword` | Root user password. The root user is always `root` | `""` |
| `auth.rbac.existingSecret` | Name of the existing secret containing credentials for the root user | `""` |
| `auth.rbac.existingSecretPasswordKey` | Name of key containing password to be retrieved from the existing secret | `""` |
| `auth.token.enabled` | Enables token authentication | `true` |
| `auth.token.type` | Authentication token type. Allowed values: `simple` or `jwt` | `jwt` |
| `auth.token.privateKey.filename` | Name of the file containing the private key for signing the JWT token | `jwt-token.pem` |
| `auth.token.privateKey.existingSecret` | Name of the existing secret containing the private key for signing the JWT token | `""` |
| `auth.token.signMethod` | JWT token sign method | `RS256` |
| `auth.token.ttl` | JWT token TTL | `10m` |
| `auth.client.secureTransport` | Switch to encrypt client-to-server communications using TLS certificates | `false` |
| `auth.client.useAutoTLS` | Switch to automatically create the TLS certificates | `false` |
| `auth.client.existingSecret` | Name of the existing secret containing the TLS certificates for client-to-server communications | `""` |
| `auth.client.enableAuthentication` | Switch to enable host authentication using TLS certificates. Requires existing secret | `false` |
| `auth.client.certFilename` | Name of the file containing the client certificate | `cert.pem` |
| `auth.client.certKeyFilename` | Name of the file containing the client certificate private key | `key.pem` |
| `auth.client.caFilename` | Name of the file containing the client CA certificate | `""` |
| `auth.peer.secureTransport` | Switch to encrypt server-to-server communications using TLS certificates | `false` |
| `auth.peer.useAutoTLS` | Switch to automatically create the TLS certificates | `false` |
| `auth.peer.existingSecret` | Name of the existing secret containing the TLS certificates for server-to-server communications | `""` |
| `auth.peer.enableAuthentication` | Switch to enable host authentication using TLS certificates. Requires existing secret | `false` |
| `auth.peer.certFilename` | Name of the file containing the peer certificate | `cert.pem` |
| `auth.peer.certKeyFilename` | Name of the file containing the peer certificate private key | `key.pem` |
| `auth.peer.caFilename` | Name of the file containing the peer CA certificate | `""` |
| `autoCompactionMode` | Auto compaction mode, by default periodic. Valid values: `periodic`, `revision` | `""` |
| `autoCompactionRetention` | Auto compaction retention for mvcc key value store in hour, by default 0, means disabled | `""` |
| `initialClusterState` | Initial cluster state. Allowed values: `new` or `existing` | `""` |
| `initialClusterToken` | Initial cluster token. Can be used to protect etcd from cross-cluster-interaction, which might corrupt the clusters | `etcd-cluster-k8s` |
| `logLevel` | Sets the log level for the etcd process. Allowed values: `debug`, `info`, `warn`, `error`, `panic`, `fatal` | `info` |
| `maxProcs` | Limits the number of operating system threads that can execute user-level Go code simultaneously | `""` |
| `removeMemberOnContainerTermination` | Use a PreStop hook to remove the etcd members from the etcd cluster on container termination | `true` |
| `configuration` | etcd configuration. Specify content for etcd.conf.yml | `""` |
| `existingConfigmap` | Existing ConfigMap with etcd configuration | `""` |
| `extraEnvVars` | Extra environment variables to be set on etcd container | `[]` |
| `extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars | `""` |
| `extraEnvVarsSecret` | Name of existing Secret containing extra env vars | `""` |
| `command` | Default container command (useful when using custom images) | `[]` |
| `args` | Default container args (useful when using custom images) | `[]` |
| Name | Description | Value |
|------|-------------|-------|
| `replicaCount` | Number of etcd replicas to deploy | `1` |
| `updateStrategy.type` | Update strategy type, can be set to RollingUpdate or OnDelete | `RollingUpdate` |
| `podManagementPolicy` | Pod management policy for the etcd statefulset | `Parallel` |
| `automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `hostAliases` | etcd pod host aliases | `[]` |
| `lifecycleHooks` | Override default etcd container hooks | `{}` |
| `containerPorts.client` | Client port to expose at container level | `2379` |
| `containerPorts.peer` | Peer port to expose at container level | `2380` |
| `containerPorts.metrics` | Metrics port to expose at container level when `metrics.useSeparateEndpoint` is true | `9090` |
| `podSecurityContext.enabled` | Enabled etcd pods’ Security Context | `true` |
| `podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `podSecurityContext.fsGroup` | Set etcd pod’s Security Context fsGroup | `1001` |
| `containerSecurityContext.enabled` | Enabled etcd containers’ Security Context | `true` |
| `containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `containerSecurityContext.runAsUser` | Set etcd containers’ Security Context runAsUser | `1001` |
| `containerSecurityContext.runAsGroup` | Set etcd containers’ Security Context runAsGroup | `1001` |
| `containerSecurityContext.runAsNonRoot` | Set Controller container’s Security Context runAsNonRoot | `true` |
| `containerSecurityContext.privileged` | Set primary container’s Security Context privileged | `false` |
| `containerSecurityContext.allowPrivilegeEscalation` | Set primary container’s Security Context allowPrivilegeEscalation | `false` |
| `containerSecurityContext.readOnlyRootFilesystem` | Set container’s Security Context readOnlyRootFilesystem | `true` |
| `containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `containerSecurityContext.seccompProfile.type` | Set container’s Security Context seccomp profile | `RuntimeDefault` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production) | `micro` |
| `resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `livenessProbe.enabled` | Enable livenessProbe | `true` |
| `livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `60` |
| `livenessProbe.periodSeconds` | Period seconds for livenessProbe | `30` |
| `livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `5` |
| `livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `readinessProbe.enabled` | Enable readinessProbe | `true` |
| `readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `60` |
| `readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `5` |
| `readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `startupProbe.enabled` | Enable startupProbe | `false` |
| `startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `0` |
| `startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `startupProbe.failureThreshold` | Failure threshold for startupProbe | `60` |
| `startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `customLivenessProbe` | Override default liveness probe | `{}` |
| `customReadinessProbe` | Override default readiness probe | `{}` |
| `customStartupProbe` | Override default startup probe | `{}` |
| `extraVolumes` | Optionally specify extra list of additional volumes for etcd pods | `[]` |
| `extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for etcd container(s) | `[]` |
| `extraVolumeClaimTemplates` | Optionally specify extra list of additional volumeClaimTemplates for etcd container(s) | `[]` |
| `initContainers` | Add additional init containers to the etcd pods | `[]` |
| `sidecars` | Add additional sidecar containers to the etcd pods | `[]` |
| `podAnnotations` | Annotations for etcd pods | `{}` |
| `podLabels` | Extra labels for etcd pods | `{}` |
| `podAffinityPreset` | Pod affinity preset. Ignored if affinity is set. Allowed values: `soft` or `hard` | `""` |
| `podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if affinity is set. Allowed values: `soft` or `hard` | `soft` |
| `nodeAffinityPreset.type` | Node affinity preset type. Ignored if affinity is set. Allowed values: `soft` or `hard` | `""` |
| `nodeAffinityPreset.key` | Node label key to match. Ignored if affinity is set | `""` |
| `nodeAffinityPreset.values` | Node label values to match. Ignored if affinity is set | `[]` |
| `affinity` | Affinity for pod assignment | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Tolerations for pod assignment | `[]` |
| `terminationGracePeriodSeconds` | Seconds the pod needs to gracefully terminate | `""` |
| `schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `priorityClassName` | Name of the priority class to be used by etcd pods | `""` |
| `runtimeClassName` | Name of the runtime class to be used by pod(s) | `""` |
| `shareProcessNamespace` | Enable shared process namespace in a pod | `false` |
| `topologySpreadConstraints` | Topology Spread Constraints for pod assignment | `[]` |
| `persistentVolumeClaimRetentionPolicy.enabled` | Controls if and how PVCs are deleted during the lifecycle of a StatefulSet | `false` |
| `persistentVolumeClaimRetentionPolicy.whenScaled` | Volume retention behavior when the replica count of the StatefulSet is reduced | `Retain` |
| `persistentVolumeClaimRetentionPolicy.whenDeleted` | Volume retention behavior that applies when the StatefulSet is deleted | `Retain` |
| Name | Description | Value |
|------|-------------|-------|
| `service.type` | Kubernetes Service type | `ClusterIP` |
| `service.enabled` | Create a second service if set to true | `true` |
| `service.clusterIP` | Kubernetes service Cluster IP | `""` |
| `service.ports.client` | etcd client port | `2379` |
| `service.ports.peer` | etcd peer port | `2380` |
| `service.ports.metrics` | etcd metrics port when `metrics.useSeparateEndpoint` is true | `9090` |
| `service.nodePorts.client` | Specify the nodePort client value for the LoadBalancer and NodePort service types | `""` |
| `service.nodePorts.peer` | Specify the nodePort peer value for the LoadBalancer and NodePort service types | `""` |
| `service.nodePorts.metrics` | Specify the nodePort metrics value for the LoadBalancer and NodePort service types. The metrics port is only exposed when `metrics.useSeparateEndpoint` is true | `""` |
| `service.clientPortNameOverride` | etcd client port name override | `""` |
| `service.peerPortNameOverride` | etcd peer port name override | `""` |
| `service.metricsPortNameOverride` | etcd metrics port name override. The metrics port is only exposed when `metrics.useSeparateEndpoint` is true | `""` |
| `service.loadBalancerIP` | loadBalancerIP for the etcd service (optional, cloud specific) | `""` |
| `service.loadBalancerSourceRanges` | Load Balancer source ranges | `[]` |
| `service.externalIPs` | External IPs | `[]` |
| `service.externalTrafficPolicy` | etcd service external traffic policy | `Cluster` |
| `service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `service.annotations` | Additional annotations for the etcd service | `{}` |
| `service.sessionAffinity` | Session Affinity for Kubernetes service, can be `None` or `ClientIP` | `None` |
| `service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `service.headless.annotations` | Annotations for the headless service | `{}` |
| Name | Description | Value |
|------|-------------|-------|
| `persistence.enabled` | If true, use a Persistent Volume Claim. If false, use emptyDir | `true` |
| `persistence.storageClass` | Persistent Volume Storage Class | `""` |
| `persistence.annotations` | Annotations for the PVC | `{}` |
| `persistence.labels` | Labels for the PVC | `{}` |
| `persistence.accessModes` | Persistent Volume Access Modes | `["ReadWriteOnce"]` |
| `persistence.size` | PVC Storage Request for etcd data volume | `8Gi` |
| `persistence.selector` | Selector to match an existing Persistent Volume | `{}` |
| Name | Description | Value |
|------|-------------|-------|
| `volumePermissions.enabled` | Enable init container that changes the owner and group of the persistent volume(s) mountpoint to `runAsUser:fsGroup` | `false` |
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `REGISTRY_NAME` |
| `volumePermissions.image.repository` | Init container volume-permissions image name | `REPOSITORY_NAME/os-shell` |
| `volumePermissions.image.digest` | Init container volume-permissions image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag | `""` |
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `IfNotPresent` |
| `volumePermissions.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `volumePermissions.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if volumePermissions.resources is set (volumePermissions.resources is recommended for production) | `nano` |
| `volumePermissions.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| Name | Description | Value |
|------|-------------|-------|
| `networkPolicy.enabled` | Enable creation of NetworkPolicy resources | `true` |
| `networkPolicy.allowExternal` | Don’t require client label for connections | `true` |
| `networkPolicy.allowExternalEgress` | Allow the pod to access any range of port and all destinations | `true` |
| `networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy | `[]` |
| `networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
| `networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces | `{}` |
| Name | Description | Value |
|------|-------------|-------|
| `metrics.enabled` | Expose etcd metrics | `false` |
| `metrics.useSeparateEndpoint` | Use a separate endpoint for exposing metrics | `false` |
| `metrics.podAnnotations` | Annotations for the Prometheus metrics on etcd pods | `{}` |
| `metrics.podMonitor.enabled` | Create PodMonitor Resource for scraping metrics using PrometheusOperator | `false` |
| `metrics.podMonitor.namespace` | Namespace in which Prometheus is running | `monitoring` |
| `metrics.podMonitor.interval` | Specify the interval at which metrics should be scraped | `30s` |
| `metrics.podMonitor.scrapeTimeout` | Specify the timeout after which the scrape is ended | `30s` |
| `metrics.podMonitor.additionalLabels` | Additional labels that can be used so PodMonitors will be discovered by Prometheus | `{}` |
| `metrics.podMonitor.scheme` | Scheme to use for scraping | `http` |
| `metrics.podMonitor.tlsConfig` | TLS configuration used for scrape endpoints used by Prometheus | `{}` |
| `metrics.podMonitor.relabelings` | Prometheus relabeling rules | `[]` |
| `metrics.prometheusRule.enabled` | Create a Prometheus Operator PrometheusRule (also requires `metrics.enabled` to be true and `metrics.prometheusRule.rules`) | `false` |
| `metrics.prometheusRule.namespace` | Namespace for the PrometheusRule Resource (defaults to the Release Namespace) | `""` |
| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so PrometheusRule will be discovered by Prometheus | `{}` |
| `metrics.prometheusRule.rules` | Prometheus Rule definitions | `[]` |
| Name | Description | Value |
|------|-------------|-------|
| `startFromSnapshot.enabled` | Initialize new cluster recovering an existing snapshot | `false` |
| `startFromSnapshot.existingClaim` | Existing PVC containing the etcd snapshot | `""` |
| `startFromSnapshot.snapshotFilename` | Snapshot filename | `""` |
| `disasterRecovery.enabled` | Enable auto disaster recovery by periodically snapshotting the keyspace | `false` |
| `disasterRecovery.cronjob.schedule` | Schedule in Cron format to save snapshots | `*/30 * * * *` |
| `disasterRecovery.cronjob.historyLimit` | Number of successful finished jobs to retain | `1` |
| `disasterRecovery.cronjob.snapshotHistoryLimit` | Number of etcd snapshots to retain, tagged by date | `1` |
| `disasterRecovery.cronjob.snapshotsDir` | Directory to store snapshots | `/snapshots` |
| `disasterRecovery.cronjob.podAnnotations` | Pod annotations for cronjob pods | `{}` |
| `disasterRecovery.cronjob.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if disasterRecovery.cronjob.resources is set (disasterRecovery.cronjob.resources is recommended for production) | `nano` |
| `disasterRecovery.cronjob.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `disasterRecovery.cronjob.nodeSelector` | Node labels for cronjob pods assignment | `{}` |
| `disasterRecovery.cronjob.tolerations` | Tolerations for cronjob pods assignment | `[]` |
| `disasterRecovery.cronjob.podLabels` | Labels that will be added to pods created by cronjob | `{}` |
| `disasterRecovery.cronjob.serviceAccountName` | Specifies the service account to use for disaster recovery cronjob | `""` |
| `disasterRecovery.pvc.existingClaim` | A manually managed Persistent Volume and Claim | `""` |
| `disasterRecovery.pvc.size` | PVC Storage Request | `2Gi` |
| `disasterRecovery.pvc.storageClassName` | Storage Class for snapshots volume | `nfs` |
| `disasterRecovery.pvc.subPath` | Path within the volume from which to mount | `""` |
| Name | Description | Value |
|------|-------------|-------|
| `serviceAccount.create` | Enable/disable service account creation | `true` |
| `serviceAccount.name` | Name of the service account to create or use | `""` |
| `serviceAccount.automountServiceAccountToken` | Enable/disable auto mounting of service account token | `false` |
| `serviceAccount.annotations` | Additional annotations to be included on the service account | `{}` |
| `serviceAccount.labels` | Additional labels to be included on the service account | `{}` |
| Name | Description | Value |
|------|-------------|-------|
| `pdb.create` | Enable/disable a Pod Disruption Budget creation | `true` |
| `pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `51%` |
| `pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable | `""` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
helm install my-release \
  --set auth.rbac.rootPassword=secretpassword oci://REGISTRY_NAME/REPOSITORY_NAME/etcd
```

Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
The above command sets the etcd `root` account password to `secretpassword`.
NOTE: Once this chart is deployed, it is not possible to change the application’s access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application’s built-in administrative tools if available.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

```console
helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/etcd
```

Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`. Tip: You can use the default `values.yaml`.
Find more information about how to deal with common errors related to Bitnami’s Helm charts in this troubleshooting guide.
This major bump changes the following security defaults:
- `runAsGroup` is changed from `0` to `1001`
- `readOnlyRootFilesystem` is set to `true`
- `resourcesPreset` is changed from `none` to the minimum size working in our test suites (NOTE: `resourcesPreset` is not meant for production usage, but `resources` adapted to your use case)
- `global.compatibility.openshift.adaptSecurityContext` is changed from `disabled` to `auto`
.This could potentially break any customization or init scripts used in your deployment. If this is the case, change the default values to the previous ones.
This version adds a new label `app.kubernetes.io/component=etcd` to the StatefulSet and pods. Due to this change, the StatefulSet will be replaced (as it’s not possible to add additional `spec.selector.matchLabels` to an existing StatefulSet) and the pods will be recreated. To upgrade to this version from a previous version, you need to run the following steps:

1. Add the new label to your pods:

```console
kubectl label pod my-release-0 app.kubernetes.io/component=etcd
# Repeat for all etcd pods, based on the configured .replicaCount (excluding the etcd snapshotter pod, if .disasterRecovery.enabled is set to true)
```

2. Remove the StatefulSet keeping the pods:

```console
kubectl delete statefulset my-release --cascade=orphan
```

3. Upgrade your cluster:

```console
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/etcd --set auth.rbac.rootPassword=$ETCD_ROOT_PASSWORD
```

Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
This version reverts the change in the previous major bump (7.0.0). The default `etcd` branch is now `3.5` again, after the etcd developers confirmed that this version is production-ready once the data corruption issue was solved.
This version changes the default `etcd` branch to `3.4`, as suggested by etcd developers. In order to migrate the data, follow the official etcd instructions.
This version introduces several features and performance improvements:

- The cluster can be scaled up/down using the `kubectl scale` command. Using `helm upgrade` to recalculate available endpoints is no longer needed.
- `etcd.initialClusterState` is renamed to `initialClusterState`.
- `statefulset.replicaCount` is renamed to `replicaCount`.
- `statefulset.podManagementPolicy` is renamed to `podManagementPolicy`.
- `statefulset.updateStrategy` and `statefulset.rollingUpdatePartition` are merged into `updateStrategy`.
- `securityContext.*` is deprecated in favor of `podSecurityContext` and `containerSecurityContext`.
- `configFileConfigMap` is deprecated in favor of `configuration` and `existingConfigmap`.
- `envVarsConfigMap` is deprecated in favor of `extraEnvVars`, `extraEnvVarsCM` and `extraEnvVarsSecret`.
- `allowNoneAuthentication` is renamed to `auth.rbac.allowNoneAuthentication`.
- New `extraDeploy` parameter to deploy any extra desired object.
- New `initContainers` and `sidecars` parameters to define custom init containers and sidecars.
- New `extraVolumes`, `extraVolumeMounts` and `extraVolumeClaimTemplates` parameters to define custom volumes, mount points and volume claim templates.
- Default lifecycle hooks can be overridden via the `lifecycleHooks` parameter.
- The default container command and arguments can be overridden via the `command` and `args` parameters.

Consequences:
This version introduces `bitnami/common`, a library chart, as a dependency. More documentation about this new utility can be found here. Please make sure that you have updated the chart dependencies before executing any upgrade.
On November 13, 2020, Helm v2 support formally ended. This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
In this release we addressed a vulnerability that showed the `ETCD_ROOT_PASSWORD` environment variable in the application logs. Users are advised to update immediately. More information in this issue.
Backwards compatibility is not guaranteed. The following notable changes were included:
To upgrade from previous charts versions, create a snapshot of the keyspace and restore it in a new etcd cluster. Only v3 API data can be restored. You can use the command below to upgrade your chart by starting a new cluster using an existing snapshot, available in an existing PVC, to initialize the members:
```console
helm install new-release oci://REGISTRY_NAME/REPOSITORY_NAME/etcd \
  --set statefulset.replicaCount=3 \
  --set persistence.enabled=true \
  --set persistence.size=8Gi \
  --set startFromSnapshot.enabled=true \
  --set startFromSnapshot.existingClaim=my-claim \
  --set startFromSnapshot.snapshotFilename=my-snapshot.db
```

Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
Backwards compatibility is not guaranteed unless you modify the labels used on the chart’s deployments. Use the workaround below to upgrade from versions previous to 1.0.0. The following example assumes that the release name is etcd:
```console
kubectl delete statefulset etcd --cascade=false
```
Copyright © 2024 Broadcom. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.