Bitnami package for OpenSearch

OpenSearch is a scalable open-source solution for search, analytics, and observability. It features full-text queries, natural language processing, and custom dictionaries, among others.

Overview of OpenSearch

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/opensearch

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository.

Introduction

This chart bootstraps an OpenSearch deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/opensearch

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

This command deploys OpenSearch on the Kubernetes cluster with the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the resources value (check parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.

To make this process easier, the chart contains the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset is discouraged in production workloads as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
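
For example, the snippet below is an illustrative sketch of setting explicit requests and limits for the master-eligible nodes (the same structure applies to the data, coordinating, and ingest sections); the figures are placeholders you should adapt to your workload:

master:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 2Gi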

Rolling VS Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.

Change OpenSearch version

To modify the OpenSearch version used in this chart, specify a valid image tag using the image.tag parameter. For example, image.tag=X.Y.Z. This approach also applies to other images, such as exporters.
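
For example, to pin the release to a specific image tag at install time (X.Y.Z is a placeholder for a valid tag published for the OpenSearch image):

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/opensearch --set image.tag=X.Y.Z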

Default kernel settings

Currently, OpenSearch requires some changes in the kernel of the host machine to work as expected. If those values are not set in the underlying operating system, the OpenSearch containers fail to boot with ERROR messages. More information about these requirements can be found in the official OpenSearch documentation.

This chart uses a privileged initContainer to change those settings in the Kernel by running: sysctl -w vm.max_map_count=262144 && sysctl -w fs.file-max=65536. You can disable the initContainer using the sysctlImage.enabled=false parameter.
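
For example, if the required kernel settings are already applied on the worker nodes (or handled by another mechanism), the privileged init container can be disabled with the following values; in that case, make sure vm.max_map_count and fs.file-max are set on the hosts before deploying:

sysctlImage:
  enabled: false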

Adding extra environment variables

In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the extraEnvVars property.

extraEnvVars:
  - name: OPENSEARCH_VERSION
    value: "7.0"

Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the extraEnvVarsCM or the extraEnvVarsSecret values.
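
As an illustrative sketch (the ConfigMap name and variable below are hypothetical, and the assumption is that the ConfigMap keys are consumed as environment variable names), you could create a ConfigMap and reference it from the chart values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: opensearch-extra-env
data:
  OPENSEARCH_EXTRA_SETTING: "some-value"

Then, in the chart values:

extraEnvVarsCM: opensearch-extra-env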

Using custom init scripts

For advanced operations, the Bitnami OpenSearch chart allows using custom init scripts that will be mounted inside /docker-entrypoint.init-db. You can include the file directly in your values.yaml with initScripts, or use a ConfigMap or a Secret (in case of sensitive data) for mounting these extra scripts. In that case, use the initScriptsCM and initScriptsSecret values.

initScriptsCM=special-scripts
initScriptsSecret=special-scripts-sensitive
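
Alternatively, small scripts can be embedded directly in values.yaml through the initScripts dictionary. The script name and content below are only hypothetical placeholders:

initScripts:
  my_init_script.sh: |
    #!/bin/bash
    echo "Running custom init script"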

Snapshot and restore operations

As described in the official documentation, it is necessary to register a snapshot repository before you can perform snapshot and restore operations.

This chart allows you to configure OpenSearch to use a shared file system to store snapshots. To do so, you need to mount an RWX volume on every OpenSearch node and set the snapshotRepoPath parameter to the path where the volume is mounted. The example below shows the values to set when using an NFS Persistent Volume:

extraVolumes:
  - name: snapshot-repository
    nfs:
      server: nfs.example.com # Please change this to your NFS server
      path: /share1
extraVolumeMounts:
  - name: snapshot-repository
    mountPath: /snapshots
snapshotRepoPath: "/snapshots"

Sidecars and Init Containers

If you need additional containers to run within the same pod as the OpenSearch components (e.g. an additional metrics or logging exporter), you can do so via the XXX.sidecars parameter(s), where XXX is a placeholder you need to replace with the actual component(s). Simply define your container according to the Kubernetes container spec.

sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234

Similarly, you can add extra init containers using the initContainers parameter.

initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234

Setting Pod’s affinity

This chart allows you to set your custom affinity using the XXX.affinity parameter(s). Find more information about Pod affinity in the Kubernetes documentation.

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the XXX.podAffinityPreset, XXX.podAntiAffinityPreset, or XXX.nodeAffinityPreset parameters.
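
For example, the following values are a minimal sketch that uses the anti-affinity preset to spread the master-eligible and data pods across different nodes:

master:
  podAntiAffinityPreset: hard
data:
  podAntiAffinityPreset: hard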

Persistence

The Bitnami OpenSearch image stores the OpenSearch data at the /bitnami/opensearch/data path of the container.

By default, the chart mounts a Persistent Volume at this location. The volume is created using dynamic volume provisioning. See the Parameters section to configure the PVC.

Adjust permissions of persistent volume mountpoint

As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data to it.

By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.

You can enable this initContainer by setting volumePermissions.enabled to true.
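
For example, the values below are an illustrative sketch that enables the permissions init container and overrides the data nodes' volume size (the size is a placeholder):

volumePermissions:
  enabled: true
data:
  persistence:
    size: 20Gi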

Parameters

Global parameters

Name Description Value
global.imageRegistry Global Docker image registry ""
global.imagePullSecrets Global Docker registry secret names as an array []
global.defaultStorageClass Global default StorageClass for Persistent Volume(s) ""
global.storageClass DEPRECATED: use global.defaultStorageClass instead ""
global.compatibility.openshift.adaptSecurityContext Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) auto

Common parameters

Name Description Value
kubeVersion Override Kubernetes version ""
nameOverride String to partially override common.names.fullname ""
fullnameOverride String to fully override common.names.fullname ""
commonLabels Labels to add to all deployed objects {}
commonAnnotations Annotations to add to all deployed objects {}
clusterDomain Kubernetes cluster domain name cluster.local
extraDeploy Array of extra objects to deploy with the release []
namespaceOverride String to fully override common.names.namespace ""
diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden) false
diagnosticMode.command Command to override all containers in the deployment ["sleep"]
diagnosticMode.args Args to override all containers in the deployment ["infinity"]

OpenSearch cluster Parameters

Name Description Value
clusterName OpenSearch cluster name open
containerPorts.restAPI OpenSearch REST API port 9200
containerPorts.transport OpenSearch Transport port 9300
plugins Comma, semi-colon or space separated list of plugins to install at initialization ""
snapshotRepoPath File System snapshot repository path ""
config Override opensearch configuration {}
extraConfig Append extra configuration to the opensearch node configuration {}
extraHosts A list of external hosts which are part of this cluster []
extraVolumes A list of volumes to be added to the pod []
extraVolumeMounts A list of volume mounts to be added to the pod []
initScripts Dictionary of init scripts. Evaluated as a template. {}
initScriptsCM ConfigMap with the init scripts. Evaluated as a template. ""
initScriptsSecret Secret containing /docker-entrypoint-initdb.d scripts to be executed at initialization time that contain sensitive data. Evaluated as a template. ""
extraEnvVars Array containing extra env vars to be added to all pods (evaluated as a template) []
extraEnvVarsCM ConfigMap containing extra env vars to be added to all pods (evaluated as a template) ""
extraEnvVarsSecret Secret containing extra env vars to be added to all pods (evaluated as a template) ""
sidecars Add additional sidecar containers to the all opensearch node pod(s) []
initContainers Add additional init containers to the all opensearch node pod(s) []
useIstioLabels Use this variable to add Istio labels to all pods true
image.registry OpenSearch image registry REGISTRY_NAME
image.repository OpenSearch image repository REPOSITORY_NAME/opensearch
image.digest OpenSearch image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag ""
image.pullPolicy OpenSearch image pull policy IfNotPresent
image.pullSecrets OpenSearch image pull secrets []
image.debug Enable OpenSearch image debug mode false
security.enabled Enable X-Pack Security settings false
security.adminPassword Password for ‘admin’ user ""
security.logstashPassword Password for Logstash ""
security.existingSecret Name of the existing secret containing the OpenSearch password ""
security.fipsMode Configure opensearch with FIPS 140 compliant mode false
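
For reference, the values below are a minimal sketch of enabling the security settings with an inline admin password; the password shown is a placeholder, and for production it is preferable to reference an existing secret through security.existingSecret:

security:
  enabled: true
  adminPassword: "change-me"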

OpenSearch admin parameters

Name Description Value
security.tls.admin.existingSecret Existing secret containing the certificates for admin ""
security.tls.admin.certKey Key containing the crt for admin certificate (defaults to admin.crt) ""
security.tls.admin.keyKey Key containing the key for admin certificate (defaults to admin.key) ""
security.tls.restEncryption Enable SSL/TLS encryption for OpenSearch REST API. false
security.tls.autoGenerated Create self-signed TLS certificates. true
security.tls.verificationMode Verification mode for SSL communications. full
security.tls.master.existingSecret Existing secret containing the certificates for the master nodes ""
security.tls.master.certKey Key containing the crt for master nodes certificate (defaults to tls.crt) ""
security.tls.master.keyKey Key containing the key for master nodes certificate (defaults to tls.key) ""
security.tls.master.caKey Key containing the ca for master nodes certificate (defaults to ca.crt) ""
security.tls.data.existingSecret Existing secret containing the certificates for the data nodes ""
security.tls.data.certKey Key containing the crt for data nodes certificate (defaults to tls.crt) ""
security.tls.data.keyKey Key containing the key for data nodes certificate (defaults to tls.key) ""
security.tls.data.caKey Key containing the ca for data nodes certificate (defaults to ca.crt) ""
security.tls.ingest.existingSecret Existing secret containing the certificates for the ingest nodes ""
security.tls.ingest.certKey Key containing the crt for ingest nodes certificate (defaults to tls.crt) ""
security.tls.ingest.keyKey Key containing the key for ingest nodes certificate (defaults to tls.key) ""
security.tls.ingest.caKey Key containing the ca for ingest nodes certificate (defaults to ca.crt) ""
security.tls.coordinating.existingSecret Existing secret containing the certificates for the coordinating nodes ""
security.tls.coordinating.certKey Key containing the crt for coordinating nodes certificate (defaults to tls.crt) ""
security.tls.coordinating.keyKey Key containing the key for coordinating nodes certificate (defaults to tls.key) ""
security.tls.coordinating.caKey Key containing the ca for coordinating nodes certificate (defaults to ca.crt) ""
security.tls.keystoreFilename Name of the keystore file opensearch.keystore.jks
security.tls.truststoreFilename Name of the truststore opensearch.truststore.jks
security.tls.usePemCerts Use this variable if your secrets contain PEM certificates instead of JKS/PKCS12 false
security.tls.passwordsSecret Existing secret containing the Keystore and Truststore passwords, or key password if PEM certs are used ""
security.tls.keystorePassword Password to access the JKS/PKCS12 keystore or PEM key when they are password-protected. ""
security.tls.truststorePassword Password to access the JKS/PKCS12 truststore when they are password-protected. ""
security.tls.keyPassword Password to access the PEM key when they are password-protected. ""
security.tls.secretKeystoreKey Name of the secret key containing the Keystore password ""
security.tls.secretTruststoreKey Name of the secret key containing the Truststore password ""
security.tls.secretKey Name of the secret key containing the PEM key password ""
security.tls.nodesDN A comma separated list of DN for nodes ""
security.tls.adminDN A comma separated list of DN for admins ""

Traffic Exposure Parameters

Name Description Value
service.type OpenSearch service type ClusterIP
service.ports.restAPI OpenSearch service REST API port 9200
service.ports.transport OpenSearch service transport port 9300
service.nodePorts.restAPI Node port for REST API ""
service.nodePorts.transport Node port for transport ""
service.clusterIP OpenSearch service Cluster IP ""
service.loadBalancerIP OpenSearch service Load Balancer IP ""
service.loadBalancerSourceRanges OpenSearch service Load Balancer sources []
service.externalTrafficPolicy OpenSearch service external traffic policy Cluster
service.annotations Additional custom annotations for OpenSearch service {}
service.extraPorts Extra ports to expose in OpenSearch service (normally used with the sidecars value) []
service.sessionAffinity Session Affinity for Kubernetes service, can be “None” or “ClientIP” None
service.sessionAffinityConfig Additional settings for the sessionAffinity {}
ingress.enabled Enable ingress record generation for OpenSearch false
ingress.pathType Ingress path type ImplementationSpecific
ingress.apiVersion Force Ingress API version (automatically detected if not set) ""
ingress.hostname Default host for the ingress record opensearch.local
ingress.path Default path for the ingress record /
ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. {}
ingress.tls Enable TLS configuration for the host defined at ingress.hostname parameter false
ingress.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm false
ingress.ingressClassName IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) ""
ingress.extraHosts An array with additional hostname(s) to be covered with the ingress record []
ingress.extraPaths An array with additional arbitrary paths that may need to be added to the ingress under the main host []
ingress.extraTls TLS configuration for additional hostname(s) to be covered with this ingress record []
ingress.secrets Custom TLS certificates as secrets []
ingress.extraRules Additional rules to be covered with this ingress record []
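
For example, the values below are an illustrative sketch that exposes the REST API through an Ingress with a Helm-generated self-signed certificate (the hostname is a placeholder):

ingress:
  enabled: true
  hostname: opensearch.example.com
  tls: true
  selfSigned: true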

Master-eligible nodes parameters

Name Description Value
master.masterOnly Deploy the OpenSearch master-eligible nodes as master-only nodes. Recommended for high-demand deployments. true
master.replicaCount Number of master-eligible replicas to deploy 2
master.extraRoles Append extra roles to the node role []
master.pdb.create Enable/disable a Pod Disruption Budget creation true
master.pdb.minAvailable Minimum number/percentage of pods that should remain scheduled ""
master.pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both master.pdb.minAvailable and master.pdb.maxUnavailable are empty. ""
master.nameOverride String to partially override opensearch.master.fullname ""
master.fullnameOverride String to fully override opensearch.master.fullname ""
master.servicenameOverride String to fully override opensearch.master.servicename ""
master.annotations Annotations for the master statefulset {}
master.updateStrategy.type Master-eligible nodes statefulset strategy type RollingUpdate
master.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if master.resources is set (master.resources is recommended for production). small
master.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
master.heapSize OpenSearch master-eligible node heap size. 128m
master.podSecurityContext.enabled Enabled master-eligible pods’ Security Context true
master.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy Always
master.podSecurityContext.sysctls Set kernel settings using the sysctl interface []
master.podSecurityContext.supplementalGroups Set filesystem extra groups []
master.podSecurityContext.fsGroup Set master-eligible pod’s Security Context fsGroup 1001
master.containerSecurityContext.enabled Enabled containers’ Security Context true
master.containerSecurityContext.seLinuxOptions Set SELinux options in container nil
master.containerSecurityContext.runAsUser Set containers’ Security Context runAsUser 1001
master.containerSecurityContext.runAsGroup Set containers’ Security Context runAsGroup 1001
master.containerSecurityContext.runAsNonRoot Set container’s Security Context runAsNonRoot true
master.containerSecurityContext.privileged Set container’s Security Context privileged false
master.containerSecurityContext.readOnlyRootFilesystem Set container’s Security Context readOnlyRootFilesystem true
master.containerSecurityContext.allowPrivilegeEscalation Set container’s Security Context allowPrivilegeEscalation false
master.containerSecurityContext.capabilities.drop List of capabilities to be dropped ["ALL"]
master.containerSecurityContext.seccompProfile.type Set container’s Security Context seccomp profile RuntimeDefault
master.automountServiceAccountToken Mount Service Account token in pod false
master.hostAliases master-eligible pods host aliases []
master.podLabels Extra labels for master-eligible pods {}
master.podAnnotations Annotations for master-eligible pods {}
master.podAffinityPreset Pod affinity preset. Ignored if master.affinity is set. Allowed values: soft or hard ""
master.podAntiAffinityPreset Pod anti-affinity preset. Ignored if master.affinity is set. Allowed values: soft or hard ""
master.nodeAffinityPreset.type Node affinity preset type. Ignored if master.affinity is set. Allowed values: soft or hard ""
master.nodeAffinityPreset.key Node label key to match. Ignored if master.affinity is set ""
master.nodeAffinityPreset.values Node label values to match. Ignored if master.affinity is set []
master.affinity Affinity for master-eligible pods assignment {}
master.nodeSelector Node labels for master-eligible pods assignment {}
master.tolerations Tolerations for master-eligible pods assignment []
master.priorityClassName master-eligible pods’ priorityClassName ""
master.schedulerName Name of the k8s scheduler (other than default) for master-eligible pods ""
master.terminationGracePeriodSeconds Time (in seconds) given to the OpenSearch master-eligible pod to terminate gracefully ""
master.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
master.podManagementPolicy podManagementPolicy to manage scaling operation of OpenSearch master pods Parallel
master.startupProbe.enabled Enable/disable the startup probe (master nodes pod) false
master.startupProbe.initialDelaySeconds Delay before startup probe is initiated (master nodes pod) 90
master.startupProbe.periodSeconds How often to perform the probe (master nodes pod) 10
master.startupProbe.timeoutSeconds When the probe times out (master nodes pod) 5
master.startupProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (master nodes pod) 1
master.startupProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
master.livenessProbe.enabled Enable/disable the liveness probe (master-eligible nodes pod) true
master.livenessProbe.initialDelaySeconds Delay before liveness probe is initiated (master-eligible nodes pod) 90
master.livenessProbe.periodSeconds How often to perform the probe (master-eligible nodes pod) 10
master.livenessProbe.timeoutSeconds When the probe times out (master-eligible nodes pod) 5
master.livenessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (master-eligible nodes pod) 1
master.livenessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
master.readinessProbe.enabled Enable/disable the readiness probe (master-eligible nodes pod) true
master.readinessProbe.initialDelaySeconds Delay before readiness probe is initiated (master-eligible nodes pod) 90
master.readinessProbe.periodSeconds How often to perform the probe (master-eligible nodes pod) 10
master.readinessProbe.timeoutSeconds When the probe times out (master-eligible nodes pod) 5
master.readinessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (master-eligible nodes pod) 1
master.readinessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
master.customStartupProbe Override default startup probe {}
master.customLivenessProbe Override default liveness probe {}
master.customReadinessProbe Override default readiness probe {}
master.command Override default container command (useful when using custom images) []
master.args Override default container args (useful when using custom images) []
master.lifecycleHooks Lifecycle hooks for the master-eligible container(s) to automate configuration before or after startup {}
master.extraEnvVars Array with extra environment variables to add to master-eligible nodes []
master.extraEnvVarsCM Name of existing ConfigMap containing extra env vars for master-eligible nodes ""
master.extraEnvVarsSecret Name of existing Secret containing extra env vars for master-eligible nodes ""
master.extraVolumes Optionally specify extra list of additional volumes for the master-eligible pod(s) []
master.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the master-eligible container(s) []
master.sidecars Add additional sidecar containers to the master-eligible pod(s) []
master.initContainers Add additional init containers to the master-eligible pod(s) []
master.persistence.enabled Enable persistence using a PersistentVolumeClaim true
master.persistence.storageClass Persistent Volume Storage Class ""
master.persistence.existingClaim Existing Persistent Volume Claim ""
master.persistence.existingVolume Existing Persistent Volume for use as volume match label selector to the volumeClaimTemplate. Ignored when master.persistence.selector is set. ""
master.persistence.selector Configure custom selector for existing Persistent Volume. Overwrites master.persistence.existingVolume {}
master.persistence.annotations Persistent Volume Claim annotations {}
master.persistence.accessModes Persistent Volume Access Modes ["ReadWriteOnce"]
master.persistence.size Persistent Volume Size 8Gi
master.serviceAccount.create Specifies whether a ServiceAccount should be created false
master.serviceAccount.name Name of the service account to use. If not set and create is true, a name is generated using the fullname template. ""
master.serviceAccount.automountServiceAccountToken Automount service account token for the server service account false
master.serviceAccount.annotations Annotations for service account. Evaluated as a template. Only used if create is true. {}
master.networkPolicy.enabled Enable creation of NetworkPolicy resources true
master.networkPolicy.allowExternal The Policy model to apply true
master.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
master.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
master.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy []
master.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces {}
master.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces {}
master.autoscaling.vpa.enabled Enable VPA false
master.autoscaling.vpa.annotations Annotations for VPA resource {}
master.autoscaling.vpa.controlledResources VPA List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory []
master.autoscaling.vpa.maxAllowed VPA Max allowed resources for the pod {}
master.autoscaling.vpa.minAllowed VPA Min allowed resources for the pod {}
master.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy. Specifies whether recommended updates are applied when a Pod is started and whether recommended updates are applied during the life of a Pod Auto
master.autoscaling.hpa.enabled Enable HPA for OpenSearch master-eligible nodes false
master.autoscaling.hpa.minReplicas Minimum number of OpenSearch master-eligible replicas 3
master.autoscaling.hpa.maxReplicas Maximum number of OpenSearch master-eligible replicas 11
master.autoscaling.hpa.targetCPU Target CPU utilization percentage ""
master.autoscaling.hpa.targetMemory Target Memory utilization percentage ""
master.metrics.enabled Enable master-eligible node metrics false
master.metrics.service.ports.metrics master-eligible node metrics service port 80
master.metrics.service.clusterIP master-eligible node metrics service Cluster IP ""
master.metrics.serviceMonitor.enabled Create ServiceMonitor Resource for scraping metrics using PrometheusOperator false
master.metrics.serviceMonitor.namespace Namespace which Prometheus is running in ""
master.metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in prometheus. ""
master.metrics.serviceMonitor.interval Interval at which metrics should be scraped 30s
master.metrics.serviceMonitor.scrapeTimeout Timeout after which the scrape is ended 10s
master.metrics.serviceMonitor.relabelings RelabelConfigs to apply to samples before scraping []
master.metrics.serviceMonitor.metricRelabelings MetricRelabelConfigs to apply to samples before ingestion []
master.metrics.serviceMonitor.selector ServiceMonitor selector labels {}
master.metrics.serviceMonitor.honorLabels honorLabels chooses the metric’s labels on collisions with target labels false
master.metrics.rules.enabled Enable render extra rules for PrometheusRule object false
master.metrics.rules.spec Rules to render into the PrometheusRule object []
master.metrics.rules.selector Selector for the PrometheusRule object {}
master.metrics.rules.namespace Namespace where to create the PrometheusRule object monitoring
master.metrics.rules.additionalLabels Additional labels to add to the PrometheusRule object {}

Data-only nodes parameters

Name Description Value
data.replicaCount Number of data-only replicas to deploy 2
data.extraRoles Append extra roles to the node role []
data.pdb.create Enable/disable a Pod Disruption Budget creation true
data.pdb.minAvailable Minimum number/percentage of pods that should remain scheduled ""
data.pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both data.pdb.minAvailable and data.pdb.maxUnavailable are empty. ""
data.nameOverride String to partially override opensearch.data.fullname ""
data.fullnameOverride String to fully override opensearch.data.fullname ""
data.servicenameOverride String to fully override opensearch.data.servicename ""
data.annotations Annotations for the data statefulset {}
data.updateStrategy.type Data-only nodes statefulset strategy type RollingUpdate
data.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if data.resources is set (data.resources is recommended for production). medium
data.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
data.heapSize OpenSearch data node heap size. 1024m
data.podSecurityContext.enabled Enabled data pods’ Security Context true
data.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy Always
data.podSecurityContext.sysctls Set kernel settings using the sysctl interface []
data.podSecurityContext.supplementalGroups Set filesystem extra groups []
data.podSecurityContext.fsGroup Set data pod’s Security Context fsGroup 1001
data.containerSecurityContext.enabled Enabled containers’ Security Context true
data.containerSecurityContext.seLinuxOptions Set SELinux options in container nil
data.containerSecurityContext.runAsUser Set containers’ Security Context runAsUser 1001
data.containerSecurityContext.runAsGroup Set containers’ Security Context runAsGroup 1001
data.containerSecurityContext.runAsNonRoot Set container’s Security Context runAsNonRoot true
data.containerSecurityContext.privileged Set container’s Security Context privileged false
data.containerSecurityContext.readOnlyRootFilesystem Set container’s Security Context readOnlyRootFilesystem true
data.containerSecurityContext.allowPrivilegeEscalation Set container’s Security Context allowPrivilegeEscalation false
data.containerSecurityContext.capabilities.drop List of capabilities to be dropped ["ALL"]
data.containerSecurityContext.seccompProfile.type Set container’s Security Context seccomp profile RuntimeDefault
data.automountServiceAccountToken Mount Service Account token in pod false
data.hostAliases data pods host aliases []
data.podLabels Extra labels for data pods {}
data.podAnnotations Annotations for data pods {}
data.podAffinityPreset Pod affinity preset. Ignored if data.affinity is set. Allowed values: soft or hard ""
data.podAntiAffinityPreset Pod anti-affinity preset. Ignored if data.affinity is set. Allowed values: soft or hard ""
data.nodeAffinityPreset.type Node affinity preset type. Ignored if data.affinity is set. Allowed values: soft or hard ""
data.nodeAffinityPreset.key Node label key to match. Ignored if data.affinity is set ""
data.nodeAffinityPreset.values Node label values to match. Ignored if data.affinity is set []
data.affinity Affinity for data pods assignment {}
data.nodeSelector Node labels for data pods assignment {}
data.tolerations Tolerations for data pods assignment []
data.priorityClassName data pods’ priorityClassName ""
data.schedulerName Name of the k8s scheduler (other than default) for data pods ""
data.terminationGracePeriodSeconds Time (in seconds) given to the OpenSearch data pod to terminate gracefully ""
data.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
data.podManagementPolicy podManagementPolicy to manage scaling operation of OpenSearch data pods Parallel
data.startupProbe.enabled Enable/disable the startup probe (data nodes pod) false
data.startupProbe.initialDelaySeconds Delay before startup probe is initiated (data nodes pod) 90
data.startupProbe.periodSeconds How often to perform the probe (data nodes pod) 10
data.startupProbe.timeoutSeconds When the probe times out (data nodes pod) 5
data.startupProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (data nodes pod) 1
data.startupProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
data.livenessProbe.enabled Enable/disable the liveness probe (data nodes pod) true
data.livenessProbe.initialDelaySeconds Delay before liveness probe is initiated (data nodes pod) 90
data.livenessProbe.periodSeconds How often to perform the probe (data nodes pod) 10
data.livenessProbe.timeoutSeconds When the probe times out (data nodes pod) 5
data.livenessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (data nodes pod) 1
data.livenessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
data.readinessProbe.enabled Enable/disable the readiness probe (data nodes pod) true
data.readinessProbe.initialDelaySeconds Delay before readiness probe is initiated (data nodes pod) 90
data.readinessProbe.periodSeconds How often to perform the probe (data nodes pod) 10
data.readinessProbe.timeoutSeconds When the probe times out (data nodes pod) 5
data.readinessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (data nodes pod) 1
data.readinessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
data.customStartupProbe Override default startup probe {}
data.customLivenessProbe Override default liveness probe {}
data.customReadinessProbe Override default readiness probe {}
data.command Override default container command (useful when using custom images) []
data.args Override default container args (useful when using custom images) []
data.lifecycleHooks Lifecycle hooks for the data container(s) to automate configuration before or after startup {}
data.extraEnvVars Array with extra environment variables to add to data nodes []
data.extraEnvVarsCM Name of existing ConfigMap containing extra env vars for data nodes ""
data.extraEnvVarsSecret Name of existing Secret containing extra env vars for data nodes ""
data.extraVolumes Optionally specify extra list of additional volumes for the data pod(s) []
data.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the data container(s) []
data.sidecars Add additional sidecar containers to the data pod(s) []
data.initContainers Add additional init containers to the data pod(s) []
data.persistence.enabled Enable persistence using a PersistentVolumeClaim true
data.persistence.storageClass Persistent Volume Storage Class ""
data.persistence.existingClaim Existing Persistent Volume Claim ""
data.persistence.existingVolume Existing Persistent Volume for use as volume match label selector to the volumeClaimTemplate. Ignored when data.persistence.selector is set. ""
data.persistence.selector Configure custom selector for existing Persistent Volume. Overwrites data.persistence.existingVolume {}
data.persistence.annotations Persistent Volume Claim annotations {}
data.persistence.accessModes Persistent Volume Access Modes ["ReadWriteOnce"]
data.persistence.size Persistent Volume Size 8Gi
data.serviceAccount.create Specifies whether a ServiceAccount should be created false
data.serviceAccount.name Name of the service account to use. If not set and create is true, a name is generated using the fullname template. ""
data.serviceAccount.automountServiceAccountToken Automount service account token for the server service account false
data.serviceAccount.annotations Annotations for service account. Evaluated as a template. Only used if create is true. {}
data.networkPolicy.enabled Enable creation of NetworkPolicy resources true
data.networkPolicy.allowExternal The Policy model to apply true
data.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
data.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
data.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy []
data.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces {}
data.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces {}
data.autoscaling.vpa.enabled Enable VPA false
data.autoscaling.vpa.annotations Annotations for VPA resource {}
data.autoscaling.vpa.controlledResources VPA List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory []
data.autoscaling.vpa.maxAllowed VPA Max allowed resources for the pod {}
data.autoscaling.vpa.minAllowed VPA Min allowed resources for the pod {}
data.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy. Specifies whether recommended updates are applied when a Pod is started and whether recommended updates are applied during the life of a Pod Auto
data.autoscaling.hpa.enabled Enable HPA for OpenSearch data nodes false
data.autoscaling.hpa.minReplicas Minimum number of OpenSearch data replicas 3
data.autoscaling.hpa.maxReplicas Maximum number of OpenSearch data replicas 11
data.autoscaling.hpa.targetCPU Target CPU utilization percentage ""
data.autoscaling.hpa.targetMemory Target Memory utilization percentage ""
data.metrics.enabled Enable data node metrics false
data.metrics.service.ports.metrics data node metrics service port 80
data.metrics.service.clusterIP data node metrics service Cluster IP ""
data.metrics.serviceMonitor.enabled Create ServiceMonitor Resource for scraping metrics using PrometheusOperator false
data.metrics.serviceMonitor.namespace Namespace which Prometheus is running in ""
data.metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in prometheus. ""
data.metrics.serviceMonitor.interval Interval at which metrics should be scraped 30s
data.metrics.serviceMonitor.scrapeTimeout Timeout after which the scrape is ended 10s
data.metrics.serviceMonitor.relabelings RelabelConfigs to apply to samples before scraping []
data.metrics.serviceMonitor.metricRelabelings MetricRelabelConfigs to apply to samples before ingestion []
data.metrics.serviceMonitor.selector ServiceMonitor selector labels {}
data.metrics.serviceMonitor.honorLabels honorLabels chooses the metric’s labels on collisions with target labels false
data.metrics.rules.enabled Enable render extra rules for PrometheusRule object false
data.metrics.rules.spec Rules to render into the PrometheusRule object []
data.metrics.rules.selector Selector for the PrometheusRule object {}
data.metrics.rules.namespace Namespace where to create the PrometheusRule object monitoring
data.metrics.rules.additionalLabels Additional labels to add to the PrometheusRule object {}

Coordinating-only nodes parameters

Name Description Value
coordinating.replicaCount Number of coordinating-only replicas to deploy 2
coordinating.extraRoles Append extra roles to the node role []
coordinating.pdb.create Enable/disable a Pod Disruption Budget creation true
coordinating.pdb.minAvailable Minimum number/percentage of pods that should remain scheduled ""
coordinating.pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both coordinating.pdb.minAvailable and coordinating.pdb.maxUnavailable are empty. ""
coordinating.nameOverride String to partially override opensearch.coordinating.fullname ""
coordinating.fullnameOverride String to fully override opensearch.coordinating.fullname ""
coordinating.servicenameOverride String to fully override opensearch.coordinating.servicename ""
coordinating.annotations Annotations for the coordinating-only statefulset {}
coordinating.updateStrategy.type Coordinating-only nodes statefulset strategy type RollingUpdate
coordinating.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if coordinating.resources is set (coordinating.resources is recommended for production). small
coordinating.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
coordinating.heapSize OpenSearch coordinating node heap size. 128m
coordinating.podSecurityContext.enabled Enabled coordinating-only pods’ Security Context true
coordinating.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy Always
coordinating.podSecurityContext.sysctls Set kernel settings using the sysctl interface []
coordinating.podSecurityContext.supplementalGroups Set filesystem extra groups []
coordinating.podSecurityContext.fsGroup Set coordinating-only pod’s Security Context fsGroup 1001
coordinating.containerSecurityContext.enabled Enabled containers’ Security Context true
coordinating.containerSecurityContext.seLinuxOptions Set SELinux options in container nil
coordinating.containerSecurityContext.runAsUser Set containers’ Security Context runAsUser 1001
coordinating.containerSecurityContext.runAsGroup Set containers’ Security Context runAsGroup 1001
coordinating.containerSecurityContext.runAsNonRoot Set container’s Security Context runAsNonRoot true
coordinating.containerSecurityContext.privileged Set container’s Security Context privileged false
coordinating.containerSecurityContext.readOnlyRootFilesystem Set container’s Security Context readOnlyRootFilesystem true
coordinating.containerSecurityContext.allowPrivilegeEscalation Set container’s Security Context allowPrivilegeEscalation false
coordinating.containerSecurityContext.capabilities.drop List of capabilities to be dropped ["ALL"]
coordinating.containerSecurityContext.seccompProfile.type Set container’s Security Context seccomp profile RuntimeDefault
coordinating.automountServiceAccountToken Mount Service Account token in pod false
coordinating.hostAliases coordinating-only pods host aliases []
coordinating.podLabels Extra labels for coordinating-only pods {}
coordinating.podAnnotations Annotations for coordinating-only pods {}
coordinating.podAffinityPreset Pod affinity preset. Ignored if coordinating.affinity is set. Allowed values: soft or hard ""
coordinating.podAntiAffinityPreset Pod anti-affinity preset. Ignored if coordinating.affinity is set. Allowed values: soft or hard ""
coordinating.nodeAffinityPreset.type Node affinity preset type. Ignored if coordinating.affinity is set. Allowed values: soft or hard ""
coordinating.nodeAffinityPreset.key Node label key to match. Ignored if coordinating.affinity is set ""
coordinating.nodeAffinityPreset.values Node label values to match. Ignored if coordinating.affinity is set []
coordinating.affinity Affinity for coordinating-only pods assignment {}
coordinating.nodeSelector Node labels for coordinating-only pods assignment {}
coordinating.tolerations Tolerations for coordinating-only pods assignment []
coordinating.priorityClassName coordinating-only pods’ priorityClassName ""
coordinating.schedulerName Name of the k8s scheduler (other than default) for coordinating-only pods ""
coordinating.terminationGracePeriodSeconds Time (in seconds) given to the OpenSearch coordinating pod to terminate gracefully ""
coordinating.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
coordinating.podManagementPolicy podManagementPolicy to manage scaling operation of OpenSearch coordinating pods Parallel
coordinating.startupProbe.enabled Enable/disable the startup probe (coordinating-only nodes pod) false
coordinating.startupProbe.initialDelaySeconds Delay before startup probe is initiated (coordinating-only nodes pod) 90
coordinating.startupProbe.periodSeconds How often to perform the probe (coordinating-only nodes pod) 10
coordinating.startupProbe.timeoutSeconds When the probe times out (coordinating-only nodes pod) 5
coordinating.startupProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (coordinating-only nodes pod) 1
coordinating.startupProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
coordinating.livenessProbe.enabled Enable/disable the liveness probe (coordinating-only nodes pod) true
coordinating.livenessProbe.initialDelaySeconds Delay before liveness probe is initiated (coordinating-only nodes pod) 90
coordinating.livenessProbe.periodSeconds How often to perform the probe (coordinating-only nodes pod) 10
coordinating.livenessProbe.timeoutSeconds When the probe times out (coordinating-only nodes pod) 5
coordinating.livenessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (coordinating-only nodes pod) 1
coordinating.livenessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
coordinating.readinessProbe.enabled Enable/disable the readiness probe (coordinating-only nodes pod) true
coordinating.readinessProbe.initialDelaySeconds Delay before readiness probe is initiated (coordinating-only nodes pod) 90
coordinating.readinessProbe.periodSeconds How often to perform the probe (coordinating-only nodes pod) 10
coordinating.readinessProbe.timeoutSeconds When the probe times out (coordinating-only nodes pod) 5
coordinating.readinessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (coordinating-only nodes pod) 1
coordinating.readinessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
coordinating.customStartupProbe Override default startup probe {}
coordinating.customLivenessProbe Override default liveness probe {}
coordinating.customReadinessProbe Override default readiness probe {}
coordinating.command Override default container command (useful when using custom images) []
coordinating.args Override default container args (useful when using custom images) []
coordinating.lifecycleHooks Lifecycle hooks for the coordinating-only container(s) to automate configuration before or after startup {}
coordinating.extraEnvVars Array with extra environment variables to add to coordinating-only nodes []
coordinating.extraEnvVarsCM Name of existing ConfigMap containing extra env vars for coordinating-only nodes ""
coordinating.extraEnvVarsSecret Name of existing Secret containing extra env vars for coordinating-only nodes ""
coordinating.extraVolumes Optionally specify extra list of additional volumes for the coordinating-only pod(s) []
coordinating.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the coordinating-only container(s) []
coordinating.sidecars Add additional sidecar containers to the coordinating-only pod(s) []
coordinating.initContainers Add additional init containers to the coordinating-only pod(s) []
coordinating.serviceAccount.create Specifies whether a ServiceAccount should be created false
coordinating.serviceAccount.name Name of the service account to use. If not set and create is true, a name is generated using the fullname template. ""
coordinating.serviceAccount.automountServiceAccountToken Automount service account token for the server service account false
coordinating.serviceAccount.annotations Annotations for service account. Evaluated as a template. Only used if create is true. {}
coordinating.networkPolicy.enabled Enable creation of NetworkPolicy resources true
coordinating.networkPolicy.allowExternal The Policy model to apply true
coordinating.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
coordinating.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
coordinating.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy []
coordinating.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces {}
coordinating.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces {}
coordinating.autoscaling.vpa.enabled Enable VPA false
coordinating.autoscaling.vpa.annotations Annotations for VPA resource {}
coordinating.autoscaling.vpa.controlledResources VPA List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory []
coordinating.autoscaling.vpa.maxAllowed VPA Max allowed resources for the pod {}
coordinating.autoscaling.vpa.minAllowed VPA Min allowed resources for the pod {}
coordinating.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy. Specifies whether recommended updates are applied when a Pod is started and whether recommended updates are applied during the life of a Pod Auto
coordinating.autoscaling.hpa.enabled Enable HPA for OpenSearch coordinating-only nodes false
coordinating.autoscaling.hpa.minReplicas Minimum number of OpenSearch coordinating-only replicas 3
coordinating.autoscaling.hpa.maxReplicas Maximum number of OpenSearch coordinating-only replicas 11
coordinating.autoscaling.hpa.targetCPU Target CPU utilization percentage ""
coordinating.autoscaling.hpa.targetMemory Target Memory utilization percentage ""
coordinating.metrics.enabled Enable coordinating node metrics false
coordinating.metrics.service.ports.metrics coordinating node metrics service port 80
coordinating.metrics.service.clusterIP coordinating node metrics service Cluster IP ""
coordinating.metrics.serviceMonitor.enabled Create ServiceMonitor Resource for scraping metrics using PrometheusOperator false
coordinating.metrics.serviceMonitor.namespace Namespace which Prometheus is running in ""
coordinating.metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in prometheus. ""
coordinating.metrics.serviceMonitor.interval Interval at which metrics should be scraped 30s
coordinating.metrics.serviceMonitor.scrapeTimeout Timeout after which the scrape is ended 10s
coordinating.metrics.serviceMonitor.relabelings RelabelConfigs to apply to samples before scraping []
coordinating.metrics.serviceMonitor.metricRelabelings MetricRelabelConfigs to apply to samples before ingestion []
coordinating.metrics.serviceMonitor.selector ServiceMonitor selector labels {}
coordinating.metrics.serviceMonitor.honorLabels honorLabels chooses the metric’s labels on collisions with target labels false
coordinating.metrics.rules.enabled Enable render extra rules for PrometheusRule object false
coordinating.metrics.rules.spec Rules to render into the PrometheusRule object []
coordinating.metrics.rules.selector Selector for the PrometheusRule object {}
coordinating.metrics.rules.namespace Namespace where to create the PrometheusRule object monitoring
coordinating.metrics.rules.additionalLabels Additional labels to add to the PrometheusRule object {}

Ingest-only nodes parameters

Name Description Value
ingest.enabled Enable ingest nodes true
ingest.replicaCount Number of ingest-only replicas to deploy 2
ingest.extraRoles Append extra roles to the node role []
ingest.pdb.create Enable/disable a Pod Disruption Budget creation true
ingest.pdb.minAvailable Minimum number/percentage of pods that should remain scheduled ""
ingest.pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both ingest.pdb.minAvailable and ingest.pdb.maxUnavailable are empty. ""
ingest.nameOverride String to partially override opensearch.ingest.fullname ""
ingest.fullnameOverride String to fully override opensearch.ingest.fullname ""
ingest.servicenameOverride String to fully override opensearch.ingest.servicename ""
ingest.annotations Annotations for the ingest statefulset {}
ingest.updateStrategy.type Ingest-only nodes statefulset strategy type RollingUpdate
ingest.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if ingest.resources is set (ingest.resources is recommended for production). small
ingest.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
ingest.heapSize OpenSearch ingest-only node heap size. 128m
ingest.podSecurityContext.enabled Enabled ingest-only pods’ Security Context true
ingest.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy Always
ingest.podSecurityContext.sysctls Set kernel settings using the sysctl interface []
ingest.podSecurityContext.supplementalGroups Set filesystem extra groups []
ingest.podSecurityContext.fsGroup Set ingest-only pod’s Security Context fsGroup 1001
ingest.containerSecurityContext.enabled Enabled containers’ Security Context true
ingest.containerSecurityContext.seLinuxOptions Set SELinux options in container nil
ingest.containerSecurityContext.runAsUser Set containers’ Security Context runAsUser 1001
ingest.containerSecurityContext.runAsGroup Set containers’ Security Context runAsGroup 1001
ingest.containerSecurityContext.runAsNonRoot Set container’s Security Context runAsNonRoot true
ingest.containerSecurityContext.privileged Set container’s Security Context privileged false
ingest.containerSecurityContext.readOnlyRootFilesystem Set container’s Security Context readOnlyRootFilesystem true
ingest.containerSecurityContext.allowPrivilegeEscalation Set container’s Security Context allowPrivilegeEscalation false
ingest.containerSecurityContext.capabilities.drop List of capabilities to be dropped ["ALL"]
ingest.containerSecurityContext.seccompProfile.type Set container’s Security Context seccomp profile RuntimeDefault
ingest.automountServiceAccountToken Mount Service Account token in pod false
ingest.hostAliases ingest-only pods host aliases []
ingest.podLabels Extra labels for ingest-only pods {}
ingest.podAnnotations Annotations for ingest-only pods {}
ingest.podAffinityPreset Pod affinity preset. Ignored if ingest.affinity is set. Allowed values: soft or hard ""
ingest.podAntiAffinityPreset Pod anti-affinity preset. Ignored if ingest.affinity is set. Allowed values: soft or hard ""
ingest.nodeAffinityPreset.type Node affinity preset type. Ignored if ingest.affinity is set. Allowed values: soft or hard ""
ingest.nodeAffinityPreset.key Node label key to match. Ignored if ingest.affinity is set ""
ingest.nodeAffinityPreset.values Node label values to match. Ignored if ingest.affinity is set []
ingest.affinity Affinity for ingest-only pods assignment {}
ingest.nodeSelector Node labels for ingest-only pods assignment {}
ingest.tolerations Tolerations for ingest-only pods assignment []
ingest.priorityClassName ingest-only pods’ priorityClassName ""
ingest.schedulerName Name of the k8s scheduler (other than default) for ingest-only pods ""
ingest.terminationGracePeriodSeconds Time (in seconds) given to the OpenSearch ingest pod to terminate gracefully ""
ingest.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
ingest.podManagementPolicy podManagementPolicy to manage scaling operation of OpenSearch ingest pods Parallel
ingest.startupProbe.enabled Enable/disable the startup probe (ingest-only nodes pod) false
ingest.startupProbe.initialDelaySeconds Delay before startup probe is initiated (ingest-only nodes pod) 90
ingest.startupProbe.periodSeconds How often to perform the probe (ingest-only nodes pod) 10
ingest.startupProbe.timeoutSeconds When the probe times out (ingest-only nodes pod) 5
ingest.startupProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (ingest-only nodes pod) 1
ingest.startupProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
ingest.livenessProbe.enabled Enable/disable the liveness probe (ingest-only nodes pod) true
ingest.livenessProbe.initialDelaySeconds Delay before liveness probe is initiated (ingest-only nodes pod) 90
ingest.livenessProbe.periodSeconds How often to perform the probe (ingest-only nodes pod) 10
ingest.livenessProbe.timeoutSeconds When the probe times out (ingest-only nodes pod) 5
ingest.livenessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (ingest-only nodes pod) 1
ingest.livenessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
ingest.readinessProbe.enabled Enable/disable the readiness probe (ingest-only nodes pod) true
ingest.readinessProbe.initialDelaySeconds Delay before readiness probe is initiated (ingest-only nodes pod) 90
ingest.readinessProbe.periodSeconds How often to perform the probe (ingest-only nodes pod) 10
ingest.readinessProbe.timeoutSeconds When the probe times out (ingest-only nodes pod) 5
ingest.readinessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (ingest-only nodes pod) 1
ingest.readinessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
ingest.customStartupProbe Override default startup probe {}
ingest.customLivenessProbe Override default liveness probe {}
ingest.customReadinessProbe Override default readiness probe {}
ingest.command Override default container command (useful when using custom images) []
ingest.args Override default container args (useful when using custom images) []
ingest.lifecycleHooks Lifecycle hooks for the ingest-only container(s) to automate configuration before or after startup {}
ingest.extraEnvVars Array with extra environment variables to add to ingest-only nodes []
ingest.extraEnvVarsCM Name of existing ConfigMap containing extra env vars for ingest-only nodes ""
ingest.extraEnvVarsSecret Name of existing Secret containing extra env vars for ingest-only nodes ""
ingest.extraVolumes Optionally specify extra list of additional volumes for the ingest-only pod(s) []
ingest.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the ingest-only container(s) []
ingest.sidecars Add additional sidecar containers to the ingest-only pod(s) []
ingest.initContainers Add additional init containers to the ingest-only pod(s) []
ingest.serviceAccount.create Specifies whether a ServiceAccount should be created false
ingest.serviceAccount.name Name of the service account to use. If not set and create is true, a name is generated using the fullname template. ""
ingest.serviceAccount.automountServiceAccountToken Automount service account token for the server service account false
ingest.serviceAccount.annotations Annotations for service account. Evaluated as a template. Only used if create is true. {}
ingest.networkPolicy.enabled Enable creation of NetworkPolicy resources true
ingest.networkPolicy.allowExternal The Policy model to apply true
ingest.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
ingest.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
ingest.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy []
ingest.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces {}
ingest.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces {}
ingest.autoscaling.vpa.enabled Enable VPA false
ingest.autoscaling.vpa.annotations Annotations for VPA resource {}
ingest.autoscaling.vpa.controlledResources VPA List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory []
ingest.autoscaling.vpa.maxAllowed VPA Max allowed resources for the pod {}
ingest.autoscaling.vpa.minAllowed VPA Min allowed resources for the pod {}
ingest.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy. Specifies whether recommended updates are applied when a Pod is started and whether recommended updates are applied during the life of a Pod Auto
ingest.autoscaling.hpa.enabled Enable HPA for OpenSearch ingest-only nodes false
ingest.autoscaling.hpa.minReplicas Minimum number of OpenSearch ingest-only node replicas 3
ingest.autoscaling.hpa.maxReplicas Maximum number of OpenSearch ingest-only node replicas 11
ingest.autoscaling.hpa.targetCPU Target CPU utilization percentage ""
ingest.autoscaling.hpa.targetMemory Target Memory utilization percentage ""
ingest.service.enabled Enable Ingest-only service false
ingest.service.type OpenSearch ingest-only service type ClusterIP
ingest.service.ports.restAPI OpenSearch service REST API port 9200
ingest.service.ports.transport OpenSearch service transport port 9300
ingest.service.nodePorts.restAPI Node port for REST API ""
ingest.service.nodePorts.transport Node port for transport ""
ingest.service.clusterIP OpenSearch ingest-only service Cluster IP ""
ingest.service.loadBalancerIP OpenSearch ingest-only service Load Balancer IP ""
ingest.service.loadBalancerSourceRanges OpenSearch ingest-only service Load Balancer sources []
ingest.service.externalTrafficPolicy OpenSearch ingest-only service external traffic policy Cluster
ingest.service.extraPorts Extra ports to expose (normally used with the sidecar value) []
ingest.service.annotations Additional custom annotations for OpenSearch ingest-only service {}
ingest.service.sessionAffinity Session Affinity for Kubernetes service, can be “None” or “ClientIP” None
ingest.service.sessionAffinityConfig Additional settings for the sessionAffinity {}
ingest.ingress.enabled Enable ingress record generation for OpenSearch false
ingest.ingress.pathType Ingress path type ImplementationSpecific
ingest.ingress.apiVersion Force Ingress API version (automatically detected if not set) ""
ingest.ingress.hostname Default host for the ingress record opensearch-ingest.local
ingest.ingress.path Default path for the ingress record /
ingest.ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. {}
ingest.ingress.tls Enable TLS configuration for the host defined at ingress.hostname parameter false
ingest.ingress.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm false
ingest.ingress.ingressClassName IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) ""
ingest.ingress.extraHosts An array with additional hostname(s) to be covered with the ingress record []
ingest.ingress.extraPaths An array with additional arbitrary paths that may need to be added to the ingress under the main host []
ingest.ingress.extraTls TLS configuration for additional hostname(s) to be covered with this ingress record []
ingest.ingress.secrets Custom TLS certificates as secrets []
ingest.ingress.extraRules Additional rules to be covered with this ingress record []
ingest.metrics.enabled Enable ingest node metrics false
ingest.metrics.service.ports.metrics ingest node metrics service port 80
ingest.metrics.service.clusterIP ingest node metrics service Cluster IP ""
ingest.metrics.serviceMonitor.enabled Create ServiceMonitor Resource for scraping metrics using PrometheusOperator false
ingest.metrics.serviceMonitor.namespace Namespace which Prometheus is running in ""
ingest.metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in prometheus. ""
ingest.metrics.serviceMonitor.interval Interval at which metrics should be scraped 30s
ingest.metrics.serviceMonitor.scrapeTimeout Timeout after which the scrape is ended 10s
ingest.metrics.serviceMonitor.relabelings RelabelConfigs to apply to samples before scraping []
ingest.metrics.serviceMonitor.metricRelabelings MetricRelabelConfigs to apply to samples before ingestion []
ingest.metrics.serviceMonitor.selector ServiceMonitor selector labels {}
ingest.metrics.serviceMonitor.honorLabels honorLabels chooses the metric’s labels on collisions with target labels false
ingest.metrics.rules.enabled Enable render extra rules for PrometheusRule object false
ingest.metrics.rules.spec Rules to render into the PrometheusRule object []
ingest.metrics.rules.selector Selector for the PrometheusRule object {}
ingest.metrics.rules.namespace Namespace where to create the PrometheusRule object monitoring
ingest.metrics.rules.additionalLabels Additional labels to add to the PrometheusRule object {}
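
As an illustration, the ingest-only tier documented above can be tuned through a values file. The sketch below only uses parameters from the table in this section; the values are placeholders, not sizing recommendations:

ingest:
  heapSize: 256m                     # placeholder heap size for ingest-only nodes
  resources:                         # explicit requests/limits are recommended for production
    requests:
      cpu: 250m
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi
  service:
    enabled: true                    # expose a dedicated ingest-only service
    type: ClusterIP
  metrics:
    enabled: true                    # enable ingest node metrics
    serviceMonitor:
      enabled: true                  # requires the Prometheus Operator CRDs in the cluster
      interval: 30s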

Init Container Parameters

Name Description Value
volumePermissions.enabled Enable init container that changes volume permissions in the data directory (for cases where the default k8s runAsUser and fsGroup values do not work) false
volumePermissions.image.registry Init container volume-permissions image registry REGISTRY_NAME
volumePermissions.image.repository Init container volume-permissions image name REPOSITORY_NAME/os-shell
volumePermissions.image.digest Init container volume-permissions image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag ""
volumePermissions.image.pullPolicy Init container volume-permissions image pull policy IfNotPresent
volumePermissions.image.pullSecrets Init container volume-permissions image pull secrets []
volumePermissions.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if volumePermissions.resources is set (volumePermissions.resources is recommended for production). nano
volumePermissions.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
sysctlImage.enabled Enable kernel settings modifier image true
sysctlImage.registry Kernel settings modifier image registry REGISTRY_NAME
sysctlImage.repository Kernel settings modifier image repository REPOSITORY_NAME/os-shell
sysctlImage.digest Kernel settings modifier image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag ""
sysctlImage.pullPolicy Kernel settings modifier image pull policy IfNotPresent
sysctlImage.pullSecrets Kernel settings modifier image pull secrets []
sysctlImage.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if sysctlImage.resources is set (sysctlImage.resources is recommended for production). nano
sysctlImage.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
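
If the host kernel already provides the settings applied by the kernel settings modifier, that init container can be switched off; conversely, if the storage provisioner does not set the expected ownership on the data directory, the volume-permissions init container can be enabled. A minimal sketch, with illustrative values only:

sysctlImage:
  enabled: false        # skip the privileged kernel-settings init container
volumePermissions:
  enabled: true         # fix data-directory ownership when runAsUser/fsGroup are not honoured
  resourcesPreset: nano # preset only; set volumePermissions.resources explicitly for production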

OpenSearch Dashboards Parameters

Name Description Value
dashboards.enabled Enables OpenSearch Dashboards deployment false
dashboards.image.registry OpenSearch Dashboards image registry REGISTRY_NAME
dashboards.image.repository OpenSearch Dashboards image repository REPOSITORY_NAME/opensearch-dashboards
dashboards.image.digest OpenSearch Dashboards image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag ""
dashboards.image.pullPolicy OpenSearch Dashboards image pull policy IfNotPresent
dashboards.image.pullSecrets OpenSearch Dashboards image pull secrets []
dashboards.image.debug Enable OpenSearch Dashboards image debug mode false
dashboards.service.type OpenSearch Dashboards service type ClusterIP
dashboards.service.ports.http OpenSearch Dashboards service web UI port 5601
dashboards.service.nodePorts.http Node port for web UI ""
dashboards.service.clusterIP OpenSearch Dashboards service Cluster IP ""
dashboards.service.loadBalancerIP OpenSearch Dashboards service Load Balancer IP ""
dashboards.service.loadBalancerSourceRanges OpenSearch Dashboards service Load Balancer sources []
dashboards.service.externalTrafficPolicy OpenSearch Dashboards service external traffic policy Cluster
dashboards.service.annotations Additional custom annotations for OpenSearch Dashboards service {}
dashboards.service.extraPorts Extra ports to expose in OpenSearch Dashboards service (normally used with the sidecars value) []
dashboards.service.sessionAffinity Session Affinity for Kubernetes service, can be “None” or “ClientIP” None
dashboards.service.sessionAffinityConfig Additional settings for the sessionAffinity {}
dashboards.ingress.enabled Enable ingress record generation for OpenSearch Dashboards false
dashboards.ingress.pathType Ingress path type ImplementationSpecific
dashboards.ingress.apiVersion Force Ingress API version (automatically detected if not set) ""
dashboards.ingress.hostname Default host for the ingress record opensearch-dashboards.local
dashboards.ingress.path Default path for the ingress record /
dashboards.ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. {}
dashboards.ingress.tls Enable TLS configuration for the host defined at dashboards.ingress.hostname parameter false
dashboards.ingress.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm false
dashboards.ingress.ingressClassName IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) ""
dashboards.ingress.extraHosts An array with additional hostname(s) to be covered with the ingress record []
dashboards.ingress.extraPaths An array with additional arbitrary paths that may need to be added to the ingress under the main host []
dashboards.ingress.extraTls TLS configuration for additional hostname(s) to be covered with this ingress record []
dashboards.ingress.secrets Custom TLS certificates as secrets []
dashboards.ingress.extraRules Additional rules to be covered with this ingress record []
dashboards.containerPorts.http OpenSearch Dashboards HTTP port 5601
dashboards.password Password for OpenSearch Dashboards ""
dashboards.replicaCount Number of OpenSearch Dashboards replicas to deploy 1
dashboards.pdb.create Enable/disable a Pod Disruption Budget creation true
dashboards.pdb.minAvailable Minimum number/percentage of pods that should remain scheduled ""
dashboards.pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both dashboards.pdb.minAvailable and dashboards.pdb.maxUnavailable are empty. ""
dashboards.nameOverride String to partially override opensearch.dashboards.fullname ""
dashboards.fullnameOverride String to fully override opensearch.dashboards.fullname ""
dashboards.servicenameOverride String to fully override opensearch.dashboards.servicename ""
dashboards.updateStrategy.type OpenSearch Dashboards update strategy type RollingUpdate
dashboards.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if dashboards.resources is set (dashboards.resources is recommended for production). small
dashboards.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads) {}
dashboards.heapSize OpenSearch Dashboards heap size. 1024m
dashboards.podSecurityContext.enabled Enabled OpenSearch Dashboards pods’ Security Context true
dashboards.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy Always
dashboards.podSecurityContext.sysctls Set kernel settings using the sysctl interface []
dashboards.podSecurityContext.supplementalGroups Set filesystem extra groups []
dashboards.podSecurityContext.fsGroup Set dashboards pod’s Security Context fsGroup 1001
dashboards.containerSecurityContext.enabled Enabled containers’ Security Context true
dashboards.containerSecurityContext.seLinuxOptions Set SELinux options in container nil
dashboards.containerSecurityContext.runAsUser Set containers’ Security Context runAsUser 1001
dashboards.containerSecurityContext.runAsGroup Set containers’ Security Context runAsGroup 1001
dashboards.containerSecurityContext.runAsNonRoot Set container’s Security Context runAsNonRoot true
dashboards.containerSecurityContext.privileged Set container’s Security Context privileged false
dashboards.containerSecurityContext.readOnlyRootFilesystem Set container’s Security Context readOnlyRootFilesystem true
dashboards.containerSecurityContext.allowPrivilegeEscalation Set container’s Security Context allowPrivilegeEscalation false
dashboards.containerSecurityContext.capabilities.drop List of capabilities to be dropped ["ALL"]
dashboards.containerSecurityContext.seccompProfile.type Set container’s Security Context seccomp profile RuntimeDefault
dashboards.automountServiceAccountToken Mount Service Account token in pod false
dashboards.hostAliases OpenSearch Dashboards pods host aliases []
dashboards.podLabels Extra labels for OpenSearch Dashboards pods {}
dashboards.podAnnotations Annotations for OpenSearch Dashboards pods {}
dashboards.podAffinityPreset Pod affinity preset. Ignored if dashboards.affinity is set. Allowed values: soft or hard ""
dashboards.podAntiAffinityPreset Pod anti-affinity preset. Ignored if dashboards.affinity is set. Allowed values: soft or hard ""
dashboards.nodeAffinityPreset.type Node affinity preset type. Ignored if dashboards.affinity is set. Allowed values: soft or hard ""
dashboards.nodeAffinityPreset.key Node label key to match. Ignored if dashboards.affinity is set ""
dashboards.nodeAffinityPreset.values Node label values to match. Ignored if dashboards.affinity is set []
dashboards.affinity Affinity for OpenSearch Dashboards pods assignment {}
dashboards.nodeSelector Node labels for OpenSearch Dashboards pods assignment {}
dashboards.tolerations Tolerations for OpenSearch Dashboards pods assignment []
dashboards.priorityClassName OpenSearch Dashboards pods’ priorityClassName ""
dashboards.schedulerName Name of the k8s scheduler (other than default) for OpenSearch Dashboards pods ""
dashboards.terminationGracePeriodSeconds In seconds, the time given to the OpenSearch Dashboards pod to terminate gracefully ""
dashboards.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template []
dashboards.startupProbe.enabled Enable/disable the startup probe (Dashboards pods) false
dashboards.startupProbe.initialDelaySeconds Delay before startup probe is initiated (Dashboards pods) 120
dashboards.startupProbe.periodSeconds How often to perform the probe (Dashboards pods) 10
dashboards.startupProbe.timeoutSeconds When the probe times out (Dashboards pods) 5
dashboards.startupProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (Dashboards pods) 1
dashboards.startupProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
dashboards.livenessProbe.enabled Enable/disable the liveness probe (Dashboards pods) true
dashboards.livenessProbe.initialDelaySeconds Delay before liveness probe is initiated (Dashboards pods) 180
dashboards.livenessProbe.periodSeconds How often to perform the probe (Dashboards pods) 20
dashboards.livenessProbe.timeoutSeconds When the probe times out (Dashboards pods) 5
dashboards.livenessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (Dashboards pods) 1
dashboards.livenessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 8
dashboards.readinessProbe.enabled Enable/disable the readiness probe (Dashboards pods) true
dashboards.readinessProbe.initialDelaySeconds Delay before readiness probe is initiated (Dashboards pods) 120
dashboards.readinessProbe.periodSeconds How often to perform the probe (Dashboards pods) 10
dashboards.readinessProbe.timeoutSeconds When the probe times out (Dashboards pods) 5
dashboards.readinessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed (Dashboards pods) 1
dashboards.readinessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded 5
dashboards.customStartupProbe Override default startup probe {}
dashboards.customLivenessProbe Override default liveness probe {}
dashboards.customReadinessProbe Override default readiness probe {}
dashboards.command Override default container command (useful when using custom images) []
dashboards.args Override default container args (useful when using custom images) []
dashboards.lifecycleHooks Lifecycle hooks for the Dashboards container(s) to automate configuration before or after startup {}
dashboards.extraEnvVars Array with extra environment variables to add to Dashboards pods []
dashboards.extraEnvVarsCM Name of existing ConfigMap containing extra env vars for Dashboards pods ""
dashboards.extraEnvVarsSecret Name of existing Secret containing extra env vars for Dashboards pods ""
dashboards.extraVolumes Optionally specify extra list of additional volumes for the Dashboards pod(s) []
dashboards.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Dashboards container(s) []
dashboards.sidecars Add additional sidecar containers to the Dashboards pod(s) []
dashboards.initContainers Add additional init containers to the Dashboards pod(s) []
dashboards.serviceAccount.create Specifies whether a ServiceAccount should be created false
dashboards.serviceAccount.name Name of the service account to use. If not set and create is true, a name is generated using the fullname template. ""
dashboards.serviceAccount.automountServiceAccountToken Automount service account token for the server service account false
dashboards.serviceAccount.annotations Annotations for service account. Evaluated as a template. Only used if create is true. {}
dashboards.networkPolicy.enabled Enable creation of NetworkPolicy resources true
dashboards.networkPolicy.allowExternal The Policy model to apply true
dashboards.networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations. true
dashboards.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy []
dashboards.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy []
dashboards.networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces {}
dashboards.networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces {}
dashboards.autoscaling.vpa.enabled Enable VPA false
dashboards.autoscaling.vpa.annotations Annotations for VPA resource {}
dashboards.autoscaling.vpa.controlledResources VPA List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory []
dashboards.autoscaling.vpa.maxAllowed VPA Max allowed resources for the pod {}
dashboards.autoscaling.vpa.minAllowed VPA Min allowed resources for the pod {}
dashboards.autoscaling.vpa.updatePolicy.updateMode Autoscaling update policy. Specifies whether recommended updates are applied when a Pod is started and whether recommended updates are applied during the life of a Pod Auto
dashboards.autoscaling.hpa.enabled Enable HPA for OpenSearch Dashboards false
dashboards.autoscaling.hpa.minReplicas Minimum number of OpenSearch Dashboards replicas 3
dashboards.autoscaling.hpa.maxReplicas Maximum number of OpenSearch Dashboards replicas 11
dashboards.autoscaling.hpa.targetCPU Target CPU utilization percentage ""
dashboards.autoscaling.hpa.targetMemory Target Memory utilization percentage ""
dashboards.tls.enabled Enable TLS for OpenSearch Dashboards webserver false
dashboards.tls.existingSecret Existing secret containing the certificates for OpenSearch Dashboards webserver ""
dashboards.tls.autoGenerated Create self-signed TLS certificates. true
dashboards.persistence.enabled Enable persistence using Persistent Volume Claims false
dashboards.persistence.mountPath Path to mount the volume at. /bitnami/opensearch-dashboards
dashboards.persistence.subPath The subdirectory of the volume to mount to, useful in dev environments and one PV for multiple services ""
dashboards.persistence.storageClass Storage class of backing PVC ""
dashboards.persistence.annotations Persistent Volume Claim annotations {}
dashboards.persistence.accessModes Persistent Volume Access Modes ["ReadWriteOnce"]
dashboards.persistence.size Size of data volume 8Gi
dashboards.persistence.existingClaim The name of an existing PVC to use for persistence ""
dashboards.persistence.selector Selector to match an existing Persistent Volume for the OpenSearch Dashboards PVC {}
dashboards.persistence.dataSource Custom PVC data source {}
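
To accompany the table above, here is a minimal sketch that enables OpenSearch Dashboards behind an ingress with a Helm-generated certificate. The hostname and password are placeholders, and real credentials should not be kept in plain values files:

dashboards:
  enabled: true
  password: "change-me"              # placeholder value
  ingress:
    enabled: true
    hostname: dashboards.example.com # placeholder hostname
    tls: true
    selfSigned: true                 # self-signed certificate generated by Helm, for testing only
  persistence:
    enabled: true
    size: 8Gi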

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

helm install my-release \
  --set name=my-open,client.service.port=8080 \
  oci://REGISTRY_NAME/REPOSITORY_NAME/opensearch

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The above command sets the OpenSearch cluster name to my-open and REST port number to 8080.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/opensearch

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts. Tip: You can use the default values.yaml.
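
As an illustration, the same overrides used in the --set example above can be expressed as a values file (the name and client.service.port keys are taken from that command, not from the parameter tables in this section):

name: my-open
client:
  service:
    port: 8080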

Troubleshooting

Find more information about how to deal with common errors related to Bitnami’s Helm charts in this troubleshooting guide.

Upgrading

To 1.0.0

This major bump changes the following security defaults:

  • runAsGroup is changed from 0 to 1001
  • readOnlyRootFilesystem is set to true
  • resourcesPreset is changed from none to the minimum size working in our test suites (NOTE: resourcesPreset is not meant for production usage; set resources adapted to your use case instead).
  • global.compatibility.openshift.adaptSecurityContext is changed from disabled to auto.

This could potentially break any customization or init scripts used in your deployment. If this is the case, change the default values to the previous ones.
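
For example, the previous defaults can be restored explicitly. The sketch below shows the ingest-only role; the other node roles and OpenSearch Dashboards expose the same keys under their own sections, and ingest.resourcesPreset is assumed to sit alongside the ingest.resources parameter documented above:

global:
  compatibility:
    openshift:
      adaptSecurityContext: disabled # previous default
ingest:
  resourcesPreset: none              # previous default; prefer explicit resources in production
  containerSecurityContext:
    runAsGroup: 0                    # previous default
    readOnlyRootFilesystem: false    # previous default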

License

Copyright © 2024 Broadcom. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
