SeaweedFS is a simple and highly scalable distributed file system.
Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/seaweedfs
```

Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository.
Bitnami charts for Helm are carefully engineered, actively maintained and are the quickest and easiest way to deploy containers on a Kubernetes cluster that are ready to handle production workloads.
This chart bootstraps a SeaweedFS deployment in a Kubernetes cluster using the Helm package manager.
Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.
To install the chart with the release name `my-release`:

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/seaweedfs
```

Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
The command deploys SeaweedFS on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.
Tip: List all releases using `helm list`.
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the `resources` value (check parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.

To make this process easier, the chart contains the `resourcesPreset` value, which automatically sets the `resources` section according to different presets. Check these presets in the bitnami/common chart. However, in production workloads using `resourcesPreset` is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
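For instance, explicit requests and limits for the Master Server could be set through a values file as sketched below (the figures are illustrative, not recommendations; adapt them to your workload):

```yaml
master:
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi
```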
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.
You may want to have SeaweedFS Filer Server connect to an external database rather than installing one inside your cluster. Typical reasons for this are to use a managed database service, or to share a common database server for all your applications. To achieve this, the chart allows you to specify credentials for an external database with the `externalDatabase` parameter. You should also disable the MariaDB installation with the `mariadb.enabled` option. Here is an example:

```console
mariadb.enabled=false
externalDatabase.enabled=true
externalDatabase.store=mariadb
externalDatabase.host=myexternalhost
externalDatabase.user=myuser
externalDatabase.password=mypassword
externalDatabase.database=mydatabase
externalDatabase.port=3306
```
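The same settings can also be provided through a values file instead of individual `--set` flags, for example:

```yaml
mariadb:
  enabled: false
externalDatabase:
  enabled: true
  store: mariadb
  host: myexternalhost
  user: myuser
  password: mypassword
  database: mydatabase
  port: 3306
```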
In addition, the `filemeta` table must be created in the external database before starting SeaweedFS.

For MariaDB:

```sql
USE DATABASE_NAME;

CREATE TABLE IF NOT EXISTS filemeta (
  `dirhash`   BIGINT NOT NULL COMMENT 'first 64 bits of MD5 hash value of directory field',
  `name`      VARCHAR(766) NOT NULL COMMENT 'directory or file name',
  `directory` TEXT NOT NULL COMMENT 'full path to parent directory',
  `meta`      LONGBLOB,
  PRIMARY KEY (`dirhash`, `name`)
) DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;
```

For PostgreSQL:

```sql
\c DATABASE_NAME;

CREATE TABLE IF NOT EXISTS filemeta (
  dirhash   BIGINT,
  name      VARCHAR(65535),
  directory VARCHAR(65535),
  meta      bytea,
  PRIMARY KEY (dirhash, name)
);
```
Note: You need to substitute the placeholder `DATABASE_NAME` with the actual database name.
You can also rely on a K8s job to create the table during the Helm chart installation. To do so, set the `externalDatabase.initDatabaseJob.enabled` parameter to `true`.
This chart provides support for Ingress resources. If you have an ingress controller installed on your cluster, such as nginx-ingress-controller or contour, you can utilize the ingress controller to serve your application.
To enable Ingress integration, set `master.ingress.enabled` to `true`. Note that other SeaweedFS components can also be exposed via Ingress by setting the corresponding `ingress.enabled` parameter to `true` (e.g. `s3.ingress.enabled`, `filer.ingress.enabled`, etc.).
The most common scenario is to have one host name mapped to the deployment. In this case, the `master.ingress.hostname` property can be used to set the host name. The `master.ingress.tls` parameter can be used to add the TLS configuration for this host.

However, it is also possible to have more than one host. To facilitate this, the `master.ingress.extraHosts` parameter (if available) can be set with the host names specified as an array. The `master.ingress.extraTls` parameter (if available) can also be used to add the TLS configuration for extra hosts.
NOTE: For each host specified in the `master.ingress.extraHosts` parameter, it is necessary to set a name, path, and any annotations that the Ingress controller should know about. Not all annotations are supported by all Ingress controllers, but this annotation reference document lists the annotations supported by many popular Ingress controllers.
Adding the TLS parameter (where available) will cause the chart to generate HTTPS URLs, and the application will be available on port 443. The actual TLS secrets do not have to be generated by this chart. However, if TLS is enabled, the Ingress record will not work until the TLS secret exists.
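As a sketch, a multi-host Ingress for the Master Server could be expressed with the values below (the host and secret names are placeholders, not defaults of this chart):

```yaml
master:
  ingress:
    enabled: true
    hostname: master.example.com
    tls: true
    extraHosts:
      - name: master-alt.example.com
        path: /
    extraTls:
      - hosts:
          - master-alt.example.com
        secretName: master-alt-tls
```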
Learn more about Ingress controllers.
Security enhancements can be enabled by setting `security.enabled` and `security.mTLS.enabled` to `true`. This enables security features such as JWT signing for volume and filer operations and mTLS for gRPC communications (see the `security.*` parameters in the table below).
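For example, a minimal values snippet enabling these features with Helm-generated certificates (based on the `security.*` parameters listed below) might be:

```yaml
security:
  enabled: true
  mTLS:
    enabled: true
    autoGenerated:
      enabled: true
      engine: helm
```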
You can manually create the required TLS certificates for each SeaweedFS component or rely on the chart's auto-generation capabilities. The chart supports two different ways to auto-generate the required certificates:

- Using Helm: set `security.mTLS.autoGenerated.enabled` to `true` and `security.mTLS.autoGenerated.engine` to `helm`.
- Using cert-manager: set `security.mTLS.autoGenerated.enabled` to `true` and `security.mTLS.autoGenerated.engine` to `cert-manager`. Please note it is also supported to use an existing Issuer/ClusterIssuer for issuing the TLS certificates by setting the `security.mTLS.autoGenerated.certManager.existingIssuer` and `security.mTLS.autoGenerated.certManager.existingIssuerKind` parameters.

Authentication can be enabled in the SeaweedFS S3 API by setting the `s3.auth.enabled` parameter to `true`. You can provide your custom authentication configuration by creating a secret with the configuration and setting the `s3.auth.existingSecret` parameter with the name of the secret. Alternatively, you can rely on the chart to create a basic configuration with two main users: `admin` and `read-only`. You can provide the admin user credentials using the `s3.auth.adminAccessKeyId` and `s3.auth.adminSecretAccessKey` parameters, and the read-only user credentials using the `s3.auth.readAccessKeyId` and `s3.auth.readSecretAccessKey` parameters.
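For instance, a values snippet enabling S3 authentication with the chart-managed users could look like this (the credentials are placeholders):

```yaml
s3:
  auth:
    enabled: true
    adminAccessKeyId: my-admin-access-key
    adminSecretAccessKey: my-admin-secret-key
    readAccessKeyId: my-read-access-key
    readSecretAccessKey: my-read-secret-key
```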
In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the `extraEnvVars` property.
```yaml
master:
  extraEnvVars:
    - name: LOG_LEVEL
      value: error
```
Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the `extraEnvVarsCM` or the `extraEnvVarsSecret` values.
If additional containers are needed in the same pod as SeaweedFS (such as additional metrics or logging exporters), they can be defined using the `sidecars` parameter.
```yaml
sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
If these sidecars export extra ports, extra port definitions can be added using the `service.extraPorts` parameter (where available), as shown in the example below:
```yaml
service:
  extraPorts:
    - name: extraPort
      port: 11311
      targetPort: 11311
```
NOTE: This Helm chart already includes sidecar containers for the Prometheus exporters (where applicable). These can be activated by adding the `--enable-metrics=true` parameter at deployment time. The `sidecars` parameter should therefore only be used for any extra sidecar containers.
If additional init containers are needed in the same pod, they can be defined using the `initContainers` parameter. Here is an example:
```yaml
initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
Learn more about sidecar containers and init containers.
This chart allows you to set your custom affinity using the `affinity` parameter. Find more information about Pod affinity in the Kubernetes documentation.
As an alternative, use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the `podAffinityPreset`, `podAntiAffinityPreset`, or `nodeAffinityPreset` parameters.
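As a sketch, spreading Master Server pods with a hard anti-affinity preset plus a soft node affinity could be expressed as follows (the node label key and values are examples, not chart defaults):

```yaml
master:
  podAntiAffinityPreset: hard
  nodeAffinityPreset:
    type: soft
    key: topology.kubernetes.io/zone
    values:
      - zone-a
      - zone-b
```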
The Bitnami SeaweedFS image stores the data and configurations at the `/bitnami` path of the container. Persistent Volume Claims are used to keep the data across deployments.
If you encounter errors when working with persistent volumes, refer to our troubleshooting guide for persistent volumes.
| Name | Description | Value |
| ---- | ----------- | ----- |
| `global.imageRegistry` | Global Docker image registry | `""` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
| `global.defaultStorageClass` | Global default StorageClass for Persistent Volume(s) | `""` |
| `global.storageClass` | DEPRECATED: use `global.defaultStorageClass` instead | `""` |
| `global.compatibility.openshift.adaptSecurityContext` | Adapt the securityContext sections of the deployment to make them compatible with OpenShift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is OpenShift), force (perform the adaptation always), disabled (do not perform adaptation) | `auto` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `kubeVersion` | Override Kubernetes version | `""` |
| `nameOverride` | String to partially override common.names.name | `""` |
| `fullnameOverride` | String to fully override common.names.fullname | `""` |
| `namespaceOverride` | String to fully override common.names.namespace | `""` |
| `commonLabels` | Labels to add to all deployed objects | `{}` |
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` |
| `clusterDomain` | Kubernetes cluster domain name | `cluster.local` |
| `extraDeploy` | Array of extra objects to deploy with the release | `[]` |
| `diagnosticMode.enabled` | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | `false` |
| `diagnosticMode.command` | Command to override all containers in the chart release | `["sleep"]` |
| `diagnosticMode.args` | Args to override all containers in the chart release | `["infinity"]` |
| `image.registry` | SeaweedFS image registry | `REGISTRY_NAME` |
| `image.repository` | SeaweedFS image repository | `REPOSITORY_NAME/seaweedfs` |
| `image.digest` | SeaweedFS image digest in the way sha256:aa.... Please note this parameter, if set, will override the image tag (immutable tags are recommended) | `""` |
| `image.pullPolicy` | SeaweedFS image pull policy | `IfNotPresent` |
| `image.pullSecrets` | SeaweedFS image pull secrets | `[]` |
| `image.debug` | Enable SeaweedFS image debug mode | `false` |
| `security.enabled` | Enable Security settings | `false` |
| `security.corsAllowedOrigins` | CORS allowed origins | `*` |
| `security.jwtSigning.volumeWrite` | Enable JWT signing for volume write operations | `true` |
| `security.jwtSigning.volumeRead` | Enable JWT signing for volume read operations | `false` |
| `security.jwtSigning.filerWrite` | Enable JWT signing for filer write operations | `false` |
| `security.jwtSigning.filerRead` | Enable JWT signing for filer read operations | `false` |
| `security.mTLS.enabled` | Enable mTLS for gRPC communications | `false` |
| `security.mTLS.autoGenerated.enabled` | Enable automatic generation of certificates for mTLS | `false` |
| `security.mTLS.autoGenerated.engine` | Mechanism to generate the certificates (allowed values: helm, cert-manager) | `helm` |
| `security.mTLS.autoGenerated.certManager.existingIssuer` | The name of an existing Issuer to use for generating the certificates (only for `cert-manager` engine) | `""` |
| `security.mTLS.autoGenerated.certManager.existingIssuerKind` | Existing Issuer kind, defaults to Issuer (only for `cert-manager` engine) | `""` |
| `security.mTLS.autoGenerated.certManager.keyAlgorithm` | Key algorithm for the certificates (only for `cert-manager` engine) | `RSA` |
| `security.mTLS.autoGenerated.certManager.keySize` | Key size for the certificates (only for `cert-manager` engine) | `2048` |
| `security.mTLS.autoGenerated.certManager.duration` | Duration for the certificates (only for `cert-manager` engine) | `2160h` |
| `security.mTLS.autoGenerated.certManager.renewBefore` | Renewal period for the certificates (only for `cert-manager` engine) | `360h` |
| `security.mTLS.ca` | CA certificate for mTLS. Ignored if `security.mTLS.existingCASecret` is set | `""` |
| `security.mTLS.existingCASecret` | The name of an existing Secret containing the CA certificate for mTLS | `""` |
| `security.mTLS.master.cert` | Master Server certificate for mTLS. Ignored if `security.mTLS.master.existingSecret` is set | `""` |
| `security.mTLS.master.key` | Master Server key for mTLS. Ignored if `security.mTLS.master.existingSecret` is set | `""` |
| `security.mTLS.master.existingSecret` | The name of an existing Secret containing the Master Server certificates for mTLS | `""` |
| `security.mTLS.volume.cert` | Volume Server certificate for mTLS. Ignored if `security.mTLS.volume.existingSecret` is set | `""` |
| `security.mTLS.volume.key` | Volume Server key for mTLS. Ignored if `security.mTLS.volume.existingSecret` is set | `""` |
| `security.mTLS.volume.existingSecret` | The name of an existing Secret containing the Volume Server certificates for mTLS | `""` |
| `security.mTLS.filer.cert` | Filer certificate for mTLS. Ignored if `security.mTLS.filer.existingSecret` is set | `""` |
| `security.mTLS.filer.key` | Filer key for mTLS. Ignored if `security.mTLS.filer.existingSecret` is set | `""` |
| `security.mTLS.filer.existingSecret` | The name of an existing Secret containing the Filer certificates for mTLS | `""` |
| `security.mTLS.client.cert` | Client certificate for mTLS. Ignored if `security.mTLS.client.existingSecret` is set | `""` |
| `security.mTLS.client.key` | Client key for mTLS. Ignored if `security.mTLS.client.existingSecret` is set | `""` |
| `security.mTLS.client.existingSecret` | The name of an existing Secret containing the Client certificates for mTLS | `""` |
| `clusterDefault` | Default SeaweedFS cluster name | `sw` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `master.replicaCount` | Number of Master Server replicas to deploy | `1` |
| `master.containerPorts.http` | Master Server HTTP container port | `9333` |
| `master.containerPorts.grpc` | Master Server GRPC container port | `19333` |
| `master.containerPorts.metrics` | Master Server metrics container port | `9327` |
| `master.extraContainerPorts` | Optionally specify extra list of additional ports for Master Server containers | `[]` |
| `master.livenessProbe.enabled` | Enable livenessProbe on Master Server containers | `true` |
| `master.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `30` |
| `master.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `master.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `30` |
| `master.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `master.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `master.readinessProbe.enabled` | Enable readinessProbe on Master Server containers | `true` |
| `master.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `30` |
| `master.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `master.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `30` |
| `master.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `master.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `master.startupProbe.enabled` | Enable startupProbe on Master Server containers | `false` |
| `master.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `master.startupProbe.periodSeconds` | Period seconds for startupProbe | `5` |
| `master.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `master.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `master.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `master.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `master.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `master.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `master.resourcesPreset` | Set Master Server container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if `master.resources` is set (`master.resources` is recommended for production). | `nano` |
| `master.resources` | Set Master Server container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `master.podSecurityContext.enabled` | Enable Master Server pods' Security Context | `true` |
| `master.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy for Master Server pods | `Always` |
| `master.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface for Master Server pods | `[]` |
| `master.podSecurityContext.supplementalGroups` | Set filesystem extra groups for Master Server pods | `[]` |
| `master.podSecurityContext.fsGroup` | Set fsGroup in Master Server pods' Security Context | `1001` |
| `master.containerSecurityContext.enabled` | Enabled Master Server containers' Security Context | `true` |
| `master.containerSecurityContext.seLinuxOptions` | Set SELinux options in Master Server container | `{}` |
| `master.containerSecurityContext.runAsUser` | Set runAsUser in Master Server containers' Security Context | `1001` |
| `master.containerSecurityContext.runAsGroup` | Set runAsGroup in Master Server containers' Security Context | `1001` |
| `master.containerSecurityContext.runAsNonRoot` | Set runAsNonRoot in Master Server containers' Security Context | `true` |
| `master.containerSecurityContext.readOnlyRootFilesystem` | Set readOnlyRootFilesystem in Master Server containers' Security Context | `true` |
| `master.containerSecurityContext.privileged` | Set privileged in Master Server containers' Security Context | `false` |
| `master.containerSecurityContext.allowPrivilegeEscalation` | Set allowPrivilegeEscalation in Master Server containers' Security Context | `false` |
| `master.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped in Master Server container | `["ALL"]` |
| `master.containerSecurityContext.seccompProfile.type` | Set seccomp profile in Master Server container | `RuntimeDefault` |
| `master.logLevel` | Master Server log level [0\|1\|2\|3\|4] | `1` |
| `master.bindAddress` | Master Server bind address | `0.0.0.0` |
| `master.config` | Master Server configuration | `""` |
| `master.existingConfigmap` | The name of an existing ConfigMap with your custom configuration for Master Server | `""` |
| `master.command` | Override default Master Server container command (useful when using custom images) | `[]` |
| `master.args` | Override default Master Server container args (useful when using custom images) | `[]` |
| `master.automountServiceAccountToken` | Mount Service Account token in Master Server pods | `false` |
| `master.hostAliases` | Master Server pods host aliases | `[]` |
| `master.statefulsetAnnotations` | Annotations for Master Server statefulset | `{}` |
| `master.podLabels` | Extra labels for Master Server pods | `{}` |
| `master.podAnnotations` | Annotations for Master Server pods | `{}` |
| `master.podAffinityPreset` | Pod affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `master.podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `master.nodeAffinityPreset.type` | Node affinity preset type. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `master.nodeAffinityPreset.key` | Node label key to match. Ignored if `master.affinity` is set | `""` |
| `master.nodeAffinityPreset.values` | Node label values to match. Ignored if `master.affinity` is set | `[]` |
| `master.affinity` | Affinity for Master Server pods assignment | `{}` |
| `master.nodeSelector` | Node labels for Master Server pods assignment | `{}` |
| `master.tolerations` | Tolerations for Master Server pods assignment | `[]` |
| `master.updateStrategy.type` | Master Server statefulset strategy type | `RollingUpdate` |
| `master.podManagementPolicy` | Pod management policy for Master Server statefulset | `Parallel` |
| `master.priorityClassName` | Master Server pods' priorityClassName | `""` |
| `master.topologySpreadConstraints` | Topology Spread Constraints for Master Server pod assignment spread across your cluster among failure-domains | `[]` |
| `master.schedulerName` | Name of the k8s scheduler (other than default) for Master Server pods | `""` |
| `master.terminationGracePeriodSeconds` | Seconds Master Server pods need to terminate gracefully | `""` |
| `master.lifecycleHooks` | Lifecycle hooks for Master Server containers to automate configuration before or after startup | `{}` |
| `master.extraEnvVars` | Array with extra environment variables to add to Master Server containers | `[]` |
| `master.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for Master Server containers | `""` |
| `master.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for Master Server containers | `""` |
| `master.extraVolumes` | Optionally specify extra list of additional volumes for the Master Server pods | `[]` |
| `master.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Master Server containers | `[]` |
| `master.sidecars` | Add additional sidecar containers to the Master Server pods | `[]` |
| `master.initContainers` | Add additional init containers to the Master Server pods | `[]` |
| `master.pdb.create` | Enable/disable a Pod Disruption Budget creation | `true` |
| `master.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `""` |
| `master.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable. Defaults to `1` if both `master.pdb.minAvailable` and `master.pdb.maxUnavailable` are empty. | `""` |
| `master.autoscaling.enabled` | Enable autoscaling for master | `false` |
| `master.autoscaling.minReplicas` | Minimum number of master replicas | `""` |
| `master.autoscaling.maxReplicas` | Maximum number of master replicas | `""` |
| `master.autoscaling.targetCPU` | Target CPU utilization percentage | `""` |
| `master.autoscaling.targetMemory` | Target Memory utilization percentage | `""` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `master.service.type` | Master Server service type | `ClusterIP` |
| `master.service.ports.http` | Master Server service HTTP port | `9333` |
| `master.service.ports.grpc` | Master Server service GRPC port | `19333` |
| `master.service.nodePorts.http` | Node port for HTTP | `""` |
| `master.service.nodePorts.grpc` | Node port for GRPC | `""` |
| `master.service.clusterIP` | Master Server service Cluster IP | `""` |
| `master.service.loadBalancerIP` | Master Server service Load Balancer IP | `""` |
| `master.service.loadBalancerSourceRanges` | Master Server service Load Balancer sources | `[]` |
| `master.service.externalTrafficPolicy` | Master Server service external traffic policy | `Cluster` |
| `master.service.annotations` | Additional custom annotations for Master Server service | `{}` |
| `master.service.extraPorts` | Extra ports to expose in Master Server service (normally used with the `sidecars` value) | `[]` |
| `master.service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `master.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `master.service.headless.annotations` | Annotations for the headless service. | `{}` |
| `master.networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created for Master Server | `true` |
| `master.networkPolicy.allowExternal` | Don't require server label for connections | `true` |
| `master.networkPolicy.allowExternalEgress` | Allow the Master Server pods to access any range of port and all destinations. | `true` |
| `master.networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `master.networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy (ignored if allowExternalEgress=true) | `[]` |
| `master.networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
| `master.networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces | `{}` |
| `master.ingress.enabled` | Enable ingress record generation for Master Server | `false` |
| `master.ingress.pathType` | Ingress path type | `ImplementationSpecific` |
| `master.ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `""` |
| `master.ingress.hostname` | Default host for the ingress record | `master.seaweedfs.local` |
| `master.ingress.ingressClassName` | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | `""` |
| `master.ingress.path` | Default path for the ingress record | `/` |
| `master.ingress.annotations` | Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | `{}` |
| `master.ingress.tls` | Enable TLS configuration for the host defined at `ingress.hostname` parameter | `false` |
| `master.ingress.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `master.ingress.extraHosts` | An array with additional hostname(s) to be covered with the ingress record | `[]` |
| `master.ingress.extraPaths` | An array with additional arbitrary paths that may need to be added to the ingress under the main host | `[]` |
| `master.ingress.extraTls` | TLS configuration for additional hostname(s) to be covered with this ingress record | `[]` |
| `master.ingress.secrets` | Custom TLS certificates as secrets | `[]` |
| `master.ingress.extraRules` | Additional rules to be covered with this ingress record | `[]` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `master.persistence.enabled` | Enable persistence on Master Server using Persistent Volume Claims | `true` |
| `master.persistence.mountPath` | Path to mount the volume at. | `/data` |
| `master.persistence.subPath` | The subdirectory of the volume to mount to, useful in dev environments and one PV for multiple services | `""` |
| `master.persistence.storageClass` | Storage class of backing PVC | `""` |
| `master.persistence.annotations` | Persistent Volume Claim annotations | `{}` |
| `master.persistence.accessModes` | Persistent Volume Access Modes | `["ReadWriteOnce"]` |
| `master.persistence.size` | Size of data volume | `8Gi` |
| `master.persistence.existingClaim` | The name of an existing PVC to use for persistence | `""` |
| `master.persistence.selector` | Selector to match an existing Persistent Volume for data PVC | `{}` |
| `master.persistence.dataSource` | Custom PVC data source | `{}` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `master.metrics.enabled` | Enable the export of Prometheus metrics | `false` |
| `master.metrics.service.port` | Metrics service port | `9327` |
| `master.metrics.service.annotations` | Annotations for the metrics service. | `{}` |
| `master.metrics.serviceMonitor.enabled` | If `true`, creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`) | `false` |
| `master.metrics.serviceMonitor.namespace` | Namespace in which Prometheus is running | `""` |
| `master.metrics.serviceMonitor.annotations` | Additional custom annotations for the ServiceMonitor | `{}` |
| `master.metrics.serviceMonitor.labels` | Extra labels for the ServiceMonitor | `{}` |
| `master.metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in Prometheus | `""` |
| `master.metrics.serviceMonitor.honorLabels` | honorLabels chooses the metric's labels on collisions with target labels | `false` |
| `master.metrics.serviceMonitor.interval` | Interval at which metrics should be scraped. | `""` |
| `master.metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `""` |
| `master.metrics.serviceMonitor.metricRelabelings` | Specify additional relabeling of metrics | `[]` |
| `master.metrics.serviceMonitor.relabelings` | Specify general relabeling | `[]` |
| `master.metrics.serviceMonitor.selector` | Prometheus instance selector labels | `{}` |
Name | Description | Value |
---|---|---|
volume.replicaCount |
Number of Volume Server replicas to deploy | 1 |
volume.containerPorts.http |
Volume Server HTTP container port | 8080 |
volume.containerPorts.grpc |
Volume Server GRPC container port | 18080 |
volume.containerPorts.metrics |
Volume Server metrics container port | 9327 |
volume.extraContainerPorts |
Optionally specify extra list of additional ports for Volume Server containers | [] |
volume.livenessProbe.enabled |
Enable livenessProbe on Volume Server containers | true |
volume.livenessProbe.initialDelaySeconds |
Initial delay seconds for livenessProbe | 30 |
volume.livenessProbe.periodSeconds |
Period seconds for livenessProbe | 10 |
volume.livenessProbe.timeoutSeconds |
Timeout seconds for livenessProbe | 30 |
volume.livenessProbe.failureThreshold |
Failure threshold for livenessProbe | 6 |
volume.livenessProbe.successThreshold |
Success threshold for livenessProbe | 1 |
volume.readinessProbe.enabled |
Enable readinessProbe on Volume Server containers | true |
volume.readinessProbe.initialDelaySeconds |
Initial delay seconds for readinessProbe | 30 |
volume.readinessProbe.periodSeconds |
Period seconds for readinessProbe | 10 |
volume.readinessProbe.timeoutSeconds |
Timeout seconds for readinessProbe | 30 |
volume.readinessProbe.failureThreshold |
Failure threshold for readinessProbe | 6 |
volume.readinessProbe.successThreshold |
Success threshold for readinessProbe | 1 |
volume.startupProbe.enabled |
Enable startupProbe on Volume Server containers | false |
volume.startupProbe.initialDelaySeconds |
Initial delay seconds for startupProbe | 5 |
volume.startupProbe.periodSeconds |
Period seconds for startupProbe | 5 |
volume.startupProbe.timeoutSeconds |
Timeout seconds for startupProbe | 1 |
volume.startupProbe.failureThreshold |
Failure threshold for startupProbe | 15 |
volume.startupProbe.successThreshold |
Success threshold for startupProbe | 1 |
volume.customLivenessProbe |
Custom livenessProbe that overrides the default one | {} |
volume.customReadinessProbe |
Custom readinessProbe that overrides the default one | {} |
volume.customStartupProbe |
Custom startupProbe that overrides the default one | {} |
volume.resourcesPreset |
Set Volume Server container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if volume.resources is set (volume.resources is recommended for production). | nano |
volume.resources |
Set Volume Server container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
volume.podSecurityContext.enabled |
Enable Volume Server pods’ Security Context | true |
volume.podSecurityContext.fsGroupChangePolicy |
Set filesystem group change policy for Volume Server pods | Always |
volume.podSecurityContext.sysctls |
Set kernel settings using the sysctl interface for Volume Server pods | [] |
volume.podSecurityContext.supplementalGroups |
Set filesystem extra groups for Volume Server pods | [] |
volume.podSecurityContext.fsGroup |
Set fsGroup in Volume Server pods’ Security Context | 1001 |
volume.containerSecurityContext.enabled |
Enable Volume Server containers’ Security Context | true |
volume.containerSecurityContext.seLinuxOptions |
Set SELinux options in Volume Server container | {} |
volume.containerSecurityContext.runAsUser |
Set runAsUser in Volume Server containers’ Security Context | 1001 |
volume.containerSecurityContext.runAsGroup |
Set runAsGroup in Volume Server containers’ Security Context | 1001 |
volume.containerSecurityContext.runAsNonRoot |
Set runAsNonRoot in Volume Server containers’ Security Context | true |
volume.containerSecurityContext.readOnlyRootFilesystem |
Set readOnlyRootFilesystem in Volume Server containers’ Security Context | true |
volume.containerSecurityContext.privileged |
Set privileged in Volume Server containers’ Security Context | false |
volume.containerSecurityContext.allowPrivilegeEscalation |
Set allowPrivilegeEscalation in Volume Server containers’ Security Context | false |
volume.containerSecurityContext.capabilities.drop |
List of capabilities to be dropped in Volume Server container | ["ALL"] |
volume.containerSecurityContext.seccompProfile.type |
Set seccomp profile in Volume Server container | RuntimeDefault |
volume.logLevel |
Volume Server log level [0|1|2|3|4] | 1 |
volume.bindAddress |
Volume Server bind address | 0.0.0.0 |
volume.publicUrl |
Volume Server public URL | "" |
volume.config |
Volume Server configuration | "" |
volume.existingConfigmap |
The name of an existing ConfigMap with your custom configuration for Volume Server | "" |
volume.command |
Override default Volume Server container command (useful when using custom images) | [] |
volume.args |
Override default Volume Server container args (useful when using custom images) | [] |
volume.automountServiceAccountToken |
Mount Service Account token in Volume Server pods | false |
volume.hostAliases |
Volume Server pods host aliases | [] |
volume.statefulsetAnnotations |
Annotations for Volume Server statefulset | {} |
volume.podLabels |
Extra labels for Volume Server pods | {} |
volume.podAnnotations |
Annotations for Volume Server pods | {} |
volume.podAffinityPreset |
Pod affinity preset. Ignored if volume.affinity is set. Allowed values: soft or hard |
"" |
volume.podAntiAffinityPreset |
Pod anti-affinity preset. Ignored if volume.affinity is set. Allowed values: soft or hard |
soft |
volume.nodeAffinityPreset.type |
Node affinity preset type. Ignored if volume.affinity is set. Allowed values: soft or hard |
"" |
volume.nodeAffinityPreset.key |
Node label key to match. Ignored if volume.affinity is set |
"" |
volume.nodeAffinityPreset.values |
Node label values to match. Ignored if volume.affinity is set |
[] |
volume.affinity |
Affinity for Volume Server pods assignment | {} |
volume.nodeSelector |
Node labels for Volume Server pods assignment | {} |
volume.tolerations |
Tolerations for Volume Server pods assignment | [] |
volume.updateStrategy.type |
Volume Server statefulset strategy type | RollingUpdate |
volume.podManagementPolicy |
Pod management policy for Volume Server statefulset | Parallel |
volume.priorityClassName |
Volume Server pods’ priorityClassName | "" |
volume.topologySpreadConstraints |
Topology Spread Constraints for Volume Server pod assignment spread across your cluster among failure-domains | [] |
volume.schedulerName |
Name of the k8s scheduler (other than default) for Volume Server pods | "" |
volume.terminationGracePeriodSeconds |
Seconds Volume Server pods need to terminate gracefully | "" |
volume.lifecycleHooks |
Lifecycle hooks for Volume Server containers to automate configuration before or after startup | {} |
volume.extraEnvVars |
Array with extra environment variables to add to Volume Server containers | [] |
volume.extraEnvVarsCM |
Name of existing ConfigMap containing extra env vars for Volume Server containers | "" |
volume.extraEnvVarsSecret |
Name of existing Secret containing extra env vars for Volume Server containers | "" |
volume.extraVolumes |
Optionally specify extra list of additional volumes for the Volume Server pods | [] |
volume.extraVolumeMounts |
Optionally specify extra list of additional volumeMounts for the Volume Server containers | [] |
volume.sidecars |
Add additional sidecar containers to the Volume Server pods | [] |
volume.initContainers |
Add additional init containers to the Volume Server pods | [] |
volume.pdb.create |
Enable/disable a Pod Disruption Budget creation | true |
volume.pdb.minAvailable |
Minimum number/percentage of pods that should remain scheduled | "" |
volume.pdb.maxUnavailable |
Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both volume.pdb.minAvailable and volume.pdb.maxUnavailable are empty. |
"" |
volume.autoscaling.enabled |
Enable autoscaling for volume | false |
volume.autoscaling.minReplicas |
Minimum number of volume replicas | "" |
volume.autoscaling.maxReplicas |
Maximum number of volume replicas | "" |
volume.autoscaling.targetCPU |
Target CPU utilization percentage | "" |
volume.autoscaling.targetMemory |
Target Memory utilization percentage | "" |
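The sizing-related parameters above can be combined in a values file. A minimal sketch (the sizes and thresholds are illustrative, not recommendations; `volume.resources`, once set, takes precedence over `volume.resourcesPreset`):

```yaml
# values-volume.yaml (illustrative sizes; tune for your workload)
volume:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 4Gi
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 5
    targetCPU: 75
  pdb:
    create: true
    maxUnavailable: 1
```

Apply it with `helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/seaweedfs -f values-volume.yaml`.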
Name | Description | Value |
---|---|---|
volume.service.type |
Volume Server service type | ClusterIP |
volume.service.ports.http |
Volume Server service HTTP port | 8080 |
volume.service.ports.grpc |
Volume Server service GRPC port | 18080 |
volume.service.nodePorts.http |
Node port for HTTP | "" |
volume.service.nodePorts.grpc |
Node port for GRPC | "" |
volume.service.clusterIP |
Volume Server service Cluster IP | "" |
volume.service.loadBalancerIP |
Volume Server service Load Balancer IP | "" |
volume.service.loadBalancerSourceRanges |
Volume Server service Load Balancer sources | [] |
volume.service.externalTrafficPolicy |
Volume Server service external traffic policy | Cluster |
volume.service.annotations |
Additional custom annotations for Volume Server service | {} |
volume.service.extraPorts |
Extra ports to expose in Volume Server service (normally used with the sidecars value) |
[] |
volume.service.sessionAffinity |
Control where client requests go, to the same pod or round-robin | None |
volume.service.sessionAffinityConfig |
Additional settings for the sessionAffinity | {} |
volume.service.headless.annotations |
Annotations for the headless service. | {} |
volume.networkPolicy.enabled |
Specifies whether a NetworkPolicy should be created for Volume Server | true |
volume.networkPolicy.allowExternal |
Don’t require server label for connections | true |
volume.networkPolicy.allowExternalEgress |
Allow the Volume Server pods to access any range of ports and all destinations. | true |
volume.networkPolicy.extraIngress |
Add extra ingress rules to the NetworkPolicy | [] |
volume.networkPolicy.extraEgress |
Add extra egress rules to the NetworkPolicy (ignored if allowExternalEgress=true) | [] |
volume.networkPolicy.ingressNSMatchLabels |
Labels to match to allow traffic from other namespaces | {} |
volume.networkPolicy.ingressNSPodMatchLabels |
Pod labels to match to allow traffic from other namespaces | {} |
volume.ingress.enabled |
Enable ingress record generation for Volume Server | false |
volume.ingress.pathType |
Ingress path type | ImplementationSpecific |
volume.ingress.apiVersion |
Force Ingress API version (automatically detected if not set) | "" |
volume.ingress.hostname |
Default host for the ingress record | volume.seaweedfs.local |
volume.ingress.ingressClassName |
IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | "" |
volume.ingress.path |
Default path for the ingress record | / |
volume.ingress.annotations |
Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | {} |
volume.ingress.tls |
Enable TLS configuration for the host defined at ingress.hostname parameter |
false |
volume.ingress.selfSigned |
Create a TLS secret for this ingress record using self-signed certificates generated by Helm | false |
volume.ingress.extraHosts |
An array with additional hostname(s) to be covered with the ingress record | [] |
volume.ingress.extraPaths |
An array with additional arbitrary paths that may need to be added to the ingress under the main host | [] |
volume.ingress.extraTls |
TLS configuration for additional hostname(s) to be covered with this ingress record | [] |
volume.ingress.secrets |
Custom TLS certificates as secrets | [] |
volume.ingress.extraRules |
Additional rules to be covered with this ingress record | [] |
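As an example, the ingress parameters above could expose the Volume Server as follows (the hostname and ingress class are placeholders for your environment):

```yaml
volume:
  ingress:
    enabled: true
    hostname: volume.seaweedfs.local
    ingressClassName: nginx   # placeholder; use your cluster's IngressClass
    tls: true
    selfSigned: true          # Helm generates a self-signed cert; in production,
                              # use cert-manager annotations under volume.ingress.annotations instead
```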
Name | Description | Value |
---|---|---|
volume.dataVolumes[0].name |
Name of the data volume | data-0 |
volume.dataVolumes[0].mountPath |
Path to mount the volume at. | /data-0 |
volume.dataVolumes[0].subPath |
The subdirectory of the volume to mount to, useful in dev environments and one PV for multiple services | "" |
volume.dataVolumes[0].persistence.enabled |
Enable persistence on Volume Server using Persistent Volume Claims | true |
volume.dataVolumes[0].persistence.storageClass |
Storage class of backing PVC | "" |
volume.dataVolumes[0].persistence.annotations |
Persistent Volume Claim annotations | {} |
volume.dataVolumes[0].persistence.accessModes |
Persistent Volume Access Modes | ["ReadWriteOnce"] |
volume.dataVolumes[0].persistence.size |
Size of data volume | 8Gi |
volume.dataVolumes[0].persistence.existingClaim |
The name of an existing PVC to use for persistence | "" |
volume.dataVolumes[0].persistence.selector |
Selector to match an existing Persistent Volume for data PVC | {} |
volume.dataVolumes[0].persistence.dataSource |
Custom PVC data source | {} |
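A sketch of the `volume.dataVolumes` persistence parameters above (the size is illustrative):

```yaml
volume:
  dataVolumes:
    - name: data-0
      mountPath: /data-0
      persistence:
        enabled: true
        storageClass: ""      # empty string uses the cluster's default StorageClass
        accessModes:
          - ReadWriteOnce
        size: 100Gi           # illustrative; size for your dataset
```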
Name | Description | Value |
---|---|---|
volume.metrics.enabled |
Enable the export of Prometheus metrics | false |
volume.metrics.service.port |
Metrics service port | 9327 |
volume.metrics.service.annotations |
Annotations for the metrics service. | {} |
volume.metrics.serviceMonitor.enabled |
If true, creates a Prometheus Operator ServiceMonitor (also requires metrics.enabled to be true) |
false |
volume.metrics.serviceMonitor.namespace |
Namespace in which Prometheus is running | "" |
volume.metrics.serviceMonitor.annotations |
Additional custom annotations for the ServiceMonitor | {} |
volume.metrics.serviceMonitor.labels |
Extra labels for the ServiceMonitor | {} |
volume.metrics.serviceMonitor.jobLabel |
The name of the label on the target service to use as the job name in Prometheus | "" |
volume.metrics.serviceMonitor.honorLabels |
honorLabels chooses the metric’s labels on collisions with target labels | false |
volume.metrics.serviceMonitor.interval |
Interval at which metrics should be scraped. | "" |
volume.metrics.serviceMonitor.scrapeTimeout |
Timeout after which the scrape is ended | "" |
volume.metrics.serviceMonitor.metricRelabelings |
Specify additional relabeling of metrics | [] |
volume.metrics.serviceMonitor.relabelings |
Specify general relabeling | [] |
volume.metrics.serviceMonitor.selector |
Prometheus instance selector labels | {} |
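Putting the metrics parameters together, a minimal sketch for scraping the Volume Server with the Prometheus Operator (the `release: prometheus` label is an assumption and must match your Prometheus instance's ServiceMonitor selector):

```yaml
volume:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true           # requires the Prometheus Operator CRDs in the cluster
      interval: 30s
      labels:
        release: prometheus   # placeholder; match your Prometheus serviceMonitorSelector
```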
Name | Description | Value |
---|---|---|
filer.enabled |
Enable Filer Server deployment | true |
filer.replicaCount |
Number of Filer Server replicas to deploy | 1 |
filer.containerPorts.http |
Filer Server HTTP container port | 8888 |
filer.containerPorts.grpc |
Filer Server GRPC container port | 18888 |
filer.containerPorts.metrics |
Filer Server metrics container port | 9327 |
filer.extraContainerPorts |
Optionally specify extra list of additional ports for Filer Server containers | [] |
filer.livenessProbe.enabled |
Enable livenessProbe on Filer Server containers | true |
filer.livenessProbe.initialDelaySeconds |
Initial delay seconds for livenessProbe | 30 |
filer.livenessProbe.periodSeconds |
Period seconds for livenessProbe | 10 |
filer.livenessProbe.timeoutSeconds |
Timeout seconds for livenessProbe | 30 |
filer.livenessProbe.failureThreshold |
Failure threshold for livenessProbe | 6 |
filer.livenessProbe.successThreshold |
Success threshold for livenessProbe | 1 |
filer.readinessProbe.enabled |
Enable readinessProbe on Filer Server containers | true |
filer.readinessProbe.initialDelaySeconds |
Initial delay seconds for readinessProbe | 30 |
filer.readinessProbe.periodSeconds |
Period seconds for readinessProbe | 10 |
filer.readinessProbe.timeoutSeconds |
Timeout seconds for readinessProbe | 30 |
filer.readinessProbe.failureThreshold |
Failure threshold for readinessProbe | 6 |
filer.readinessProbe.successThreshold |
Success threshold for readinessProbe | 1 |
filer.startupProbe.enabled |
Enable startupProbe on Filer Server containers | false |
filer.startupProbe.initialDelaySeconds |
Initial delay seconds for startupProbe | 5 |
filer.startupProbe.periodSeconds |
Period seconds for startupProbe | 5 |
filer.startupProbe.timeoutSeconds |
Timeout seconds for startupProbe | 1 |
filer.startupProbe.failureThreshold |
Failure threshold for startupProbe | 15 |
filer.startupProbe.successThreshold |
Success threshold for startupProbe | 1 |
filer.customLivenessProbe |
Custom livenessProbe that overrides the default one | {} |
filer.customReadinessProbe |
Custom readinessProbe that overrides the default one | {} |
filer.customStartupProbe |
Custom startupProbe that overrides the default one | {} |
filer.resourcesPreset |
Set Filer Server container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if filer.resources is set (filer.resources is recommended for production). | nano |
filer.resources |
Set Filer Server container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
filer.podSecurityContext.enabled |
Enable Filer Server pods’ Security Context | true |
filer.podSecurityContext.fsGroupChangePolicy |
Set filesystem group change policy for Filer Server pods | Always |
filer.podSecurityContext.sysctls |
Set kernel settings using the sysctl interface for Filer Server pods | [] |
filer.podSecurityContext.supplementalGroups |
Set filesystem extra groups for Filer Server pods | [] |
filer.podSecurityContext.fsGroup |
Set fsGroup in Filer Server pods’ Security Context | 1001 |
filer.containerSecurityContext.enabled |
Enable Filer Server containers’ Security Context | true |
filer.containerSecurityContext.seLinuxOptions |
Set SELinux options in Filer Server container | {} |
filer.containerSecurityContext.runAsUser |
Set runAsUser in Filer Server containers’ Security Context | 1001 |
filer.containerSecurityContext.runAsGroup |
Set runAsGroup in Filer Server containers’ Security Context | 1001 |
filer.containerSecurityContext.runAsNonRoot |
Set runAsNonRoot in Filer Server containers’ Security Context | true |
filer.containerSecurityContext.readOnlyRootFilesystem |
Set readOnlyRootFilesystem in Filer Server containers’ Security Context | true |
filer.containerSecurityContext.privileged |
Set privileged in Filer Server containers’ Security Context | false |
filer.containerSecurityContext.allowPrivilegeEscalation |
Set allowPrivilegeEscalation in Filer Server containers’ Security Context | false |
filer.containerSecurityContext.capabilities.drop |
List of capabilities to be dropped in Filer Server container | ["ALL"] |
filer.containerSecurityContext.seccompProfile.type |
Set seccomp profile in Filer Server container | RuntimeDefault |
filer.logLevel |
Filer Server log level [0|1|2|3|4] | 1 |
filer.bindAddress |
Filer Server bind address | 0.0.0.0 |
filer.config |
Filer Server configuration | `[leveldb2] enabled = false` |
filer.existingConfigmap |
The name of an existing ConfigMap with your custom configuration for Filer Server | "" |
filer.command |
Override default Filer Server container command (useful when using custom images) | [] |
filer.args |
Override default Filer Server container args (useful when using custom images) | [] |
filer.automountServiceAccountToken |
Mount Service Account token in Filer Server pods | false |
filer.hostAliases |
Filer Server pods host aliases | [] |
filer.statefulsetAnnotations |
Annotations for Filer Server statefulset | {} |
filer.podLabels |
Extra labels for Filer Server pods | {} |
filer.podAnnotations |
Annotations for Filer Server pods | {} |
filer.podAffinityPreset |
Pod affinity preset. Ignored if filer.affinity is set. Allowed values: soft or hard |
"" |
filer.podAntiAffinityPreset |
Pod anti-affinity preset. Ignored if filer.affinity is set. Allowed values: soft or hard |
soft |
filer.nodeAffinityPreset.type |
Node affinity preset type. Ignored if filer.affinity is set. Allowed values: soft or hard |
"" |
filer.nodeAffinityPreset.key |
Node label key to match. Ignored if filer.affinity is set |
"" |
filer.nodeAffinityPreset.values |
Node label values to match. Ignored if filer.affinity is set |
[] |
filer.affinity |
Affinity for Filer Server pods assignment | {} |
filer.nodeSelector |
Node labels for Filer Server pods assignment | {} |
filer.tolerations |
Tolerations for Filer Server pods assignment | [] |
filer.updateStrategy.type |
Filer Server statefulset strategy type | RollingUpdate |
filer.podManagementPolicy |
Pod management policy for Filer Server statefulset | Parallel |
filer.priorityClassName |
Filer Server pods’ priorityClassName | "" |
filer.topologySpreadConstraints |
Topology Spread Constraints for Filer Server pod assignment spread across your cluster among failure-domains | [] |
filer.schedulerName |
Name of the k8s scheduler (other than default) for Filer Server pods | "" |
filer.terminationGracePeriodSeconds |
Seconds Filer Server pods need to terminate gracefully | "" |
filer.lifecycleHooks |
Lifecycle hooks for Filer Server containers to automate configuration before or after startup | {} |
filer.extraEnvVars |
Array with extra environment variables to add to Filer Server containers | [] |
filer.extraEnvVarsCM |
Name of existing ConfigMap containing extra env vars for Filer Server containers | "" |
filer.extraEnvVarsSecret |
Name of existing Secret containing extra env vars for Filer Server containers | "" |
filer.extraVolumes |
Optionally specify extra list of additional volumes for the Filer Server pods | [] |
filer.extraVolumeMounts |
Optionally specify extra list of additional volumeMounts for the Filer Server containers | [] |
filer.sidecars |
Add additional sidecar containers to the Filer Server pods | [] |
filer.initContainers |
Add additional init containers to the Filer Server pods | [] |
filer.pdb.create |
Enable/disable a Pod Disruption Budget creation | true |
filer.pdb.minAvailable |
Minimum number/percentage of pods that should remain scheduled | "" |
filer.pdb.maxUnavailable |
Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both filer.pdb.minAvailable and filer.pdb.maxUnavailable are empty. |
"" |
filer.autoscaling.enabled |
Enable autoscaling for filer | false |
filer.autoscaling.minReplicas |
Minimum number of filer replicas | "" |
filer.autoscaling.maxReplicas |
Maximum number of filer replicas | "" |
filer.autoscaling.targetCPU |
Target CPU utilization percentage | "" |
filer.autoscaling.targetMemory |
Target Memory utilization percentage | "" |
Name | Description | Value |
---|---|---|
filer.service.type |
Filer Server service type | ClusterIP |
filer.service.ports.http |
Filer Server service HTTP port | 8888 |
filer.service.ports.grpc |
Filer Server service GRPC port | 18888 |
filer.service.nodePorts.http |
Node port for HTTP | "" |
filer.service.nodePorts.grpc |
Node port for GRPC | "" |
filer.service.clusterIP |
Filer Server service Cluster IP | "" |
filer.service.loadBalancerIP |
Filer Server service Load Balancer IP | "" |
filer.service.loadBalancerSourceRanges |
Filer Server service Load Balancer sources | [] |
filer.service.externalTrafficPolicy |
Filer Server service external traffic policy | Cluster |
filer.service.annotations |
Additional custom annotations for Filer Server service | {} |
filer.service.extraPorts |
Extra ports to expose in Filer Server service (normally used with the sidecars value) |
[] |
filer.service.sessionAffinity |
Control where client requests go, to the same pod or round-robin | None |
filer.service.sessionAffinityConfig |
Additional settings for the sessionAffinity | {} |
filer.service.headless.annotations |
Annotations for the headless service. | {} |
filer.networkPolicy.enabled |
Specifies whether a NetworkPolicy should be created for Filer Server | true |
filer.networkPolicy.allowExternal |
Don’t require server label for connections | true |
filer.networkPolicy.allowExternalEgress |
Allow the Filer Server pods to access any range of ports and all destinations. | true |
filer.networkPolicy.extraIngress |
Add extra ingress rules to the NetworkPolicy | [] |
filer.networkPolicy.extraEgress |
Add extra egress rules to the NetworkPolicy (ignored if allowExternalEgress=true) | [] |
filer.networkPolicy.ingressNSMatchLabels |
Labels to match to allow traffic from other namespaces | {} |
filer.networkPolicy.ingressNSPodMatchLabels |
Pod labels to match to allow traffic from other namespaces | {} |
filer.ingress.enabled |
Enable ingress record generation for Filer Server | false |
filer.ingress.pathType |
Ingress path type | ImplementationSpecific |
filer.ingress.apiVersion |
Force Ingress API version (automatically detected if not set) | "" |
filer.ingress.hostname |
Default host for the ingress record | filer.seaweedfs.local |
filer.ingress.ingressClassName |
IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | "" |
filer.ingress.path |
Default path for the ingress record | / |
filer.ingress.annotations |
Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | {} |
filer.ingress.tls |
Enable TLS configuration for the host defined at ingress.hostname parameter |
false |
filer.ingress.selfSigned |
Create a TLS secret for this ingress record using self-signed certificates generated by Helm | false |
filer.ingress.extraHosts |
An array with additional hostname(s) to be covered with the ingress record | [] |
filer.ingress.extraPaths |
An array with additional arbitrary paths that may need to be added to the ingress under the main host | [] |
filer.ingress.extraTls |
TLS configuration for additional hostname(s) to be covered with this ingress record | [] |
filer.ingress.secrets |
Custom TLS certificates as secrets | [] |
filer.ingress.extraRules |
Additional rules to be covered with this ingress record | [] |
Name | Description | Value |
---|---|---|
filer.metrics.enabled |
Enable the export of Prometheus metrics | false |
filer.metrics.service.port |
Metrics service port | 9327 |
filer.metrics.service.annotations |
Annotations for the metrics service. | {} |
filer.metrics.serviceMonitor.enabled |
If true, creates a Prometheus Operator ServiceMonitor (also requires metrics.enabled to be true) |
false |
filer.metrics.serviceMonitor.namespace |
Namespace in which Prometheus is running | "" |
filer.metrics.serviceMonitor.annotations |
Additional custom annotations for the ServiceMonitor | {} |
filer.metrics.serviceMonitor.labels |
Extra labels for the ServiceMonitor | {} |
filer.metrics.serviceMonitor.jobLabel |
The name of the label on the target service to use as the job name in Prometheus | "" |
filer.metrics.serviceMonitor.honorLabels |
honorLabels chooses the metric’s labels on collisions with target labels | false |
filer.metrics.serviceMonitor.interval |
Interval at which metrics should be scraped. | "" |
filer.metrics.serviceMonitor.scrapeTimeout |
Timeout after which the scrape is ended | "" |
filer.metrics.serviceMonitor.metricRelabelings |
Specify additional relabeling of metrics | [] |
filer.metrics.serviceMonitor.relabelings |
Specify general relabeling | [] |
filer.metrics.serviceMonitor.selector |
Prometheus instance selector labels | {} |
Name | Description | Value |
---|---|---|
s3.enabled |
Enable Amazon S3 API deployment | false |
s3.replicaCount |
Number of Amazon S3 API replicas to deploy | 1 |
s3.containerPorts.http |
Amazon S3 API HTTP container port | 8333 |
s3.containerPorts.grpc |
Amazon S3 API GRPC container port | 18333 |
s3.containerPorts.metrics |
Amazon S3 API metrics container port | 9327 |
s3.extraContainerPorts |
Optionally specify extra list of additional ports for Amazon S3 API containers | [] |
s3.livenessProbe.enabled |
Enable livenessProbe on Amazon S3 API containers | true |
s3.livenessProbe.initialDelaySeconds |
Initial delay seconds for livenessProbe | 30 |
s3.livenessProbe.periodSeconds |
Period seconds for livenessProbe | 10 |
s3.livenessProbe.timeoutSeconds |
Timeout seconds for livenessProbe | 30 |
s3.livenessProbe.failureThreshold |
Failure threshold for livenessProbe | 6 |
s3.livenessProbe.successThreshold |
Success threshold for livenessProbe | 1 |
s3.readinessProbe.enabled |
Enable readinessProbe on Amazon S3 API containers | true |
s3.readinessProbe.initialDelaySeconds |
Initial delay seconds for readinessProbe | 30 |
s3.readinessProbe.periodSeconds |
Period seconds for readinessProbe | 10 |
s3.readinessProbe.timeoutSeconds |
Timeout seconds for readinessProbe | 30 |
s3.readinessProbe.failureThreshold |
Failure threshold for readinessProbe | 6 |
s3.readinessProbe.successThreshold |
Success threshold for readinessProbe | 1 |
s3.startupProbe.enabled |
Enable startupProbe on Amazon S3 API containers | false |
s3.startupProbe.initialDelaySeconds |
Initial delay seconds for startupProbe | 5 |
s3.startupProbe.periodSeconds |
Period seconds for startupProbe | 5 |
s3.startupProbe.timeoutSeconds |
Timeout seconds for startupProbe | 1 |
s3.startupProbe.failureThreshold |
Failure threshold for startupProbe | 15 |
s3.startupProbe.successThreshold |
Success threshold for startupProbe | 1 |
s3.customLivenessProbe |
Custom livenessProbe that overrides the default one | {} |
s3.customReadinessProbe |
Custom readinessProbe that overrides the default one | {} |
s3.customStartupProbe |
Custom startupProbe that overrides the default one | {} |
s3.resourcesPreset |
Set Amazon S3 API container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if s3.resources is set (s3.resources is recommended for production). | nano |
s3.resources |
Set Amazon S3 API container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
s3.podSecurityContext.enabled |
Enable Amazon S3 API pods’ Security Context | true |
s3.podSecurityContext.fsGroupChangePolicy |
Set filesystem group change policy for Amazon S3 API pods | Always |
s3.podSecurityContext.sysctls |
Set kernel settings using the sysctl interface for Amazon S3 API pods | [] |
s3.podSecurityContext.supplementalGroups |
Set filesystem extra groups for Amazon S3 API pods | [] |
s3.podSecurityContext.fsGroup |
Set fsGroup in Amazon S3 API pods’ Security Context | 1001 |
s3.containerSecurityContext.enabled |
Enable Amazon S3 API containers’ Security Context | true |
s3.containerSecurityContext.seLinuxOptions |
Set SELinux options in Amazon S3 API container | {} |
s3.containerSecurityContext.runAsUser |
Set runAsUser in Amazon S3 API containers’ Security Context | 1001 |
s3.containerSecurityContext.runAsGroup |
Set runAsGroup in Amazon S3 API containers’ Security Context | 1001 |
s3.containerSecurityContext.runAsNonRoot |
Set runAsNonRoot in Amazon S3 API containers’ Security Context | true |
s3.containerSecurityContext.readOnlyRootFilesystem |
Set readOnlyRootFilesystem in Amazon S3 API containers’ Security Context | true |
s3.containerSecurityContext.privileged |
Set privileged in Amazon S3 API containers’ Security Context | false |
s3.containerSecurityContext.allowPrivilegeEscalation |
Set allowPrivilegeEscalation in Amazon S3 API containers’ Security Context | false |
s3.containerSecurityContext.capabilities.drop |
List of capabilities to be dropped in Amazon S3 API container | ["ALL"] |
s3.containerSecurityContext.seccompProfile.type |
Set seccomp profile in Amazon S3 API container | RuntimeDefault |
s3.logLevel |
Amazon S3 API log level [0|1|2|3|4] | 1 |
s3.bindAddress |
Amazon S3 API bind address | 0.0.0.0 |
s3.auth.enabled |
Enable Amazon S3 API authentication | false |
s3.auth.existingSecret |
Existing secret with Amazon S3 API authentication configuration | "" |
s3.auth.existingSecretConfigKey |
Key of the above existing secret with S3 API authentication configuration, defaults to config.json |
"" |
s3.auth.adminAccessKeyId |
Amazon S3 API access key with admin privileges. Ignored if s3.auth.existingSecret is set |
"" |
s3.auth.adminSecretAccessKey |
Amazon S3 API secret key with admin privileges. Ignored if s3.auth.existingSecret is set |
"" |
s3.auth.readAccessKeyId |
Amazon S3 API access key with read-only privileges. Ignored if s3.auth.existingSecret is set |
"" |
s3.auth.readSecretAccessKey |
Amazon S3 API secret key with read-only privileges. Ignored if s3.auth.existingSecret is set |
"" |
s3.command |
Override default Amazon S3 API container command (useful when using custom images) | [] |
s3.args |
Override default Amazon S3 API container args (useful when using custom images) | [] |
s3.automountServiceAccountToken |
Mount Service Account token in Amazon S3 API pods | false |
s3.hostAliases |
Amazon S3 API pods host aliases | [] |
s3.statefulsetAnnotations |
Annotations for Amazon S3 API statefulset | {} |
s3.podLabels |
Extra labels for Amazon S3 API pods | {} |
s3.podAnnotations |
Annotations for Amazon S3 API pods | {} |
s3.podAffinityPreset |
Pod affinity preset. Ignored if s3.affinity is set. Allowed values: soft or hard |
"" |
s3.podAntiAffinityPreset |
Pod anti-affinity preset. Ignored if s3.affinity is set. Allowed values: soft or hard |
soft |
s3.nodeAffinityPreset.type |
Node affinity preset type. Ignored if s3.affinity is set. Allowed values: soft or hard |
"" |
s3.nodeAffinityPreset.key |
Node label key to match. Ignored if s3.affinity is set |
"" |
s3.nodeAffinityPreset.values |
Node label values to match. Ignored if s3.affinity is set |
[] |
s3.affinity |
Affinity for Amazon S3 API pods assignment | {} |
s3.nodeSelector |
Node labels for Amazon S3 API pods assignment | {} |
s3.tolerations |
Tolerations for Amazon S3 API pods assignment | [] |
s3.updateStrategy.type |
Amazon S3 API deployment strategy type | RollingUpdate |
s3.priorityClassName |
Amazon S3 API pods’ priorityClassName | "" |
s3.topologySpreadConstraints |
Topology Spread Constraints for Amazon S3 API pod assignment spread across your cluster among failure-domains | [] |
s3.schedulerName |
Name of the k8s scheduler (other than default) for Amazon S3 API pods | "" |
s3.terminationGracePeriodSeconds |
Seconds Amazon S3 API pods need to terminate gracefully | "" |
s3.lifecycleHooks |
Lifecycle hooks for Amazon S3 API containers to automate configuration before or after startup | {} |
s3.extraEnvVars |
Array with extra environment variables to add to Amazon S3 API containers | [] |
s3.extraEnvVarsCM |
Name of existing ConfigMap containing extra env vars for Amazon S3 API containers | "" |
s3.extraEnvVarsSecret |
Name of existing Secret containing extra env vars for Amazon S3 API containers | "" |
s3.extraVolumes |
Optionally specify extra list of additional volumes for the Amazon S3 API pods | [] |
s3.extraVolumeMounts |
Optionally specify extra list of additional volumeMounts for the Amazon S3 API containers | [] |
s3.sidecars |
Add additional sidecar containers to the Amazon S3 API pods | [] |
s3.initContainers |
Add additional init containers to the Amazon S3 API pods | [] |
s3.pdb.create |
Enable/disable a Pod Disruption Budget creation | true |
s3.pdb.minAvailable |
Minimum number/percentage of pods that should remain scheduled | "" |
s3.pdb.maxUnavailable |
Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both s3.pdb.minAvailable and s3.pdb.maxUnavailable are empty. |
"" |
s3.autoscaling.enabled |
Enable autoscaling for s3 | false |
s3.autoscaling.minReplicas |
Minimum number of s3 replicas | "" |
s3.autoscaling.maxReplicas |
Maximum number of s3 replicas | "" |
s3.autoscaling.targetCPU |
Target CPU utilization percentage | "" |
s3.autoscaling.targetMemory |
Target Memory utilization percentage | "" |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `s3.service.type` | Amazon S3 API service type | `ClusterIP` |
| `s3.service.ports.http` | Amazon S3 API service HTTP port | `8333` |
| `s3.service.ports.grpc` | Amazon S3 API service GRPC port | `18333` |
| `s3.service.nodePorts.http` | Node port for HTTP | `""` |
| `s3.service.nodePorts.grpc` | Node port for GRPC | `""` |
| `s3.service.clusterIP` | Amazon S3 API service Cluster IP | `""` |
| `s3.service.loadBalancerIP` | Amazon S3 API service Load Balancer IP | `""` |
| `s3.service.loadBalancerSourceRanges` | Amazon S3 API service Load Balancer sources | `[]` |
| `s3.service.externalTrafficPolicy` | Amazon S3 API service external traffic policy | `Cluster` |
| `s3.service.annotations` | Additional custom annotations for Amazon S3 API service | `{}` |
| `s3.service.extraPorts` | Extra ports to expose in Amazon S3 API service (normally used with the `sidecars` value) | `[]` |
| `s3.service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `s3.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `s3.service.headless.annotations` | Annotations for the headless service | `{}` |
| `s3.networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created for Amazon S3 API | `true` |
| `s3.networkPolicy.allowExternal` | Don't require server label for connections | `true` |
| `s3.networkPolicy.allowExternalEgress` | Allow the Amazon S3 API pods to access any range of port and all destinations | `true` |
| `s3.networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `s3.networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy (ignored if `allowExternalEgress=true`) | `[]` |
| `s3.networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
| `s3.networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces | `{}` |
| `s3.ingress.enabled` | Enable ingress record generation for Amazon S3 API | `false` |
| `s3.ingress.pathType` | Ingress path type | `ImplementationSpecific` |
| `s3.ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `""` |
| `s3.ingress.hostname` | Default host for the ingress record | `s3.seaweedfs.local` |
| `s3.ingress.ingressClassName` | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | `""` |
| `s3.ingress.path` | Default path for the ingress record | `/` |
| `s3.ingress.annotations` | Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | `{}` |
| `s3.ingress.tls` | Enable TLS configuration for the host defined at `ingress.hostname` parameter | `false` |
| `s3.ingress.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `s3.ingress.extraHosts` | An array with additional hostname(s) to be covered with the ingress record | `[]` |
| `s3.ingress.extraPaths` | An array with additional arbitrary paths that may need to be added to the ingress under the main host | `[]` |
| `s3.ingress.extraTls` | TLS configuration for additional hostname(s) to be covered with this ingress record | `[]` |
| `s3.ingress.secrets` | Custom TLS certificates as secrets | `[]` |
| `s3.ingress.extraRules` | Additional rules to be covered with this ingress record | `[]` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `s3.metrics.enabled` | Enable the export of Prometheus metrics | `false` |
| `s3.metrics.service.port` | Metrics service port | `9327` |
| `s3.metrics.service.annotations` | Annotations for the metrics service | `{}` |
| `s3.metrics.serviceMonitor.enabled` | If `true`, creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`) | `false` |
| `s3.metrics.serviceMonitor.namespace` | Namespace in which Prometheus is running | `""` |
| `s3.metrics.serviceMonitor.annotations` | Additional custom annotations for the ServiceMonitor | `{}` |
| `s3.metrics.serviceMonitor.labels` | Extra labels for the ServiceMonitor | `{}` |
| `s3.metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in Prometheus | `""` |
| `s3.metrics.serviceMonitor.honorLabels` | honorLabels chooses the metric's labels on collisions with target labels | `false` |
| `s3.metrics.serviceMonitor.interval` | Interval at which metrics should be scraped | `""` |
| `s3.metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `""` |
| `s3.metrics.serviceMonitor.metricRelabelings` | Specify additional relabeling of metrics | `[]` |
| `s3.metrics.serviceMonitor.relabelings` | Specify general relabeling | `[]` |
| `s3.metrics.serviceMonitor.selector` | Prometheus instance selector labels | `{}` |
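As an example of the metrics parameters above, a minimal `values.yaml` sketch that exposes S3 API metrics and creates a ServiceMonitor for a Prometheus Operator deployment; the `release` label value is illustrative and must match whatever selector your Prometheus instance uses:

```yaml
s3:
  enabled: true
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      interval: 30s
      labels:
        release: my-prometheus   # illustrative; must match your Prometheus serviceMonitorSelector
```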
| Name | Description | Value |
| ---- | ----------- | ----- |
| `webdav.enabled` | Enable WebDAV deployment | `false` |
| `webdav.replicaCount` | Number of WebDAV replicas to deploy | `1` |
| `webdav.containerPorts.http` | WebDAV HTTP container port (HTTPS if `webdav.tls.enabled` is `true`) | `7333` |
| `webdav.extraContainerPorts` | Optionally specify extra list of additional ports for WebDAV containers | `[]` |
| `webdav.livenessProbe.enabled` | Enable livenessProbe on WebDAV containers | `true` |
| `webdav.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `30` |
| `webdav.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `webdav.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `30` |
| `webdav.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `webdav.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `webdav.readinessProbe.enabled` | Enable readinessProbe on WebDAV containers | `true` |
| `webdav.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `30` |
| `webdav.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `webdav.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `30` |
| `webdav.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `webdav.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `webdav.startupProbe.enabled` | Enable startupProbe on WebDAV containers | `false` |
| `webdav.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `5` |
| `webdav.startupProbe.periodSeconds` | Period seconds for startupProbe | `5` |
| `webdav.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `1` |
| `webdav.startupProbe.failureThreshold` | Failure threshold for startupProbe | `15` |
| `webdav.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `webdav.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `webdav.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `webdav.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `webdav.resourcesPreset` | Set WebDAV container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `webdav.resources` is set (`webdav.resources` is recommended for production). | `nano` |
| `webdav.resources` | Set WebDAV container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `webdav.podSecurityContext.enabled` | Enable WebDAV pods' Security Context | `true` |
| `webdav.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy for WebDAV pods | `Always` |
| `webdav.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface for WebDAV pods | `[]` |
| `webdav.podSecurityContext.supplementalGroups` | Set filesystem extra groups for WebDAV pods | `[]` |
| `webdav.podSecurityContext.fsGroup` | Set fsGroup in WebDAV pods' Security Context | `1001` |
| `webdav.containerSecurityContext.enabled` | Enable WebDAV containers' Security Context | `true` |
| `webdav.containerSecurityContext.seLinuxOptions` | Set SELinux options in WebDAV container | `{}` |
| `webdav.containerSecurityContext.runAsUser` | Set runAsUser in WebDAV containers' Security Context | `1001` |
| `webdav.containerSecurityContext.runAsGroup` | Set runAsGroup in WebDAV containers' Security Context | `1001` |
| `webdav.containerSecurityContext.runAsNonRoot` | Set runAsNonRoot in WebDAV containers' Security Context | `true` |
| `webdav.containerSecurityContext.readOnlyRootFilesystem` | Set readOnlyRootFilesystem in WebDAV containers' Security Context | `true` |
| `webdav.containerSecurityContext.privileged` | Set privileged in WebDAV containers' Security Context | `false` |
| `webdav.containerSecurityContext.allowPrivilegeEscalation` | Set allowPrivilegeEscalation in WebDAV containers' Security Context | `false` |
| `webdav.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped in WebDAV container | `["ALL"]` |
| `webdav.containerSecurityContext.seccompProfile.type` | Set seccomp profile in WebDAV container | `RuntimeDefault` |
| `webdav.logLevel` | WebDAV log level [0\|1\|2\|3\|4] | `1` |
| `webdav.tls.enabled` | Enable TLS transport for WebDAV | `false` |
| `webdav.tls.autoGenerated.enabled` | Enable automatic generation of certificates for TLS | `false` |
| `webdav.tls.autoGenerated.engine` | Mechanism to generate the certificates (allowed values: helm, cert-manager) | `helm` |
| `webdav.tls.autoGenerated.certManager.existingIssuer` | The name of an existing Issuer to use for generating the certificates (only for `cert-manager` engine) | `""` |
| `webdav.tls.autoGenerated.certManager.existingIssuerKind` | Existing Issuer kind, defaults to Issuer (only for `cert-manager` engine) | `""` |
| `webdav.tls.autoGenerated.certManager.keyAlgorithm` | Key algorithm for the certificates (only for `cert-manager` engine) | `RSA` |
| `webdav.tls.autoGenerated.certManager.keySize` | Key size for the certificates (only for `cert-manager` engine) | `2048` |
| `webdav.tls.autoGenerated.certManager.duration` | Duration for the certificates (only for `cert-manager` engine) | `2160h` |
| `webdav.tls.autoGenerated.certManager.renewBefore` | Renewal period for the certificates (only for `cert-manager` engine) | `360h` |
| `webdav.tls.existingSecret` | The name of an existing Secret containing the certificates for TLS | `""` |
| `webdav.tls.cert` | WebDAV certificate for TLS. Ignored if `webdav.tls.existingSecret` is set | `""` |
| `webdav.tls.key` | WebDAV key for TLS. Ignored if `webdav.tls.existingSecret` is set | `""` |
| `webdav.command` | Override default WebDAV container command (useful when using custom images) | `[]` |
| `webdav.args` | Override default WebDAV container args (useful when using custom images) | `[]` |
| `webdav.automountServiceAccountToken` | Mount Service Account token in WebDAV pods | `false` |
| `webdav.hostAliases` | WebDAV pods host aliases | `[]` |
| `webdav.statefulsetAnnotations` | Annotations for WebDAV statefulset | `{}` |
| `webdav.podLabels` | Extra labels for WebDAV pods | `{}` |
| `webdav.podAnnotations` | Annotations for WebDAV pods | `{}` |
| `webdav.podAffinityPreset` | Pod affinity preset. Ignored if `webdav.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `webdav.podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `webdav.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `webdav.nodeAffinityPreset.type` | Node affinity preset type. Ignored if `webdav.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `webdav.nodeAffinityPreset.key` | Node label key to match. Ignored if `webdav.affinity` is set | `""` |
| `webdav.nodeAffinityPreset.values` | Node label values to match. Ignored if `webdav.affinity` is set | `[]` |
| `webdav.affinity` | Affinity for WebDAV pods assignment | `{}` |
| `webdav.nodeSelector` | Node labels for WebDAV pods assignment | `{}` |
| `webdav.tolerations` | Tolerations for WebDAV pods assignment | `[]` |
| `webdav.updateStrategy.type` | WebDAV deployment strategy type | `RollingUpdate` |
| `webdav.priorityClassName` | WebDAV pods' priorityClassName | `""` |
| `webdav.topologySpreadConstraints` | Topology Spread Constraints for WebDAV pod assignment spread across your cluster among failure-domains | `[]` |
| `webdav.schedulerName` | Name of the k8s scheduler (other than default) for WebDAV pods | `""` |
| `webdav.terminationGracePeriodSeconds` | Seconds WebDAV pods need to terminate gracefully | `""` |
| `webdav.lifecycleHooks` | Lifecycle hooks for WebDAV containers to automate configuration before or after startup | `{}` |
| `webdav.extraEnvVars` | Array with extra environment variables to add to WebDAV containers | `[]` |
| `webdav.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for WebDAV containers | `""` |
| `webdav.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for WebDAV containers | `""` |
| `webdav.extraVolumes` | Optionally specify extra list of additional volumes for the WebDAV pods | `[]` |
| `webdav.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the WebDAV containers | `[]` |
| `webdav.sidecars` | Add additional sidecar containers to the WebDAV pods | `[]` |
| `webdav.initContainers` | Add additional init containers to the WebDAV pods | `[]` |
| `webdav.pdb.create` | Enable/disable a Pod Disruption Budget creation | `true` |
| `webdav.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `""` |
| `webdav.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable. Defaults to `1` if both `webdav.pdb.minAvailable` and `webdav.pdb.maxUnavailable` are empty. | `""` |
| `webdav.autoscaling.enabled` | Enable autoscaling for webdav | `false` |
| `webdav.autoscaling.minReplicas` | Minimum number of webdav replicas | `""` |
| `webdav.autoscaling.maxReplicas` | Maximum number of webdav replicas | `""` |
| `webdav.autoscaling.targetCPU` | Target CPU utilization percentage | `""` |
| `webdav.autoscaling.targetMemory` | Target Memory utilization percentage | `""` |
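Combining the TLS parameters above, a minimal `values.yaml` sketch that enables WebDAV with automatically generated certificates via cert-manager; the Issuer name is hypothetical and must reference an Issuer that already exists in your cluster:

```yaml
webdav:
  enabled: true
  tls:
    enabled: true
    autoGenerated:
      enabled: true
      engine: cert-manager
      certManager:
        existingIssuer: my-issuer   # hypothetical cert-manager Issuer name
```

With this configuration the WebDAV container port (7333 by default) serves HTTPS instead of HTTP.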
| Name | Description | Value |
| ---- | ----------- | ----- |
| `webdav.service.type` | WebDAV service type | `ClusterIP` |
| `webdav.service.ports.http` | WebDAV service HTTP port (HTTPS if `webdav.tls.enabled` is `true`) | `7333` |
| `webdav.service.nodePorts.http` | Node port for HTTP (HTTPS if `webdav.tls.enabled` is `true`) | `""` |
| `webdav.service.clusterIP` | WebDAV service Cluster IP | `""` |
| `webdav.service.loadBalancerIP` | WebDAV service Load Balancer IP | `""` |
| `webdav.service.loadBalancerSourceRanges` | WebDAV service Load Balancer sources | `[]` |
| `webdav.service.externalTrafficPolicy` | WebDAV service external traffic policy | `Cluster` |
| `webdav.service.annotations` | Additional custom annotations for WebDAV service | `{}` |
| `webdav.service.extraPorts` | Extra ports to expose in WebDAV service (normally used with the `sidecars` value) | `[]` |
| `webdav.service.sessionAffinity` | Control where client requests go, to the same pod or round-robin | `None` |
| `webdav.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `webdav.service.headless.annotations` | Annotations for the headless service | `{}` |
| `webdav.networkPolicy.enabled` | Specifies whether a NetworkPolicy should be created for WebDAV | `true` |
| `webdav.networkPolicy.allowExternal` | Don't require server label for connections | `true` |
| `webdav.networkPolicy.allowExternalEgress` | Allow the WebDAV pods to access any range of port and all destinations | `true` |
| `webdav.networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `webdav.networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy (ignored if `allowExternalEgress=true`) | `[]` |
| `webdav.networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
| `webdav.networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces | `{}` |
| `webdav.ingress.enabled` | Enable ingress record generation for WebDAV | `false` |
| `webdav.ingress.pathType` | Ingress path type | `ImplementationSpecific` |
| `webdav.ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `""` |
| `webdav.ingress.hostname` | Default host for the ingress record | `webdav.seaweedfs.local` |
| `webdav.ingress.ingressClassName` | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | `""` |
| `webdav.ingress.path` | Default path for the ingress record | `/` |
| `webdav.ingress.annotations` | Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | `{}` |
| `webdav.ingress.tls` | Enable TLS configuration for the host defined at `ingress.hostname` parameter | `false` |
| `webdav.ingress.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `webdav.ingress.extraHosts` | An array with additional hostname(s) to be covered with the ingress record | `[]` |
| `webdav.ingress.extraPaths` | An array with additional arbitrary paths that may need to be added to the ingress under the main host | `[]` |
| `webdav.ingress.extraTls` | TLS configuration for additional hostname(s) to be covered with this ingress record | `[]` |
| `webdav.ingress.secrets` | Custom TLS certificates as secrets | `[]` |
| `webdav.ingress.extraRules` | Additional rules to be covered with this ingress record | `[]` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `volumePermissions.enabled` | Enable init container that changes the owner/group of the PV mount point to `runAsUser:fsGroup` | `false` |
| `volumePermissions.image.registry` | OS Shell + Utility image registry | `REGISTRY_NAME` |
| `volumePermissions.image.repository` | OS Shell + Utility image repository | `REPOSITORY_NAME/os-shell` |
| `volumePermissions.image.pullPolicy` | OS Shell + Utility image pull policy | `IfNotPresent` |
| `volumePermissions.image.pullSecrets` | OS Shell + Utility image pull secrets | `[]` |
| `volumePermissions.resourcesPreset` | Set init container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `volumePermissions.resources` is set (`volumePermissions.resources` is recommended for production). | `nano` |
| `volumePermissions.resources` | Set init container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `volumePermissions.containerSecurityContext.enabled` | Enable init container's Security Context | `true` |
| `volumePermissions.containerSecurityContext.seLinuxOptions` | Set SELinux options in init container | `{}` |
| `volumePermissions.containerSecurityContext.runAsUser` | Set init container's Security Context runAsUser | `0` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | The name of the ServiceAccount to use | `""` |
| `serviceAccount.annotations` | Additional Service Account annotations (evaluated as a template) | `{}` |
| `serviceAccount.automountServiceAccountToken` | Automount service account token for the server service account | `false` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `mariadb.enabled` | Deploy a MariaDB server to satisfy the Filer server database requirements | `true` |
| `mariadb.image.registry` | MariaDB image registry | `REGISTRY_NAME` |
| `mariadb.image.repository` | MariaDB image repository | `REPOSITORY_NAME/mariadb` |
| `mariadb.image.digest` | MariaDB image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag | `""` |
| `mariadb.image.pullPolicy` | MariaDB image pull policy | `IfNotPresent` |
| `mariadb.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
| `mariadb.architecture` | MariaDB architecture. Allowed values: `standalone` or `replication` | `standalone` |
| `mariadb.auth.rootPassword` | MariaDB root password | `""` |
| `mariadb.auth.database` | MariaDB custom database | `bitnami_seaweedfs` |
| `mariadb.auth.username` | MariaDB custom user name | `bn_seaweedfs` |
| `mariadb.auth.password` | MariaDB custom user password | `""` |
| `mariadb.auth.usePasswordFiles` | Mount credentials as a file instead of using an environment variable | `false` |
| `mariadb.initdbScripts` | Specify dictionary of scripts to be run at first boot | `{}` |
| `mariadb.primary.persistence.enabled` | Enable persistence on MariaDB using PVC(s) | `true` |
| `mariadb.primary.persistence.storageClass` | Persistent Volume storage class | `""` |
| `mariadb.primary.persistence.accessModes` | Persistent Volume access modes | `[]` |
| `mariadb.primary.persistence.size` | Persistent Volume size | `8Gi` |
| `mariadb.primary.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `primary.resources` is set (`primary.resources` is recommended for production). | `micro` |
| `mariadb.primary.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `postgresql.enabled` | Deploy a PostgreSQL server to satisfy the Filer server database requirements | `false` |
| `postgresql.image.registry` | PostgreSQL image registry | `REGISTRY_NAME` |
| `postgresql.image.repository` | PostgreSQL image repository | `REPOSITORY_NAME/postgresql` |
| `postgresql.image.digest` | PostgreSQL image digest in the way sha256:aa…. Please note this parameter, if set, will override the tag | `""` |
| `postgresql.image.pullPolicy` | PostgreSQL image pull policy | `IfNotPresent` |
| `postgresql.image.pullSecrets` | Specify image pull secrets | `[]` |
| `postgresql.architecture` | PostgreSQL architecture (`standalone` or `replication`) | `standalone` |
| `postgresql.auth.postgresPassword` | Password for the "postgres" admin user. Ignored if `auth.existingSecret` with key `postgres-password` is provided | `""` |
| `postgresql.auth.database` | Name for a custom database to create | `bitnami_seaweedfs` |
| `postgresql.auth.username` | Name for a custom user to create | `bn_seaweedfs` |
| `postgresql.auth.password` | Password for the custom user to create | `some-password` |
| `postgresql.auth.existingSecret` | Name of existing secret to use for PostgreSQL credentials | `""` |
| `postgresql.auth.secretKeys.userPasswordKey` | Name of key in existing secret to use for PostgreSQL credentials. Only used when `auth.existingSecret` is set. | `password` |
| `postgresql.primary.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `postgresql.primary.resources` is set (`postgresql.primary.resources` is recommended for production). | `nano` |
| `postgresql.primary.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `postgresql.primary.initdb.scripts` | Dictionary of initdb scripts | `{}` |
| `postgresql.primary.persistence.enabled` | Enable PostgreSQL Primary data persistence using PVC(s) | `true` |
| `postgresql.primary.persistence.storageClass` | Persistent Volume storage class | `""` |
| `postgresql.primary.persistence.accessModes` | Persistent Volume access modes | `[]` |
| `postgresql.primary.persistence.size` | Persistent Volume size | `8Gi` |
| `externalDatabase.enabled` | Enable external database support | `false` |
| `externalDatabase.store` | Database store (mariadb, postgresql) | `mariadb` |
| `externalDatabase.host` | External Database server host | `localhost` |
| `externalDatabase.port` | External Database server port | `3306` |
| `externalDatabase.user` | External Database username | `bn_seaweedfs` |
| `externalDatabase.password` | External Database user password | `""` |
| `externalDatabase.database` | External Database database name | `bitnami_seaweedfs` |
| `externalDatabase.existingSecret` | The name of an existing secret with database credentials. Evaluated as a template | `""` |
| `externalDatabase.initDatabaseJob.enabled` | Enable the init external database job | `false` |
| `externalDatabase.initDatabaseJob.labels` | Extra labels for the init external database job | `{}` |
| `externalDatabase.initDatabaseJob.annotations` | Extra annotations for the init external database job | `{}` |
| `externalDatabase.initDatabaseJob.backoffLimit` | Set backoff limit of the init external database job | `10` |
| `externalDatabase.initDatabaseJob.containerSecurityContext.enabled` | Enable init external database job containers' Security Context | `true` |
| `externalDatabase.initDatabaseJob.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `externalDatabase.initDatabaseJob.containerSecurityContext.runAsUser` | Set init external database job containers' Security Context runAsUser | `1001` |
| `externalDatabase.initDatabaseJob.containerSecurityContext.runAsGroup` | Set init external database job containers' Security Context runAsGroup | `1001` |
| `externalDatabase.initDatabaseJob.containerSecurityContext.runAsNonRoot` | Set init external database job containers' Security Context runAsNonRoot | `true` |
| `externalDatabase.initDatabaseJob.containerSecurityContext.privileged` | Set init external database job containers' Security Context privileged | `false` |
| `externalDatabase.initDatabaseJob.containerSecurityContext.readOnlyRootFilesystem` | Set init external database job containers' Security Context readOnlyRootFilesystem | `true` |
| `externalDatabase.initDatabaseJob.containerSecurityContext.allowPrivilegeEscalation` | Set init external database job containers' Security Context allowPrivilegeEscalation | `false` |
| `externalDatabase.initDatabaseJob.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `externalDatabase.initDatabaseJob.containerSecurityContext.seccompProfile.type` | Set init external database job containers' Security Context seccomp profile | `RuntimeDefault` |
| `externalDatabase.initDatabaseJob.podSecurityContext.enabled` | Enable init external database job pods' Security Context | `true` |
| `externalDatabase.initDatabaseJob.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `externalDatabase.initDatabaseJob.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `externalDatabase.initDatabaseJob.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `externalDatabase.initDatabaseJob.podSecurityContext.fsGroup` | Set init external database job pod's Security Context fsGroup | `1001` |
| `externalDatabase.initDatabaseJob.resourcesPreset` | Set init external database job container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `externalDatabase.initDatabaseJob.resources` is set (`externalDatabase.initDatabaseJob.resources` is recommended for production). | `micro` |
| `externalDatabase.initDatabaseJob.resources` | Set init external database job container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `externalDatabase.initDatabaseJob.automountServiceAccountToken` | Mount Service Account token in external database job pod | `false` |
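Tying the database parameters together, the following is a minimal `values.yaml` sketch for pointing the Filer at an external PostgreSQL server instead of the bundled MariaDB; the host and Secret name are illustrative placeholders, not values the chart provides:

```yaml
mariadb:
  enabled: false              # disable the bundled database
externalDatabase:
  enabled: true
  store: postgresql
  host: db.example.com        # illustrative database host
  port: 5432
  user: bn_seaweedfs
  database: bitnami_seaweedfs
  existingSecret: seaweedfs-db-credentials   # hypothetical Secret holding the password
```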
The above parameters map to the env variables defined in bitnami/seaweedfs. For more information please refer to the bitnami/seaweedfs image documentation.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
helm install my-release \
--set filer.enabled=true \
--set s3.enabled=true \
oci://REGISTRY_NAME/REPOSITORY_NAME/seaweedfs
Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
The above command enables two optional components of the SeaweedFS chart: the Filer and the Amazon S3 API.
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/seaweedfs
Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`. Tip: You can use the default `values.yaml` as a reference.
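As an example of such a file, a `values.yaml` sketch equivalent to the earlier `--set` command, enabling the Filer and the Amazon S3 API components:

```yaml
# values.yaml
filer:
  enabled: true
s3:
  enabled: true
```

Passing this file with `-f values.yaml` is generally easier to review and version-control than long `--set` chains.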
This major version updates the PostgreSQL subchart to its newest major, 16.0.0, which uses PostgreSQL 17.x. Follow the official instructions to upgrade to 17.x.
This major version adds support for using PostgreSQL as an alternative for MariaDB to comply with Filer database requirements. No breaking changes are expected when upgrading to this version if MariaDB is used.
This major release bumps the MariaDB version to 11.4. Follow the upstream instructions for upgrading from MariaDB 11.3 to 11.4. No major issues are expected during the upgrade.
Find more information about how to deal with common errors related to Bitnami’s Helm charts in this troubleshooting guide.
Copyright © 2024 Broadcom. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.