This topic provides reference information for the Prometheus package.
About Prometheus and Alertmanager
Prometheus (https://prometheus.io/) is a systems and service monitoring system. Prometheus collects metrics from configured targets at given intervals, evaluates rule expressions, and displays the results. Alertmanager is used to trigger alerts when certain observed conditions are met.
Install the Prometheus package:
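For example, with the Tanzu CLI package workflow the installation looks roughly like the following. This is a sketch only: the package version shown is a placeholder, and the target namespace depends on your environment.

```sh
# List the available Prometheus package versions
# (assumes the package repository is already added to the cluster).
tanzu package available list prometheus.tanzu.vmware.com -n tkg-system

# Install the package with a custom data-values file.
# Replace the --version value with one returned by the previous command.
tanzu package install prometheus \
  --package-name prometheus.tanzu.vmware.com \
  --version 2.27.0+vmware.1-tkg.1 \
  --values-file prometheus-data-values.yaml \
  --namespace tkg-system
```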
Prometheus Components
The Prometheus package installs the containers listed in the table on a TKG cluster. The package pulls the containers from the VMware public registry specified in the package repository.
Container | Resource Type | Replicas | Description |
---|---|---|---|
prometheus-alertmanager | Deployment | 1 | Handles alerts sent by client applications such as the Prometheus server. |
prometheus-cadvisor | DaemonSet | 5 | Analyzes and exposes resource usage and performance data from running containers. |
prometheus-kube-state-metrics | Deployment | 1 | Monitors node status and capacity, replicaset compliance, pod, job, and cronjob status, and resource requests and limits. |
prometheus-node-exporter | DaemonSet | 5 | Exporter for hardware and operating-system metrics exposed by the kernel. |
prometheus-pushgateway | Deployment | 1 | A service that lets you push metrics from jobs that cannot be scraped. |
prometheus-server | Deployment | 1 | Provides the core functionality, including scraping, rule processing, and alerting. |
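Once the package reconciles, a quick way to confirm these workloads is to list them in the deployment namespace. The namespace below assumes the default tanzu-system-monitoring; adjust it to the namespace you configured.

```sh
kubectl get deployments,daemonsets -n tanzu-system-monitoring
```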
Prometheus Data Values
The following is an example prometheus-data-values.yaml file.
Note the following:

- Ingress is enabled (ingress: enabled: true).
- Ingress is configured for URLs ending in /alertmanager/ (alertmanager_prefix:) and / (prometheus_prefix:).
- The FQDN for Prometheus is prometheus.system.tanzu (virtual_host_fqdn:).
- Provide your own custom certificates in the Ingress section (tls.crt, tls.key, ca.crt).
- The PVC for Alertmanager is 2 GiB. For storageClassName, provide the default storage policy.
- The PVC for Prometheus is 20 GiB. For storageClassName, provide a vSphere storage policy.
```yaml
namespace: prometheus-monitoring
alertmanager:
  config:
    alertmanager_yml: |
      global: {}
      receivers:
      - name: default-receiver
      templates:
      - '/etc/alertmanager/templates/*.tmpl'
      route:
        group_interval: 5m
        group_wait: 10s
        receiver: default-receiver
        repeat_interval: 3h
  deployment:
    replicas: 1
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    updateStrategy: Recreate
  pvc:
    accessMode: ReadWriteOnce
    storage: 2Gi
    storageClassName: default
  service:
    port: 80
    targetPort: 9093
    type: ClusterIP
ingress:
  alertmanager_prefix: /alertmanager/
  alertmanagerServicePort: 80
  enabled: true
  prometheus_prefix: /
  prometheusServicePort: 80
  tlsCertificate:
    ca.crt: |
      -----BEGIN CERTIFICATE-----
      MIIFczCCA1ugAwIBAgIQTYJITQ3SZ4BBS9UzXfJIuTANBgkqhkiG9w0BAQsFADBM
      ...
      w0oGuTTBfxSMKs767N3G1q5tz0mwFpIqIQtXUSmaJ+9p7IkpWcThLnyYYo1IpWm/
      ZHtjzZMQVA==
      -----END CERTIFICATE-----
    tls.crt: |
      -----BEGIN CERTIFICATE-----
      MIIHxTCCBa2gAwIBAgITIgAAAAQnSpH7QfxTKAAAAAAABDANBgkqhkiG9w0BAQsF
      ...
      YYsIjp7/f+Pk1DjzWx8JIAbzItKLucDreAmmDXqk+DrBP9LYqtmjB0n7nSErgK8G
      sA3kGCJdOkI0kgF10gsinaouG2jVlwNOsw==
      -----END CERTIFICATE-----
    tls.key: |
      -----BEGIN PRIVATE KEY-----
      MIIJRAIBADANBgkqhkiG9w0BAQEFAASCCS4wggkqAgEAAoICAQDOGHT8I12KyQGS
      ...
      l1NzswracGQIzo03zk/X3Z6P2YOea4BkZ0Iwh34wOHJnTkfEeSx6y+oSFMcFRthT
      yfFCZUk/sVCc/C1a4VigczXftUGiRrTR
      -----END PRIVATE KEY-----
  virtual_host_fqdn: prometheus.system.tanzu
kube_state_metrics:
  deployment:
    replicas: 1
  service:
    port: 80
    targetPort: 8080
    telemetryPort: 81
    telemetryTargetPort: 8081
    type: ClusterIP
node_exporter:
  daemonset:
    hostNetwork: false
    updatestrategy: RollingUpdate
  service:
    port: 9100
    targetPort: 9100
    type: ClusterIP
prometheus:
  pspNames: "vmware-system-restricted"
  config:
    alerting_rules_yml: |
      {}
    alerts_yml: |
      {}
    prometheus_yml: |
      global:
        evaluation_interval: 1m
        scrape_interval: 1m
        scrape_timeout: 10s
      rule_files:
      - /etc/config/alerting_rules.yml
      - /etc/config/recording_rules.yml
      - /etc/config/alerts
      - /etc/config/rules
      scrape_configs:
      - job_name: 'prometheus'
        scrape_interval: 5s
        static_configs:
        - targets: ['localhost:9090']
      - job_name: 'kube-state-metrics'
        static_configs:
        - targets: ['prometheus-kube-state-metrics.prometheus.svc.cluster.local:8080']
      - job_name: 'node-exporter'
        static_configs:
        - targets: ['prometheus-node-exporter.prometheus.svc.cluster.local:9100']
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: kubernetes_pod_name
      - job_name: kubernetes-nodes-cadvisor
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - replacement: kubernetes.default.svc:443
          target_label: __address__
        - regex: (.+)
          replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
          source_labels:
          - __meta_kubernetes_node_name
          target_label: __metrics_path__
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      - job_name: kubernetes-apiservers
        kubernetes_sd_configs:
        - role: endpoints
        relabel_configs:
        - action: keep
          regex: default;kubernetes;https
          source_labels:
          - __meta_kubernetes_namespace
          - __meta_kubernetes_service_name
          - __meta_kubernetes_endpoint_port_name
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      alerting:
        alertmanagers:
        - scheme: http
          static_configs:
          - targets:
            - alertmanager.prometheus.svc:80
        - kubernetes_sd_configs:
          - role: pod
          relabel_configs:
          - source_labels: [__meta_kubernetes_namespace]
            regex: default
            action: keep
          - source_labels: [__meta_kubernetes_pod_label_app]
            regex: prometheus
            action: keep
          - source_labels: [__meta_kubernetes_pod_label_component]
            regex: alertmanager
            action: keep
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_probe]
            regex: .*
            action: keep
          - source_labels: [__meta_kubernetes_pod_container_port_number]
            regex:
            action: drop
    recording_rules_yml: |
      groups:
      - name: kube-apiserver.rules
        interval: 3m
        rules:
        - expr: |2
            (
              (
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[1d]))
                - (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[1d])) or vector(0))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[1d]))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[1d]))
                )
              )
              # errors
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[1d]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[1d]))
          labels:
            verb: read
          record: apiserver_request:burnrate1d
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[1h]))
                - (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[1h])) or vector(0))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[1h]))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[1h]))
                )
              )
              # errors
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[1h]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[1h]))
          labels:
            verb: read
          record: apiserver_request:burnrate1h
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[2h]))
                - (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[2h])) or vector(0))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[2h]))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[2h]))
                )
              )
              # errors
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[2h]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[2h]))
          labels:
            verb: read
          record: apiserver_request:burnrate2h
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[30m]))
                - (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[30m])) or vector(0))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[30m]))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[30m]))
                )
              )
              # errors
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[30m]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[30m]))
          labels:
            verb: read
          record: apiserver_request:burnrate30m
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[3d]))
                - (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[3d])) or vector(0))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[3d]))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[3d]))
                )
              )
              # errors
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[3d]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[3d]))
          labels:
            verb: read
          record: apiserver_request:burnrate3d
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[5m]))
                - (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[5m])) or vector(0))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[5m]))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[5m]))
                )
              )
              # errors
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[5m]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[5m]))
          labels:
            verb: read
          record: apiserver_request:burnrate5m
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[6h]))
                - (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[6h])) or vector(0))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[6h]))
                  + sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[6h]))
                )
              )
              # errors
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[6h]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[6h]))
          labels:
            verb: read
          record: apiserver_request:burnrate6h
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[1d]))
                - sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[1d]))
              )
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[1d]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[1d]))
          labels:
            verb: write
          record: apiserver_request:burnrate1d
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[1h]))
                - sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[1h]))
              )
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[1h]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[1h]))
          labels:
            verb: write
          record: apiserver_request:burnrate1h
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[2h]))
                - sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[2h]))
              )
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[2h]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[2h]))
          labels:
            verb: write
          record: apiserver_request:burnrate2h
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[30m]))
                - sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[30m]))
              )
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[30m]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[30m]))
          labels:
            verb: write
          record: apiserver_request:burnrate30m
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[3d]))
                - sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[3d]))
              )
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[3d]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[3d]))
          labels:
            verb: write
          record: apiserver_request:burnrate3d
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[5m]))
                - sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[5m]))
              )
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[5m]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[5m]))
          labels:
            verb: write
          record: apiserver_request:burnrate5m
        - expr: |2
            (
              ( # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[6h]))
                - sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[6h]))
              )
              + sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[6h]))
            )
            / sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[6h]))
          labels:
            verb: write
          record: apiserver_request:burnrate6h
        - expr: |
            sum by (code,resource) (rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[5m]))
          labels:
            verb: read
          record: code_resource:apiserver_request_total:rate5m
        - expr: |
            sum by (code,resource) (rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[5m]))
          labels:
            verb: write
          record: code_resource:apiserver_request_total:rate5m
        - expr: |
            histogram_quantile(0.99, sum by (le, resource) (rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET"}[5m]))) > 0
          labels:
            quantile: "0.99"
            verb: read
          record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
        - expr: |
            histogram_quantile(0.99, sum by (le, resource) (rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[5m]))) > 0
          labels:
            quantile: "0.99"
            verb: write
          record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
        - expr: |2
            sum(rate(apiserver_request_duration_seconds_sum{subresource!="log",verb!~"LIST|WATCH|WATCHLIST|DELETECOLLECTION|PROXY|CONNECT"}[5m])) without(instance, pod)
            /
            sum(rate(apiserver_request_duration_seconds_count{subresource!="log",verb!~"LIST|WATCH|WATCHLIST|DELETECOLLECTION|PROXY|CONNECT"}[5m])) without(instance, pod)
          record: cluster:apiserver_request_duration_seconds:mean5m
        - expr: |
            histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",subresource!="log",verb!~"LIST|WATCH|WATCHLIST|DELETECOLLECTION|PROXY|CONNECT"}[5m])) without(instance, pod))
          labels:
            quantile: "0.99"
          record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
        - expr: |
            histogram_quantile(0.9, sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",subresource!="log",verb!~"LIST|WATCH|WATCHLIST|DELETECOLLECTION|PROXY|CONNECT"}[5m])) without(instance, pod))
          labels:
            quantile: "0.9"
          record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
        - expr: |
            histogram_quantile(0.5, sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",subresource!="log",verb!~"LIST|WATCH|WATCHLIST|DELETECOLLECTION|PROXY|CONNECT"}[5m])) without(instance, pod))
          labels:
            quantile: "0.5"
          record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
      - interval: 3m
        name: kube-apiserver-availability.rules
        rules:
        - expr: |2
            1 - (
              (
                # write too slow
                sum(increase(apiserver_request_duration_seconds_count{verb=~"POST|PUT|PATCH|DELETE"}[30d]))
                - sum(increase(apiserver_request_duration_seconds_bucket{verb=~"POST|PUT|PATCH|DELETE",le="1"}[30d]))
              )
              + (
                # read too slow
                sum(increase(apiserver_request_duration_seconds_count{verb=~"LIST|GET"}[30d]))
                - (
                  (sum(increase(apiserver_request_duration_seconds_bucket{verb=~"LIST|GET",scope=~"resource|",le="0.1"}[30d])) or vector(0))
                  + sum(increase(apiserver_request_duration_seconds_bucket{verb=~"LIST|GET",scope="namespace",le="0.5"}[30d]))
                  + sum(increase(apiserver_request_duration_seconds_bucket{verb=~"LIST|GET",scope="cluster",le="5"}[30d]))
                )
              )
              # errors
              + sum(code:apiserver_request_total:increase30d{code=~"5.."} or vector(0))
            )
            / sum(code:apiserver_request_total:increase30d)
          labels:
            verb: all
          record: apiserver_request:availability30d
        - expr: |2
            1 - (
              sum(increase(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[30d]))
              - (
                # too slow
                (sum(increase(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[30d])) or vector(0))
                + sum(increase(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[30d]))
                + sum(increase(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[30d]))
              )
              # errors
              + sum(code:apiserver_request_total:increase30d{verb="read",code=~"5.."} or vector(0))
            )
            / sum(code:apiserver_request_total:increase30d{verb="read"})
          labels:
            verb: read
          record: apiserver_request:availability30d
        - expr: |2
            1 - (
              ( # too slow
                sum(increase(apiserver_request_duration_seconds_count{verb=~"POST|PUT|PATCH|DELETE"}[30d]))
                - sum(increase(apiserver_request_duration_seconds_bucket{verb=~"POST|PUT|PATCH|DELETE",le="1"}[30d]))
              )
              # errors
              + sum(code:apiserver_request_total:increase30d{verb="write",code=~"5.."} or vector(0))
            )
            / sum(code:apiserver_request_total:increase30d{verb="write"})
          labels:
            verb: write
          record: apiserver_request:availability30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="LIST",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="GET",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="POST",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PUT",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PATCH",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="DELETE",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="LIST",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="GET",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="POST",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PUT",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PATCH",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="DELETE",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="LIST",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="GET",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="POST",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PUT",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PATCH",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="DELETE",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="LIST",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="GET",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="POST",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PUT",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PATCH",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="DELETE",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code) (code_verb:apiserver_request_total:increase30d{verb=~"LIST|GET"})
          labels:
            verb: read
          record: code:apiserver_request_total:increase30d
        - expr: |
            sum by (code) (code_verb:apiserver_request_total:increase30d{verb=~"POST|PUT|PATCH|DELETE"})
          labels:
            verb: write
          record: code:apiserver_request_total:increase30d
    rules_yml: |
      {}
  deployment:
    configmapReload:
      containers:
        args:
        - --volume-dir=/etc/config
        - --webhook-url=http://127.0.0.1:9090/-/reload
    containers:
      args:
      - --storage.tsdb.retention.time=42d
      - --config.file=/etc/config/prometheus.yml
      - --storage.tsdb.path=/data
      - --web.console.libraries=/etc/prometheus/console_libraries
      - --web.console.templates=/etc/prometheus/consoles
      - --web.enable-lifecycle
    replicas: 1
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    updateStrategy: Recreate
  pvc:
    accessMode: ReadWriteOnce
    storage: 20Gi
    storageClassName: default
  service:
    port: 80
    targetPort: 9090
    type: ClusterIP
pushgateway:
  deployment:
    replicas: 1
  service:
    port: 9091
    targetPort: 9091
    type: ClusterIP
```
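If you edit the data values after installation, one way to apply the change is to hand the updated file back to the installed package (a sketch, assuming the Tanzu CLI workflow and that the package was installed as prometheus in the tkg-system namespace):

```sh
tanzu package installed update prometheus \
  --values-file prometheus-data-values.yaml \
  --namespace tkg-system
```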
Prometheus Configuration
The Prometheus configuration is set in the prometheus-data-values.yaml file. The following table lists and describes the available parameters.
Parameter | Description | Type | Default |
---|---|---|---|
monitoring.namespace | Namespace in which Prometheus will be deployed | string | tanzu-system-monitoring |
monitoring.create_namespace | Flag indicating whether to create the namespace specified by monitoring.namespace | Boolean | false |
monitoring.prometheus_server.config.prometheus_yaml | Kubernetes cluster monitor configuration details to pass to Prometheus | YAML file | prometheus.yaml |
monitoring.prometheus_server.config.alerting_rules_yaml | Detailed alert rules defined in Prometheus | YAML file | alerting_rules.yaml |
monitoring.prometheus_server.config.recording_rules_yaml | Detailed recording rules defined in Prometheus | YAML file | recording_rules.yaml |
monitoring.prometheus_server.service.type | Type of service used to expose Prometheus. Supported value: ClusterIP | string | ClusterIP |
monitoring.prometheus_server.enable_alerts.kubernetes_api | Enable SLO alerts for the Kubernetes API in Prometheus | Boolean | true |
monitoring.prometheus_server.sc.aws_type | AWS type defined for the storageclass on AWS | string | gp2 |
monitoring.prometheus_server.sc.aws_fsType | AWS file system type defined for the storageclass on AWS | string | ext4 |
monitoring.prometheus_server.sc.allowVolumeExpansion | Whether volume expansion is allowed for the storageclass on AWS | Boolean | true |
monitoring.prometheus_server.pvc.annotations | Storage class annotations | map | {} |
monitoring.prometheus_server.pvc.storage_class | Storage class to use for the persistent volume claim. By default this parameter is null and the default provisioner is used | string | null |
monitoring.prometheus_server.pvc.accessMode | Access mode for the persistent volume claim. Supported values: ReadWriteOnce, ReadOnlyMany, ReadWriteMany | string | ReadWriteOnce |
monitoring.prometheus_server.pvc.storage | Storage size for the persistent volume claim | string | 8Gi |
monitoring.prometheus_server.deployment.replicas | Number of prometheus replicas | integer | 1 |
monitoring.prometheus_server.image.repository | Location of the repository that contains the Prometheus image. The default is the public VMware registry. Change this value if you are using a private repository (for example, an air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.prometheus_server.image.name | Name of the Prometheus image | string | prometheus |
monitoring.prometheus_server.image.tag | Prometheus image tag. You may need to update this value if you upgrade the version. | string | v2.17.1_vmware.1 |
monitoring.prometheus_server.image.pullPolicy | Prometheus image pull policy | string | IfNotPresent |
monitoring.alertmanager.config.slack_demo | Slack notification configuration for Alertmanager | string | slack_demo: name: slack_demo slack_configs: - api_url: https://hooks.slack.com channel: '#alertmanager-test' |
monitoring.alertmanager.config.email_receiver | Email notification configuration for Alertmanager | string | email_receiver: name: email-receiver email_configs: - to: [email protected] send_resolved: false from: [email protected] smarthost: smtp.example.com:25 require_tls: false |
monitoring.alertmanager.service.type | Type of service used to expose Alertmanager. Supported value: ClusterIP | string | ClusterIP |
monitoring.alertmanager.image.repository | Location of the repository that contains the Alertmanager image. The default is the public VMware registry. Change this value if you are using a private repository (for example, an air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.alertmanager.image.name | Name of the Alertmanager image | string | alertmanager |
monitoring.alertmanager.image.tag | Alertmanager image tag. You may need to update this value if you upgrade the version. | string | v0.20.0_vmware.1 |
monitoring.alertmanager.image.pullPolicy | Alertmanager image pull policy | string | IfNotPresent |
monitoring.alertmanager.pvc.annotations | Storage class annotations | map | {} |
monitoring.alertmanager.pvc.storage_class | Storage class to use for the persistent volume claim. By default this parameter is null and the default provisioner is used. | string | null |
monitoring.alertmanager.pvc.accessMode | Access mode for the persistent volume claim. Supported values: ReadWriteOnce, ReadOnlyMany, ReadWriteMany | string | ReadWriteOnce |
monitoring.alertmanager.pvc.storage | Storage size for the persistent volume claim | string | 2Gi |
monitoring.alertmanager.deployment.replicas | Number of alertmanager replicas | integer | 1 |
monitoring.kube_state_metrics.image.repository | Repository that contains the kube-state-metrics image. The default is the public VMware registry. Change this value if you are using a private repository (for example, an air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.kube_state_metrics.image.name | Name of the kube-state-metrics image | string | kube-state-metrics |
monitoring.kube_state_metrics.image.tag | kube-state-metrics image tag. You may need to update this value if you upgrade the version. | string | v1.9.5_vmware.1 |
monitoring.kube_state_metrics.image.pullPolicy | kube-state-metrics image pull policy | string | IfNotPresent |
monitoring.kube_state_metrics.deployment.replicas | Number of kube-state-metrics replicas | integer | 1 |
monitoring.node_exporter.image.repository | Repository that contains the node-exporter image. The default is the public VMware registry. Change this value if you are using a private repository (for example, an air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.node_exporter.image.name | Name of the node-exporter image | string | node-exporter |
monitoring.node_exporter.image.tag | node-exporter image tag. You may need to update this value if you upgrade the version. | string | v0.18.1_vmware.1 |
monitoring.node_exporter.image.pullPolicy | node-exporter image pull policy | string | IfNotPresent |
monitoring.node_exporter.hostNetwork | If set to hostNetwork: true, the pod can use the node's network namespace and network resources. | Boolean | false |
monitoring.node_exporter.deployment.replicas | Number of node-exporter replicas | integer | 1 |
monitoring.pushgateway.image.repository | Repository that contains the pushgateway image. The default is the public VMware registry. Change this value if you are using a private repository (for example, an air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.pushgateway.image.name | Name of the pushgateway image | string | pushgateway |
monitoring.pushgateway.image.tag | pushgateway image tag. You may need to update this value if you upgrade the version. | string | v1.2.0_vmware.1 |
monitoring.pushgateway.image.pullPolicy | pushgateway image pull policy | string | IfNotPresent |
monitoring.pushgateway.deployment.replicas | Number of pushgateway replicas | integer | 1 |
monitoring.cadvisor.image.repository | Repository that contains the cadvisor image. The default is the public VMware registry. Change this value if you are using a private repository (for example, an air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.cadvisor.image.name | Name of the cadvisor image | string | cadvisor |
monitoring.cadvisor.image.tag | cadvisor image tag. You may need to update this value if you upgrade the version. | string | v0.36.0_vmware.1 |
monitoring.cadvisor.image.pullPolicy | cadvisor image pull policy | string | IfNotPresent |
monitoring.cadvisor.deployment.replicas | Number of cadvisor replicas | integer | 1 |
monitoring.ingress.enabled | Enable/disable ingress for prometheus and alertmanager. To use ingress, set this field to true | Boolean | false |
monitoring.ingress.virtual_host_fqdn | Hostname used to access Prometheus and Alertmanager | string | prometheus.system.tanzu |
monitoring.ingress.prometheus_prefix | Path prefix for prometheus | string | / |
monitoring.ingress.alertmanager_prefix | Path prefix for alertmanager | string | /alertmanager/ |
monitoring.ingress.tlsCertificate.tls.crt | Optional certificate to use for ingress if you want to use your own TLS certificate. A self-signed certificate is generated by default | string | Generated cert |
monitoring.ingress.tlsCertificate.tls.key | Optional certificate private key to use for ingress if you want to use your own TLS certificate. | string | Generated cert key |
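For instance, a data-values file that overrides only a handful of these parameters might look like the following sketch. It follows the monitoring.* paths from the table above; the storage class name is an assumption to replace with one that exists in your cluster.

```yaml
monitoring:
  namespace: tanzu-system-monitoring
  prometheus_server:
    pvc:
      storage_class: vsphere-default   # assumption: use a storage class from your cluster
      storage: 20Gi
  ingress:
    enabled: true
    virtual_host_fqdn: prometheus.system.tanzu
```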
The following parameters apply to the Prometheus configuration file (prometheus_yml):

Parameter | Description | Type | Default |
---|---|---|---|
evaluation_interval | How frequently to evaluate rules | duration | 1m |
scrape_interval | How frequently to scrape targets | duration | 1m |
scrape_timeout | How long until a scrape request times out | duration | 10s |
rule_files | Rule files specify a list of globs. Rules and alerts are read from all matching files | YAML file | |
scrape_configs | A list of scrape configurations. | list | |
job_name | The job name assigned to scraped metrics by default | string | |
kubernetes_sd_configs | A list of Kubernetes service discovery configurations. | list | |
relabel_configs | A list of target relabel configurations. | list | |
action | Action to perform based on the regex matching. | string | |
regex | Regular expression against which the extracted value is matched. | string | |
source_labels | The source labels select values from existing labels. | string | |
scheme | Configures the protocol scheme used for requests. | string | |
tls_config | Configures the scrape request's TLS settings. | string | |
ca_file | CA certificate with which to validate the API server certificate. | filename | |
insecure_skip_verify | Disable validation of the server certificate. | Boolean | |
bearer_token_file | Optional bearer token file authentication information. | filename | |
replacement | Replacement value against which a regex replace is performed if the regular expression matches. | string | |
target_label | Label to which the resulting value is written in a replace action. | string | |
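As a small illustration of how these fields combine, the following scrape job (a sketch; the job name is hypothetical) keeps only pods that carry the prometheus.io/scrape=true annotation and rewrites the metrics path from an annotation:

```yaml
scrape_configs:
- job_name: 'annotated-pods'   # hypothetical job name
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Keep only pods that opt in to scraping via the annotation.
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # If prometheus.io/path is set, use it as the metrics path.
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
```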
The following parameters apply to the Alertmanager configuration file (alertmanager_yml):

Parameter | Description | Type | Default |
---|---|---|---|
resolve_timeout | ResolveTimeout is the default value used by alertmanager if an alert does not include EndsAt | duration | 5m |
smtp_smarthost | The SMTP host through which emails are sent. | string | |
slack_api_url | The Slack webhook URL. | string | global.slack_api_url |
pagerduty_url | The pagerduty URL to which API requests are sent. | string | global.pagerduty_url |
templates | Files from which custom notification template definitions are read | file path | |
group_by | Group alerts by label | string | |
group_interval | Time to wait before sending a notification about new alerts that are added to a group | duration | 5m |
group_wait | How long to initially wait to send a notification for a group of alerts | duration | 30s |
repeat_interval | How long to wait before re-sending a notification after one has already been sent successfully for an alert | duration | 4h |
receivers | A list of notification receivers. | list | |
severity | Severity of the incident. | string | |
channel | The channel or user to which notifications are sent. | string | |
html | The HTML body of the email notification. | string | |
text | The text body of the email notification. | string | |
send_resolved | Whether or not to notify about resolved alerts. | Boolean | |
email_configs | Configurations for email integration | list | |
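Put together, these fields appear in an alertmanager_yml block roughly as follows (a sketch; the Slack webhook URL and channel are placeholders):

```yaml
global:
  resolve_timeout: 5m
  slack_api_url: https://hooks.slack.com/services/REPLACE/ME   # placeholder webhook URL
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  receiver: slack-demo
receivers:
- name: slack-demo
  slack_configs:
  - channel: '#alertmanager-test'
    send_resolved: false
```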
Annotations on pods allow fine-grained control of the scraping process. These annotations must be part of the pod metadata. They have no effect if set on other objects, such as Services or DaemonSets.
Pod Annotation | Description |
---|---|
prometheus.io/scrape | The default configuration scrapes all pods. If set to false, this annotation excludes the pod from the scraping process. |
prometheus.io/path | If the metrics path is not /metrics, define it with this annotation. |
prometheus.io/port | Scrape the pod on the indicated port instead of the pod's declared ports (defaulting to a port-free target if none is declared). |
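For a single pod, these annotations sit under the pod's metadata, for example (a sketch; the application name, image, and port are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                       # hypothetical pod
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/custom-metrics' # metrics served on a path other than /metrics
    prometheus.io/port: '8080'
spec:
  containers:
  - name: example-app
    image: registry.example.com/example-app:1.0   # placeholder image
```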
The following DaemonSet manifest instructs Prometheus to scrape all of its pods on port 9102.
```yaml
apiVersion: apps/v1beta2 # for versions before 1.8.0 use extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: weave
  labels:
    app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9102'
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
```
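To spot-check that such targets are picked up, one approach is to port-forward the Prometheus server service and open its Targets page. This is a sketch: the service name and namespace below are assumptions based on the defaults used in this topic.

```sh
kubectl -n tanzu-system-monitoring port-forward svc/prometheus-server 9090:80
# Then browse to http://localhost:9090/targets
```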