一、Machine planning
| Role | Hostname | IP address |
| --- | --- | --- |
| master | k8s-master1 | 192.168.112.10 |
| node | k8s-node1 | 192.168.112.20 |
| node | k8s-node2 | 192.168.112.30 |
| Item | Value |
| --- | --- |
| Platform | VMware Workstation |
| Operating system | CentOS Linux release 7.9.2009 (Core) |
| Memory / CPU | 4C / 4G |
| Disk | 20G SCSI |
二、Deploy node-exporter, Prometheus, Grafana and kube-state-metrics
1、Create the monitor-sa namespace

Run on the master node:

```bash
kubectl create ns monitor-sa
```

2、Install the node-exporter component

Run on the master node:

```bash
cat > node-export.yaml <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitor-sa
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.16.0
        ports:
        - containerPort: 9100
        resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '^/(sys|proc|dev|host|etc)($|/)'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: dev
        hostPath:
          path: /dev
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /
EOF
```

2.1、Notes
Host namespace sharing (hostPID, hostIPC, hostNetwork)
- hostPID: true: the Pod uses the host's PID namespace and can see every process on the host.
- hostIPC: true: the Pod uses the host's IPC namespace and can share IPC resources (semaphores, message queues and so on) with processes on the host.
- hostNetwork: true: the Pod uses the host's network namespace and therefore the host's network interfaces.

Command-line arguments (args)
- --path.procfs /host/proc: node-exporter reads the proc filesystem from /host/proc, giving it access to the host's process information.
- --path.sysfs /host/sys: node-exporter reads the sys filesystem from /host/sys, giving it access to the host's system information.
- --collector.filesystem.ignored-mount-points ^/(sys|proc|dev|host|etc)($|/): mount points node-exporter should not collect; /sys, /proc, /dev, /host and /etc are ignored to avoid gathering unnecessary data.

Volume mounts (volumeMounts and volumes)
- /proc: host path /proc mounted at /host/proc, so node-exporter can read the host's process filesystem.
- /dev: host path /dev mounted at /host/dev, so node-exporter can read the host's device files.
- /sys: host path /sys mounted at /host/sys, so node-exporter can read the host's sys filesystem.
- /: host path / mounted at /rootfs, so node-exporter can read the host's root filesystem.

Tolerations
- key: node-role.kubernetes.io/master: the taint key to tolerate.
- operator: Exists: tolerate the taint as long as the key exists, whatever its value.
- effect: NoSchedule: the Pod can still be scheduled onto nodes carrying this taint.
2.2、Apply the manifest
```bash
kubectl apply -f node-export.yaml
kubectl get pods -n monitor-sa -l name=node-exporter
```

2.3、Collect data through node-exporter

node-exporter listens on port 9100 by default; all metrics collected from the current host can be viewed with:

```bash
# curl http://master-ip:9100/metrics
curl http://192.168.112.10:9100/metrics
```
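The full output is long; as a quick sanity check you can filter for a single well-known metric family (a minimal example, using the master IP from the planning table):

```bash
# show a few of the CPU counters exported by node-exporter
curl -s http://192.168.112.10:9100/metrics | grep '^node_cpu_seconds_total' | head -n 5
```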
3、Deploy Prometheus in the k8s cluster

3.1、Create a ServiceAccount
```bash
kubectl create serviceaccount monitor -n monitor-sa
```

3.2、Bind the ServiceAccount monitor to a ClusterRole with a ClusterRoleBinding
```bash
kubectl create clusterrolebinding monitor-clusterrolebinding -n monitor-sa --clusterrole=cluster-admin --serviceaccount=monitor-sa:monitor
```

3.3、Create the data directory

On all node machines:

```bash
mkdir /data
chmod 777 /data/
```

3.4、Install Prometheus

Run on the master node.

3.4.1、Manage the prometheus.yml file as a ConfigMap
```bash
cat > prometheus-cfg.yaml <<EOF
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitor-sa
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 10s
      evaluation_interval: 1m
    scrape_configs:
    - job_name: kubernetes-node
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: kubernetes-node-cadvisor
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: kubernetes-apiserver
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: kubernetes-service-endpoints
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
EOF
```

3.4.2、Apply the ConfigMap
```bash
kubectl apply -f prometheus-cfg.yaml
kubectl get cm prometheus-config -n monitor-sa -o yaml
```

Make sure the ConfigMap still contains the $1 and $2 placeholders after it is created; otherwise Prometheus cannot build the target addresses and monitoring will not work.

3.4.3、Deploy Prometheus with a Deployment
```bash
cat > prometheus-deploy.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  replicas: 2
  selector:
    matchLabels:
      app: prometheus
      component: server
    #matchExpressions:
    #- {key: app, operator: In, values: [prometheus]}
    #- {key: component, operator: In, values: [server]}
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - prometheus
              - key: component
                operator: In
                values:
                - server
            topologyKey: kubernetes.io/hostname
      serviceAccountName: monitor
      containers:
      - name: prometheus
        image: quay.io/prometheus/prometheus:latest
        imagePullPolicy: IfNotPresent
        command:
        - prometheus
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.path=/prometheus
        - --storage.tsdb.retention=720h
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/prometheus/prometheus.yml
          name: prometheus-config
          subPath: prometheus.yml
        - mountPath: /prometheus/
          name: prometheus-storage-volume
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
          items:
          - key: prometheus.yml
            path: prometheus.yml
            mode: 0644
      - name: prometheus-storage-volume
        hostPath:
          path: /data
          type: Directory
EOF
```

3.4.4、Apply the Prometheus manifest
```bash
kubectl apply -f prometheus-deploy.yaml
```

3.4.5、Create a Service for the Prometheus pods
```bash
cat > prometheus-svc.yaml <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    protocol: TCP
  selector:
    app: prometheus
    component: server
EOF
```

3.4.6、Apply the Service manifest
```bash
kubectl apply -f prometheus-svc.yaml
kubectl get svc -n monitor-sa -o wide
```

The output shows the NodePort assigned to the Service on the host (30172 in this example; NodePorts are allocated randomly, so yours may differ). Accessing ip-of-k8s-master1:NodePort opens the Prometheus web UI.
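If you want to read the assigned NodePort programmatically rather than from the table output, a jsonpath query works (a small convenience snippet, not part of the original walkthrough):

```bash
kubectl get svc prometheus -n monitor-sa -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'
```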
3.5、Access the Prometheus UI
```bash
# k8s-master1 IP:32032
192.168.112.10:32032
```

3.6、Check the configured service discovery
Click Status → Targets in the UI. If the targets appear as below, the configured service discovery is scraping data normally.

4、Prometheus hot reload
4.1、Hot-reload Prometheus
To make configuration changes take effect without stopping Prometheus (for example after editing prometheus-cfg.yaml), use the hot-reload endpoint:
```bash
# find the pod IP first
kubectl get pods -n monitor-sa -l app=prometheus -o wide
curl -X POST http://prometheus-pod-ip:9090/-/reload
```

Note that the /-/reload endpoint only responds when Prometheus is started with --web.enable-lifecycle (it is added to the Deployment in part 三 below).

4.2、Force-restart Prometheus

Hot reload can be fairly slow, so you can also force-restart Prometheus. After editing prometheus-cfg.yaml, delete and re-create the resources:

```bash
kubectl delete -f prometheus-cfg.yaml
kubectl delete -f prometheus-deploy.yaml
# then apply again to update
kubectl apply -f prometheus-cfg.yaml
kubectl apply -f prometheus-deploy.yaml
```

In production prefer the hot reload; forcing a delete may lose monitoring data.
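The pod-IP lookup and the reload call can also be combined into one line; a small sketch (the label selector matches the Deployment created above):

```bash
curl -X POST "http://$(kubectl get pod -n monitor-sa -l app=prometheus \
  -o jsonpath='{.items[0].status.podIP}'):9090/-/reload"
```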
5、Install and configure Grafana

5.1、Download the Grafana image
Link: https://pan.baidu.com/s/1TmVGKxde_cEYrbjiETboEA
Extraction code: 052u

5.2、Load the Grafana image on each node of the k8s cluster
```bash
docker load -i heapster-grafana-amd64_v5_0_4.tar.gz
docker images | grep grafana
```

5.3、Create grafana.yaml on the master node
```bash
cat > grafana.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
EOF
```

5.4、Check the Grafana pod and svc

5.5、Open the Grafana UI
```bash
# master-ip:grafana-svc-port
192.168.112.10:31455
```

5.6、Add Prometheus as a Grafana data source
Choose Create your first data source. Set Name: Prometheus, Type: Prometheus, and in the HTTP URL field enter http://prometheus.monitor-sa.svc:9090. Click Save & Test in the lower-left corner; if "Data source is working" appears, Grafana has successfully connected to the Prometheus data source.
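If the test fails, it is worth checking that the Service DNS name is reachable from inside the cluster; a throwaway busybox pod is enough (a quick sketch, the image tag is an assumption):

```bash
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 -- \
  wget -qO- http://prometheus.monitor-sa.svc:9090/-/healthy
```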
5.7、Get dashboard templates
You can search for the dashboards you need on the official Grafana dashboard site:
Grafana dashboards | Grafana Labs
You can also clone the GitHub repository directly to get the node_exporter.json and docker_rev1.json dashboard templates:
```bash
git clone git@github.com:misakivv/Grafana-Dashboard.git
```

5.8、Import the dashboard templates
In the left sidebar click the + button, then Import, choose Upload json file, and select the local node_exporter.json file. After the upload, the Name in the Options section is generated automatically, while the Prometheus field must be set to the Prometheus data source. Click Import and the dashboard appears. Import docker_rev1.json the same way.
6、Install and configure the kube-state-metrics component
6.1、What is kube-state-metrics
kube-state-metrics watches the API server and generates state metrics for resource objects such as Deployments, Nodes and Pods. Note that kube-state-metrics only exposes these metrics; it does not store them, so Prometheus is used to scrape and store the data. It focuses on business-level metadata: Deployment, Pod and replica status, how many replicas are scheduled and how many are currently available, how many Pods are running/stopped/terminated, how many times a Pod has restarted, how many jobs are running, and so on.
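Once the component is scraped by Prometheus (section 6.4 adds the scrape annotation to its Service), these metrics can be queried through the Prometheus HTTP API; a couple of illustrative queries against the NodePort used earlier in this article (standard kube-state-metrics metric names):

```bash
# available replicas per Deployment
curl -s 'http://192.168.112.10:32032/api/v1/query?query=kube_deployment_status_replicas_available'
# container restart counters per Pod
curl -s 'http://192.168.112.10:32032/api/v1/query?query=kube_pod_container_status_restarts_total'
```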
6.2、Create a ServiceAccount and grant permissions

On the k8s-master1 node, write a kube-state-metrics-rbac.yaml file:

```bash
cat > kube-state-metrics-rbac.yaml <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
- apiGroups: [""]
  resources: ["nodes", "pods", "services", "resourcequotas", "replicationcontrollers", "limitranges", "persistentvolumeclaims", "persistentvolumes", "namespaces", "endpoints"]
  verbs: ["list", "watch"]
- apiGroups: ["extensions"]
  resources: ["daemonsets", "deployments", "replicasets"]
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources: ["cronjobs", "jobs"]
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: kube-system
EOF
```

```bash
kubectl apply -f kube-state-metrics-rbac.yaml
kubectl get sa,clusterrole,clusterrolebinding -n kube-system | grep kube-state-metrics
```

6.3、Create and apply kube-state-metrics-deploy.yaml

Run on the k8s-master1 node:

```bash
cat > kube-state-metrics-deploy.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        # image: gcr.io/google_containers/kube-state-metrics-amd64:v1.3.1
        image: quay.io/coreos/kube-state-metrics:latest
        ports:
        - containerPort: 8080
EOF
```

```bash
kubectl apply -f kube-state-metrics-deploy.yaml
kubectl get pods -n kube-system -l app=kube-state-metrics -w
```

If pulling a pinned kube-state-metrics image tag fails, you can run docker pull quay.io/coreos/kube-state-metrics:latest on every cluster node to pull the latest tag instead.
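Before wiring it into Prometheus you can check that the endpoint actually serves metrics, for example with a temporary port-forward (a quick verification sketch, run from an interactive shell):

```bash
kubectl port-forward -n kube-system deploy/kube-state-metrics 8080:8080 &
curl -s http://127.0.0.1:8080/metrics | head -n 10
kill %1   # stop the background port-forward again
```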
6.4、Create and apply kube-state-metrics-svc.yaml

Run on the k8s-master1 node:

```bash
cat > kube-state-metrics-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  name: kube-state-metrics
  namespace: kube-system
  labels:
    app: kube-state-metrics
spec:
  ports:
  - name: kube-state-metrics
    port: 8080
    protocol: TCP
  selector:
    app: kube-state-metrics
EOF
```

```bash
kubectl apply -f kube-state-metrics-svc.yaml
kubectl get svc -n kube-system -l app=kube-state-metrics
```

6.5、Get the kube-state-metrics json files
```bash
git clone git@github.com:misakivv/Grafana-Dashboard.git
```

6.6、Import the kube-state-metrics json files into Grafana
Click the + button in the left sidebar, choose Import, click Upload .json File and upload Kubernetes Cluster (Prometheus)-1577674936972.json, then review the dashboard. Import Kubernetes cluster monitoring (via Prometheus) (k8s 1.16)-1577691996738.json in the same way.
三、Install and configure Alertmanager -- send alerts to a QQ mailbox
1、Manage the alertmanager-cm.yaml file as a ConfigMap

Run on the k8s-master1 node:

```bash
cat > alertmanager-cm.yaml <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitor-sa
data:
  alertmanager.yml: |-
    global:
      resolve_timeout: 1m
      smtp_smarthost: 'smtp.qq.com:465'
      smtp_from: '2830909671@qq.com'
      smtp_auth_username: '2830909671@qq.com'
      smtp_auth_password: 'ajjgpgwwfkpcdgih'
      smtp_require_tls: false
    route:
      group_by: [alertname]
      group_wait: 5s
      group_interval: 5s
      repeat_interval: 5m
      receiver: default-receiver
    receivers:
    - name: 'default-receiver'
      email_configs:
      - to: 'misakikk@qq.com'
        send_resolved: true
EOF
```

```bash
kubectl apply -f alertmanager-cm.yaml
kubectl get cm alertmanager -n monitor-sa
```

1.1、Notes on the Alertmanager configuration
smtp_smarthost: 'smtp.qq.com:465'
# The SMTP server address and port used to send mail. For QQ mail the official SMTP address is smtp.qq.com, port 465 or 587, and the POP3/SMTP service must be enabled on the account.
smtp_from: '2830909671@qq.com'
# The mailbox the alert mails are sent from.
smtp_auth_password: 'ajjgpgwwfkpcdgih'
# The authorization code of the sending mailbox, not its login password.
email_configs:
- to: 'misakikk@qq.com'
# "to" specifies the mailbox that receives the alerts.
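Before moving on, the rendered configuration can be validated with amtool, which ships with Alertmanager; a small sketch that pulls the config back out of the ConfigMap (assumes amtool is available on your workstation):

```bash
kubectl get cm alertmanager -n monitor-sa -o jsonpath='{.data.alertmanager\.yml}' > /tmp/alertmanager.yml
amtool check-config /tmp/alertmanager.yml
```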
2、Regenerate and apply the prometheus-cfg.yaml file

Run on the k8s-master1 node:

```bash
cat > prometheus-cfg.yaml <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitor-sa
data:prometheus.yml: |rule_files:- /etc/prometheus/rules.ymlalerting:alertmanagers:- static_configs:- targets: [localhost:9093]global:scrape_interval: 15sscrape_timeout: 10sevaluation_interval: 1mscrape_configs:- job_name: kubernetes-nodekubernetes_sd_configs:- role: noderelabel_configs:- source_labels: [__address__]regex: (.*):10250replacement: ${1}:9100target_label: __address__action: replace- action: labelmapregex: __meta_kubernetes_node_label_(.)- job_name: kubernetes-node-cadvisorkubernetes_sd_configs:- role: nodescheme: httpstls_config:ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crtbearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/tokenrelabel_configs:- action: labelmapregex: __meta_kubernetes_node_label_(.)- target_label: __address__replacement: kubernetes.default.svc:443- source_labels: [__meta_kubernetes_node_name]regex: (.)target_label: __metrics_path__replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor- job_name: kubernetes-apiserverkubernetes_sd_configs:- role: endpointsscheme: httpstls_config:ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crtbearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/tokenrelabel_configs:- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]action: keepregex: default;kubernetes;https- job_name: kubernetes-service-endpointskubernetes_sd_configs:- role: endpointsrelabel_configs:- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]action: keepregex: true- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]action: replacetarget_label: __scheme__regex: (https?)- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]action: replacetarget_label: __metrics_path__regex: (.)- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]action: replacetarget_label: __address__regex: ([^:])(?::\d)?;(\d)replacement: $1:$2- action: labelmapregex: __meta_kubernetes_service_label_(.)- source_labels: [__meta_kubernetes_namespace]action: replacetarget_label: kubernetes_namespace- source_labels: [__meta_kubernetes_service_name]action: replacetarget_label: kubernetes_name - job_name: kubernetes-podskubernetes_sd_configs:- role: podrelabel_configs:- action: keepregex: truesource_labels:- __meta_kubernetes_pod_annotation_prometheus_io_scrape- action: replaceregex: (.)source_labels:- __meta_kubernetes_pod_annotation_prometheus_io_pathtarget_label: __metrics_path__- action: replaceregex: ([^:])(?::\d)?;(\d)replacement: $1:$2source_labels:- __address__- __meta_kubernetes_pod_annotation_prometheus_io_porttarget_label: __address__- action: labelmapregex: __meta_kubernetes_pod_label_(.)- action: replacesource_labels:- __meta_kubernetes_namespacetarget_label: kubernetes_namespace- action: replacesource_labels:- __meta_kubernetes_pod_nametarget_label: kubernetes_pod_name- job_name: kubernetes-schedulescrape_interval: 5sstatic_configs:- targets: [192.168.112.10:10259]- job_name: kubernetes-controller-managerscrape_interval: 5sstatic_configs:- targets: [192.168.112.10:10257]- job_name: kubernetes-kube-proxyscrape_interval: 5sstatic_configs:- targets: [192.168.112.10:10249,192.168.112.20:10249,192.168.112.30:10249]- job_name: kubernetes-etcdscheme: httpstls_config:ca_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/ca.crtcert_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/server.crtkey_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/server.keyscrape_interval: 
5sstatic_configs:- targets: [192.168.112.10:2381]rules.yml: |groups:- name: examplerules:- alert: kube-proxy的cpu使用率大于80%expr: rate(process_cpu_seconds_total{job~kubernetes-kube-proxy}[1m]) * 100 80for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%- alert: kube-proxy的cpu使用率大于90%expr: rate(process_cpu_seconds_total{job~kubernetes-kube-proxy}[1m]) * 100 90for: 2slabels:severity: criticalannotations:description: {{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%- alert: scheduler的cpu使用率大于80%expr: rate(process_cpu_seconds_total{job~kubernetes-schedule}[1m]) * 100 80for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%- alert: scheduler的cpu使用率大于90%expr: rate(process_cpu_seconds_total{job~kubernetes-schedule}[1m]) * 100 90for: 2slabels:severity: criticalannotations:description: {{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%- alert: controller-manager的cpu使用率大于80%expr: rate(process_cpu_seconds_total{job~kubernetes-controller-manager}[1m]) * 100 80for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%- alert: controller-manager的cpu使用率大于90%expr: rate(process_cpu_seconds_total{job~kubernetes-controller-manager}[1m]) * 100 0for: 2slabels:severity: criticalannotations:description: {{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%- alert: apiserver的cpu使用率大于80%expr: rate(process_cpu_seconds_total{job~kubernetes-apiserver}[1m]) * 100 80for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%- alert: apiserver的cpu使用率大于90%expr: rate(process_cpu_seconds_total{job~kubernetes-apiserver}[1m]) * 100 90for: 2slabels:severity: criticalannotations:description: {{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%- alert: etcd的cpu使用率大于80%expr: rate(process_cpu_seconds_total{job~kubernetes-etcd}[1m]) * 100 80for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%- alert: etcd的cpu使用率大于90%expr: rate(process_cpu_seconds_total{job~kubernetes-etcd}[1m]) * 100 90for: 2slabels:severity: criticalannotations:description: {{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%- alert: kube-state-metrics的cpu使用率大于80%expr: rate(process_cpu_seconds_total{k8s_app~kube-state-metrics}[1m]) * 100 80for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过80%value: {{ $value }}%threshold: 80% - alert: kube-state-metrics的cpu使用率大于90%expr: rate(process_cpu_seconds_total{k8s_app~kube-state-metrics}[1m]) * 100 0for: 2slabels:severity: criticalannotations:description: {{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过90%value: {{ $value }}%threshold: 90% - alert: coredns的cpu使用率大于80%expr: rate(process_cpu_seconds_total{k8s_app~kube-dns}[1m]) * 100 80for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过80%value: {{ $value }}%threshold: 80% - alert: coredns的cpu使用率大于90%expr: rate(process_cpu_seconds_total{k8s_app~kube-dns}[1m]) * 100 90for: 2slabels:severity: criticalannotations:description: {{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过90%value: {{ $value }}%threshold: 90% - alert: kube-proxy打开句柄数600expr: process_open_fds{job~kubernetes-kube-proxy} 600for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.job}}打开句柄数600value: {{ $value }}- alert: kube-proxy打开句柄数1000expr: process_open_fds{job~kubernetes-kube-proxy} 1000for: 2slabels:severity: 
criticalannotations:description: {{$labels.instance}}的{{$labels.job}}打开句柄数1000value: {{ $value }}- alert: kubernetes-schedule打开句柄数600expr: process_open_fds{job~kubernetes-schedule} 600for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.job}}打开句柄数600value: {{ $value }}- alert: kubernetes-schedule打开句柄数1000expr: process_open_fds{job~kubernetes-schedule} 1000for: 2slabels:severity: criticalannotations:description: {{$labels.instance}}的{{$labels.job}}打开句柄数1000value: {{ $value }}- alert: kubernetes-controller-manager打开句柄数600expr: process_open_fds{job~kubernetes-controller-manager} 600for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.job}}打开句柄数600value: {{ $value }}- alert: kubernetes-controller-manager打开句柄数1000expr: process_open_fds{job~kubernetes-controller-manager} 1000for: 2slabels:severity: criticalannotations:description: {{$labels.instance}}的{{$labels.job}}打开句柄数1000value: {{ $value }}- alert: kubernetes-apiserver打开句柄数600expr: process_open_fds{job~kubernetes-apiserver} 600for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.job}}打开句柄数600value: {{ $value }}- alert: kubernetes-apiserver打开句柄数1000expr: process_open_fds{job~kubernetes-apiserver} 1000for: 2slabels:severity: criticalannotations:description: {{$labels.instance}}的{{$labels.job}}打开句柄数1000value: {{ $value }}- alert: kubernetes-etcd打开句柄数600expr: process_open_fds{job~kubernetes-etcd} 600for: 2slabels:severity: warnningannotations:description: {{$labels.instance}}的{{$labels.job}}打开句柄数600value: {{ $value }}- alert: kubernetes-etcd打开句柄数1000expr: process_open_fds{job~kubernetes-etcd} 1000for: 2slabels:severity: criticalannotations:description: {{$labels.instance}}的{{$labels.job}}打开句柄数1000value: {{ $value }}- alert: corednsexpr: process_open_fds{k8s_app~kube-dns} 600for: 2slabels:severity: warnning annotations:description: 插件{{$labels.k8s_app}}({{$labels.instance}}): 打开句柄数超过600value: {{ $value }}- alert: corednsexpr: process_open_fds{k8s_app~kube-dns} 1000for: 2slabels:severity: criticalannotations:description: 插件{{$labels.k8s_app}}({{$labels.instance}}): 打开句柄数超过1000value: {{ $value }}- alert: kube-proxyexpr: process_virtual_memory_bytes{job~kubernetes-kube-proxy} 2000000000for: 2slabels:severity: warnningannotations:description: 组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2Gvalue: {{ $value }}- alert: schedulerexpr: process_virtual_memory_bytes{job~kubernetes-schedule} 2000000000for: 2slabels:severity: warnningannotations:description: 组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2Gvalue: {{ $value }}- alert: kubernetes-controller-managerexpr: process_virtual_memory_bytes{job~kubernetes-controller-manager} 2000000000for: 2slabels:severity: warnningannotations:description: 组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2Gvalue: {{ $value }}- alert: kubernetes-apiserverexpr: process_virtual_memory_bytes{job~kubernetes-apiserver} 2000000000for: 2slabels:severity: warnningannotations:description: 组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2Gvalue: {{ $value }}- alert: kubernetes-etcdexpr: process_virtual_memory_bytes{job~kubernetes-etcd} 2000000000for: 2slabels:severity: warnningannotations:description: 组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2Gvalue: {{ $value }}- alert: kube-dnsexpr: process_virtual_memory_bytes{k8s_app~kube-dns} 2000000000for: 2slabels:severity: warnningannotations:description: 插件{{$labels.k8s_app}}({{$labels.instance}}): 使用虚拟内存超过2Gvalue: {{ $value }}- alert: HttpRequestsAvgexpr: 
sum(rate(rest_client_requests_total{job~kubernetes-kube-proxy|kubernetes-kubelet|kubernetes-schedule|kubernetes-control-manager|kubernetes-apiservers}[1m])) 1000for: 2slabels:team: adminannotations:description: 组件{{$labels.job}}({{$labels.instance}}): TPS超过1000value: {{ $value }}threshold: 1000 - alert: Pod_restartsexpr: kube_pod_container_status_restarts_total{namespace~kube-system|default|monitor-sa} 0for: 2slabels:severity: warnningannotations:description: 在{{$labels.namespace}}名称空间下发现{{$labels.pod}}这个pod下的容器{{$labels.container}}被重启,这个监控指标是由{{$labels.instance}}采集的value: {{ $value }}threshold: 0- alert: Pod_waitingexpr: kube_pod_container_status_waiting_reason{namespace~kube-system|default} 1for: 2slabels:team: adminannotations:description: 空间{{$labels.namespace}}({{$labels.instance}}): 发现{{$labels.pod}}下的{{$labels.container}}启动异常等待中value: {{ $value }}threshold: 1 - alert: Pod_terminatedexpr: kube_pod_container_status_terminated_reason{namespace~kube-system|default|monitor-sa} 1for: 2slabels:team: adminannotations:description: 空间{{$labels.namespace}}({{$labels.instance}}): 发现{{$labels.pod}}下的{{$labels.container}}被删除value: {{ $value }}threshold: 1- alert: Etcd_leaderexpr: etcd_server_has_leader{jobkubernetes-etcd} 0for: 2slabels:team: adminannotations:description: 组件{{$labels.job}}({{$labels.instance}}): 当前没有leadervalue: {{ $value }}threshold: 0- alert: Etcd_leader_changesexpr: rate(etcd_server_leader_changes_seen_total{jobkubernetes-etcd}[1m]) 0for: 2slabels:team: adminannotations:description: 组件{{$labels.job}}({{$labels.instance}}): 当前leader已发生改变value: {{ $value }}threshold: 0- alert: Etcd_failedexpr: rate(etcd_server_proposals_failed_total{jobkubernetes-etcd}[1m]) 0for: 2slabels:team: adminannotations:description: 组件{{$labels.job}}({{$labels.instance}}): 服务失败value: {{ $value }}threshold: 0- alert: Etcd_db_total_sizeexpr: etcd_debugging_mvcc_db_total_size_in_bytes{jobkubernetes-etcd} 10000000000for: 2slabels:team: adminannotations:description: 组件{{$labels.job}}({{$labels.instance}})db空间超过10Gvalue: {{ $value }}threshold: 10G- alert: Endpoint_readyexpr: kube_endpoint_address_not_ready{namespace~kube-system|default} 1for: 2slabels:team: adminannotations:description: 空间{{$labels.namespace}}({{$labels.instance}}): 发现{{$labels.endpoint}}不可用value: {{ $value }}threshold: 1- name: 物理节点状态-监控告警rules:- alert: 物理节点cpu使用率expr: 100-avg(irate(node_cpu_seconds_total{modeidle}[5m])) by(instance)*100 90for: 2slabels:severity: ccriticalannotations:summary: {{ $labels.instance }}cpu使用率过高description: {{ $labels.instance }}的cpu使用率超过90%,当前使用率[{{ $value }}],需要排查处理 - alert: 物理节点内存使用率expr: (node_memory_MemTotal_bytes - (node_memory_MemFree_bytes node_memory_Buffers_bytes node_memory_Cached_bytes)) / node_memory_MemTotal_bytes * 100 90for: 2slabels:severity: criticalannotations:summary: {{ $labels.instance }}内存使用率过高description: {{ $labels.instance }}的内存使用率超过90%,当前使用率[{{ $value }}],需要排查处理- alert: InstanceDownexpr: up 0for: 2slabels:severity: criticalannotations: summary: {{ $labels.instance }}: 服务器宕机description: {{ $labels.instance }}: 服务器延时超过2分钟- alert: 物理节点磁盘的IO性能expr: 100-(avg(irate(node_disk_io_time_seconds_total[1m])) by(instance)* 100) 60for: 2slabels:severity: criticalannotations:summary: {{$labels.mountpoint}} 流入磁盘IO使用率过高description: {{$labels.mountpoint }} 流入磁盘IO大于60%(目前使用:{{$value}})- alert: 入网流量带宽expr: ((sum(rate (node_network_receive_bytes_total{device!~tap.*|veth.*|br.*|docker.*|virbr*|lo*}[5m])) by (instance)) / 100) 102400for: 2slabels:severity: criticalannotations:summary: {{$labels.mountpoint}} 
流入网络带宽过高description: {{$labels.mountpoint }}流入网络带宽持续5分钟高于100M. RX带宽使用率{{$value}}- alert: 出网流量带宽expr: ((sum(rate (node_network_transmit_bytes_total{device!~tap.*|veth.*|br.*|docker.*|virbr*|lo*}[5m])) by (instance)) / 100) 102400for: 2slabels:severity: criticalannotations:summary: {{$labels.mountpoint}} 流出网络带宽过高description: {{$labels.mountpoint }}流出网络带宽持续5分钟高于100M. RX带宽使用率{{$value}}- alert: TCP会话expr: node_netstat_Tcp_CurrEstab 1000for: 2slabels:severity: criticalannotations:summary: {{$labels.mountpoint}} TCP_ESTABLISHED过高description: {{$labels.mountpoint }} TCP_ESTABLISHED大于1000%(目前使用:{{$value}}%)- alert: 磁盘容量expr: 100-(node_filesystem_free_bytes{fstype~ext4|xfs}/node_filesystem_size_bytes {fstype~ext4|xfs}*100) 80for: 2slabels:severity: criticalannotations:summary: {{$labels.mountpoint}} 磁盘分区使用率过高description: {{$labels.mountpoint }} 磁盘分区使用大于80%(目前使用:{{$value}}%)
EOF
```

Note: kube-proxy exposes its metrics on port 10249 on every node by default, but the targets for the kubernetes-schedule, kubernetes-controller-manager and kubernetes-etcd jobs must be adjusted to match your own cluster.

```bash
kubectl apply -f prometheus-cfg.yaml
kubectl get cm prometheus-config -n monitor-sa -o yaml
```

As before, check that the ConfigMap still contains the $1 and $2 placeholders.
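A quick way to do that check without reading the whole ConfigMap is to list the replacement lines and confirm the $1/$2 references survived (a small convenience snippet):

```bash
kubectl get cm prometheus-config -n monitor-sa -o yaml | grep -n 'replacement:'
```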
3、Regenerate the prometheus-deploy.yaml file

Run on the k8s-master1 node:

```bash
cat > prometheus-deploy.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  replicas: 2
  selector:
    matchLabels:
      app: prometheus
      component: server
    #matchExpressions:
    #- {key: app, operator: In, values: [prometheus]}
    #- {key: component, operator: In, values: [server]}
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - prometheus
              - key: component
                operator: In
                values:
                - server
            topologyKey: kubernetes.io/hostname
      serviceAccountName: monitor
      containers:
      - name: prometheus
        image: quay.io/prometheus/prometheus:latest
        imagePullPolicy: IfNotPresent
        command:
        - /bin/prometheus
        args:
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.path=/prometheus
        - --storage.tsdb.retention=24h
        - --web.enable-lifecycle
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/prometheus
          name: prometheus-config
        - mountPath: /prometheus/
          name: prometheus-storage-volume
        - name: k8s-certs
          mountPath: /var/run/secrets/kubernetes.io/k8s-certs/etcd/
      - name: alertmanager
        image: prom/alertmanager:latest
        imagePullPolicy: IfNotPresent
        args:
        - --config.file=/etc/alertmanager/alertmanager.yml
        - --log.level=debug
        ports:
        - containerPort: 9093
          protocol: TCP
          name: alertmanager
        volumeMounts:
        - name: alertmanager-config
          mountPath: /etc/alertmanager
        - name: alertmanager-storage
          mountPath: /alertmanager
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
      - name: prometheus-storage-volume
        hostPath:
          path: /data
          type: Directory
      - name: k8s-certs
        secret:
          secretName: etcd-certs
      - name: alertmanager-config
        configMap:
          name: alertmanager
      - name: alertmanager-storage
        hostPath:
          path: /data/alertmanager
          type: DirectoryOrCreate
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
EOF
```

3.1、Create a Secret named etcd-certs
```bash
kubectl -n monitor-sa create secret generic etcd-certs \
  --from-file=/etc/kubernetes/pki/etcd/server.key \
  --from-file=/etc/kubernetes/pki/etcd/server.crt \
  --from-file=/etc/kubernetes/pki/etcd/ca.crt
```
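It does no harm to confirm that all three files actually ended up in the Secret before the Deployment mounts it (a quick check):

```bash
kubectl describe secret etcd-certs -n monitor-sa
```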
3.2、Apply the prometheus-deploy.yaml file

```bash
kubectl apply -f prometheus-deploy.yaml
kubectl get pods -n monitor-sa
```

4、Regenerate and create the alertmanager-svc.yaml file
```bash
cat > alertmanager-svc.yaml <<EOF
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: prometheus
    kubernetes.io/cluster-service: 'true'
  name: alertmanager
  namespace: monitor-sa
spec:
  ports:
  - name: alertmanager
    nodePort: 30066
    port: 9093
    protocol: TCP
    targetPort: 9093
  selector:
    app: prometheus
  sessionAffinity: None
  type: NodePort
EOF
```

```bash
kubectl apply -f alertmanager-svc.yaml
kubectl get svc alertmanager -n monitor-sa
```

5、Access the Prometheus UI

5.1、【error】connection refused for the kube-controller-manager, etcd, kube-proxy and kube-scheduler components
5.1.1、kube-proxy

By default the metrics port is bound to 127.0.0.1 only; change it to 0.0.0.0:

```bash
kubectl edit cm/kube-proxy -n kube-system
```

Edit the file so the metrics address allows 0.0.0.0, then save:

```
metricsBindAddress: 0.0.0.0:10249
```

Delete the kube-proxy pods so they are recreated with the new setting:
```bash
kubectl delete pod -l k8s-app=kube-proxy -n kube-system
```

Result:

5.1.2、kube-controller-manager

A note up front: at this step none of the methods I found online managed to get data, so I created a new ServiceAccount instead. Use with caution; for reference only.

Modify the kube-controller-manager manifest
By default it listens on localhost only; change the bind address to 0.0.0.0:
```
- --bind-address=127.0.0.1
# change to
- --bind-address=0.0.0.0
```

Create a ServiceAccount
Create a new ServiceAccount for Prometheus to use when accessing kube-controller-manager.
```bash
cat > prom-sa <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-sa
  namespace: monitor-sa
EOF
```

Create a ClusterRole
Create a ClusterRole that defines the permissions Prometheus needs.
```bash
cat > prom-role <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-role
rules:
- nonResourceURLs:
  - /metrics
  verbs:
  - get
EOF
```

Create a ClusterRoleBinding
Bind the ServiceAccount to the ClusterRole.
```bash
cat > prom-bind.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-binding
subjects:
- kind: ServiceAccount
  name: prometheus-sa
  namespace: monitor-sa
roleRef:
  kind: ClusterRole
  name: prometheus-role
  apiGroup: rbac.authorization.k8s.io
EOF
```

Get the ServiceAccount token
Fetch the ServiceAccount's token so it can be used in the Prometheus configuration.
```bash
TOKEN=$(kubectl get secret $(kubectl get sa prometheus-sa -n monitor-sa -o json | jq -r '.secrets[].name') -n monitor-sa -o json | jq -r '.data.token' | base64 --decode)
```
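On newer clusters (v1.24 and later) ServiceAccounts no longer get a long-lived token Secret automatically, so the jq lookup above may return nothing; in that case a token can be minted directly instead (a sketch, the duration is an arbitrary choice and may be capped by the API server):

```bash
kubectl create token prometheus-sa -n monitor-sa --duration=8760h
```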
Modify the Prometheus configuration (the prometheus-config ConfigMap)

```yaml
    - job_name: kubernetes-controller-manager
      scheme: https
      tls_config:
        insecure_skip_verify: true   # skip certificate verification
      authorization:
        credentials: eyJhbGciOiJSUzI1NiIsImtpZCI6IkFEWVNqaWlueWVDMzBUcTZvQk9MRkpxQ0diLWRGWkNoaWlpZkgwR21NcEkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InByb21ldGhldXMtc2EtdG9rZW4tbnQ5bm4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1zYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjQ4YTA5NDExLTAwMmYtNDE0Ni05YzY4LTBiNmVjOWYzYWZlZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptb25pdG9yLXNhOnByb21ldGhldXMtc2EifQ.DNgCjTVxsrGDltvQZG-x7qPQrh369SO_e0faGrrhjgkBLS4q2sh85wkaBNNZcIjxZcVk7ZU9gQmQkM3AIgGIcIURpQGDMgVVI_xF1JV8iQWe-nL1yHnQAXDjyMAd1826wVvMH8LSKqdKfPVaMHN8t0LScX5yHonSJUqoevxi7Mm7tiUd33IlMQ6xH6M8Tu8bsg-fOVmL6nnGpC1tPgaZy8M_GA_Kh9j8SwHXi4Yd9r75eOSa3J6N4KF6n-EPKxnGmXDooA60G94YptsDFCQMi1t4TLAFR1FKraycWHwPbIwviUZTvA1WXbkiHnh0R6q-y0hHJVbAi_ZXagVXKZFBaw   # replace with the actual token value
      scrape_interval: 5s
      static_configs:
      - targets: ['192.168.112.10:10257']
```

Restart Prometheus
After updating the configuration, restart Prometheus so the new settings are applied.
```bash
kubectl rollout restart deployment/prometheus-server -n monitor-sa
```

Result:

5.1.3、kube-scheduler

Same procedure as for kube-controller-manager. Result:

5.1.4、etcd
Modify the etcd static pod manifest
Add the master node IP to the etcd metrics listen URL:
```bash
vim /etc/kubernetes/manifests/etcd.yaml
```

```
- --listen-metrics-urls=http://127.0.0.1:2381,http://192.168.112.10:2381
```
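After the kubelet restarts the etcd static pod, the plain-HTTP metrics endpoint should answer from the master node; a quick check (uses the master IP from this article):

```bash
curl -s http://192.168.112.10:2381/metrics | head -n 5
```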
Then edit prometheus.yml and change the kubernetes-etcd job's scheme to http. Result:

6、Click Alerts to check

7、Expand the "controller-manager的cpu使用率大于90%" alert

FIRING means Prometheus has already sent the alert to Alertmanager, and the alert is visible in Alertmanager.

8、Log in to the Alertmanager UI
```bash
# master-ip:svc-alertmanager-port
192.168.112.10:30066
```

9、Check the alert mail in the QQ mailbox

四、Configure Alertmanager alerts -- send alerts to DingTalk
1、Create a group from the mobile app
The PC client is awkward for this, so use the phone.

2、Create a custom robot

See "Custom robot security settings - DingTalk Open Platform (dingtalk.com)". In the group: Settings → Robots → Add robot → Custom. Give the robot a name, complete the security settings, and keep the Webhook URL safe.
3、Get the DingTalk Webhook plugin

Run on the master node:

```bash
git clone git@github.com:misakivv/prometheus-webhook-dingtalk.git
cd prometheus-webhook-dingtalk
tar zxvf prometheus-webhook-dingtalk-0.3.0.linux-amd64.tar.gz
cd prometheus-webhook-dingtalk-0.3.0.linux-amd64
```

4、Start the DingTalk alert plugin
```bash
nohup ./prometheus-webhook-dingtalk --web.listen-address="0.0.0.0:8060" \
  --ding.profile="cluster1=https://oapi.dingtalk.com/robot/send?access_token=feb3df2c6a987c8c1466c16eb90f4c2d3817c481aacf15cecc46f588f2716f25" &
```
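A quick way to confirm the plugin is up and listening before rewiring Alertmanager (just a port check, nothing article-specific):

```bash
ss -lntp | grep 8060
```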
5、Back up the alertmanager-cm.yaml file

```bash
cp alertmanager-cm.yaml alertmanager-cm.yaml.bak
```

6、Generate a new alertmanager-cm.yaml file
```bash
cat > alertmanager-cm.yaml <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitor-sa
data:
  alertmanager.yml: |-
    global:
      resolve_timeout: 1m
      smtp_smarthost: 'smtp.qq.com:465'
      smtp_from: '2830909671@qq.com'
      smtp_auth_username: '2830909671@qq.com'
      smtp_auth_password: 'ajjgpgwwfkpcdgih'
      smtp_require_tls: false
    route:
      group_by: [alertname]
      group_wait: 10s
      group_interval: 10s
      repeat_interval: 10m
      receiver: cluster1
    receivers:
    - name: 'cluster1'
      webhook_configs:
      - url: 'http://192.168.112.10:8060/dingtalk/cluster1/send'
        send_resolved: true
EOF
```

7、Recreate the resources for the change to take effect
```bash
kubectl delete cm alertmanager -n monitor-sa
kubectl apply -f alertmanager-cm.yaml
kubectl delete -f prometheus-cfg.yaml
kubectl apply -f prometheus-cfg.yaml
kubectl delete -f prometheus-deploy.yaml
kubectl apply -f prometheus-deploy.yaml
```

8、Result

That is all for now. Alertmanager also supports silencing, deduplication and inhibition, and alerts can be templated; we will study those together in the next post.