Hello! Today I'm sharing how to monitor the Harbor service with Prometheus.
In earlier articles I covered the high-availability architecture design and deployment of Harbor based on the offline installer. So what should we do if the Harbor host, or the Harbor service and its components, run into trouble? How can we respond quickly?
Harbor v2.2 and later supports exposing metrics to Prometheus, so your Harbor version must be 2.2 or newer.
This article deploys the Prometheus-related services in a simple binary fashion to help you quickly set up Prometheus monitoring for Harbor.
Monitoring Harbor with Prometheus (binary deployment)
1. Deployment Overview
Deploy on the Harbor host: prometheus, node-exporter, grafana, alertmanager
Harbor version: 2.4.2; host: 192.168.2.22
2. Enable the Harbor Metrics Service
2.1 Stop the Harbor service
$ cd /app/harbor
$ docker-compose down
2.2 Modify the harbor.yml configuration
Enable the harbor-exporter component by changing the metric parameters in Harbor's configuration file.
$ cat harbor.yml
### metrics configuration section
metric:
  enabled: true   # whether to enable metrics; change to true to enable
  port: 9099      # the default port 9090 conflicts with Prometheus, so change it
  path: /metrics
If you are not yet familiar with Harbor, it is a good idea to back up the configuration file first.
2.3 Apply the configuration to the components
$ ./prepare
2.4 Install Harbor
$ ./install.sh --with-notary --with-trivy --with-chartmuseum
$ docker-compose ps
NAME COMMAND SERVICE STATUS PORTS
chartmuseum ./docker-entrypoint… chartmuseum running (healthy)
harbor-core /harbor/entrypoint.… core running (healthy)
harbor-db /docker-entrypoint.… postgresql running (healthy)
harbor-exporter /harbor/entrypoint.… exporter running
You can see the new harbor-exporter component.
3. Harbor Metrics Explained
With the harbor-exporter component enabled, you can use curl to see which metrics Harbor exposes.
Harbor exposes metrics for the following four key components.
3.1 harbor-exporter component metrics
The exporter metrics relate to the Harbor instance configuration, and some of the data is collected from the Harbor database. The metrics are available at harbor_instance:metrics_port/metrics_path:
$ curl http://192.168.2.22:9099/metrics
(1) harbor_project_total
The total number of public and private projects.
$ curl http://192.168.2.22:9099/metrics | grep harbor_project_total
# HELP harbor_project_total Total projects number
# TYPE harbor_project_total gauge
harbor_project_total{public="true"} 1    # number of public projects is 1
harbor_project_total{public="false"} 1   # number of private projects
(2) harbor_project_repo_total
The total number of repositories in a project.
$ curl http://192.168.2.22:9099/metrics | grep harbor_project_repo_total
# HELP harbor_project_repo_total Total project repos number
# TYPE harbor_project_repo_total gauge
harbor_project_repo_total{project_name="library",public="true"} 0
(3) harbor_project_member_total
The total number of members in a project.
$ curl http://192.168.2.22:9099/metrics | grep harbor_project_member_total
# HELP harbor_project_member_total Total members number of a project
# TYPE harbor_project_member_total gauge
harbor_project_member_total{project_name="library"} 1   # the library project has 1 member
(4) harbor_project_quota_usage_byte
The total resources used by a project.
$ curl http://192.168.2.22:9099/metrics | grep harbor_project_quota_usage_byte
# HELP harbor_project_quota_usage_byte The used resource of a project
# TYPE harbor_project_quota_usage_byte gauge
harbor_project_quota_usage_byte{project_name="library"} 0
(5) harbor_project_quota_byte
The quota configured for a project.
$ curl http://192.168.2.22:9099/metrics | grep harbor_project_quota_byte
# HELP harbor_project_quota_byte The quota of a project
# TYPE harbor_project_quota_byte gauge
harbor_project_quota_byte{project_name="library"} -1   # -1 means unlimited
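These two quota gauges combine naturally into a capacity alert. A minimal sketch of a Prometheus alerting rule (the rule name and the 85% threshold are my own choices, not from Harbor); the second condition skips projects with an unlimited (-1) quota:
- alert: HarborProjectQuotaAlmostFull
  expr: (harbor_project_quota_usage_byte / harbor_project_quota_byte > 0.85) and (harbor_project_quota_byte > 0)   # usage above 85% of a finite quota
  for: 10m
  labels:
    status: Warning
  annotations:
    summary: "Project {{ $labels.project_name }} has used over 85% of its quota"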
(6) harbor_artifact_pulled
The total number of artifact (image) pulls in a project.
$ curl http://192.168.2.22:9099/metrics | grep harbor_artifact_pulled
# HELP harbor_artifact_pulled The pull number of an artifact
# TYPE harbor_artifact_pulled gauge
harbor_artifact_pulled{project_name="library"} 0
(7) harbor_project_artifact_total
The total number of artifacts of each type in a project. Labels: artifact_type, project_name, public (true, false).
$ curl http://192.168.2.22:9099/metrics | grep harbor_project_artifact_total
(8) harbor_health
The overall running status of Harbor.
$ curl http://192.168.2.22:9099/metrics | grep harbor_health
# HELP harbor_health Running status of Harbor
# TYPE harbor_health gauge
harbor_health 1   # 1 means healthy, 0 means unhealthy
(9) harbor_system_info
Information about the Harbor instance. Labels: auth_mode (db_auth, ldap_auth, uaa_auth, http_auth, oidc_auth), harbor_version, self_registration (true, false).
$ curl http://192.168.2.22:9099/metrics | grep harbor_system_info
# HELP harbor_system_info Information of Harbor system
# TYPE harbor_system_info gauge
harbor_system_info{auth_mode="db_auth",harbor_version="v2.4.2-ef2e2e56",self_registration="false"} 1
(10) harbor_up
The running status of each Harbor component. Label: component (chartmuseum, core, database, jobservice, portal, redis, registry, registryctl, trivy).
$ curl http://192.168.2.22:9099/metrics | grep harbor_up
# HELP harbor_up Running status of harbor component
# TYPE harbor_up gauge
harbor_up{component="chartmuseum"} 1
harbor_up{component="core"} 1
harbor_up{component="database"} 1
harbor_up{component="jobservice"} 1
harbor_up{component="portal"} 1
harbor_up{component="redis"} 1
harbor_up{component="registry"} 1
harbor_up{component="registryctl"} 1
harbor_up{component="trivy"} 1   # running status of the Trivy scanner
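harbor_health and harbor_up are the most useful gauges for basic availability alerting. A minimal sketch of two alerting rules built on them (the rule names and the 1m window are illustrative, not from the Harbor docs):
- alert: HarborUnhealthy
  expr: harbor_health == 0   # 0 means Harbor is unhealthy
  for: 1m
  labels:
    status: Warning
  annotations:
    summary: "{{ $labels.instance }}: Harbor is unhealthy"
- alert: HarborComponentDown
  expr: harbor_up == 0       # fires once per failed component
  for: 1m
  labels:
    status: Warning
  annotations:
    summary: "{{ $labels.instance }}: component {{ $labels.component }} is down"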
(11) harbor_task_queue_size
The total number of tasks of each type in the queue.
$ curl http://192.168.2.22:9099/metrics | grep harbor_task_queue_size
# HELP harbor_task_queue_size Total number of tasks
# TYPE harbor_task_queue_size gauge
harbor_task_queue_size{type="DEMO"} 0
harbor_task_queue_size{type="GARBAGE_COLLECTION"} 0
harbor_task_queue_size{type="IMAGE_GC"} 0
harbor_task_queue_size{type="IMAGE_REPLICATE"} 0
harbor_task_queue_size{type="IMAGE_SCAN"} 0
harbor_task_queue_size{type="IMAGE_SCAN_ALL"} 0
harbor_task_queue_size{type="P2P_PREHEAT"} 0
harbor_task_queue_size{type="REPLICATION"} 0
harbor_task_queue_size{type="RETENTION"} 0
harbor_task_queue_size{type="SCHEDULER"} 0
harbor_task_queue_size{type="SLACK"} 0
harbor_task_queue_size{type="WEBHOOK"} 0
(12) harbor_task_queue_latency
How long ago the next job to be processed was enqueued, by type.
$ curl http://192.168.2.22:9099/metrics | grep harbor_task_queue_latency
# HELP harbor_task_queue_latency how long ago the next job to be processed was enqueued
# TYPE harbor_task_queue_latency gauge
harbor_task_queue_latency{type="DEMO"} 0
harbor_task_queue_latency{type="GARBAGE_COLLECTION"} 0
harbor_task_queue_latency{type="IMAGE_GC"} 0
harbor_task_queue_latency{type="IMAGE_REPLICATE"} 0
harbor_task_queue_latency{type="IMAGE_SCAN"} 0
harbor_task_queue_latency{type="IMAGE_SCAN_ALL"} 0
harbor_task_queue_latency{type="P2P_PREHEAT"} 0
harbor_task_queue_latency{type="REPLICATION"} 0
harbor_task_queue_latency{type="RETENTION"} 0
harbor_task_queue_latency{type="SCHEDULER"} 0
harbor_task_queue_latency{type="SLACK"} 0
harbor_task_queue_latency{type="WEBHOOK"} 0
(13) harbor_task_scheduled_total
The number of scheduled tasks.
$ curl http://192.168.2.22:9099/metrics | grep harbor_task_scheduled_total
# HELP harbor_task_scheduled_total total number of scheduled job
# TYPE harbor_task_scheduled_total gauge
harbor_task_scheduled_total 0
(14) harbor_task_concurrency
The total number of concurrent tasks of each type on a pool.
$ curl http://192.168.2.22:9099/metrics | grep harbor_task_concurrency
harbor_task_concurrency{pool="d4053262b74f0a7b83bc6add",type="GARBAGE_COLLECTION"} 0
3.2 harbor-core component metrics
Core services (Admin Server) are the heart of Harbor and provide the following services:
- UI: a graphical interface that helps users manage images on the registry and handles user authorization.
- Webhook: to learn about image status changes on the registry in time, a webhook is configured on the Registry to push status changes to the UI module.
- Auth service: issues tokens for each docker push/pull command according to user permissions. A request sent to the Registry by the Docker client without a token is redirected here; after obtaining a token, the request to the Registry is retried.
- API: provides the Harbor RESTful API.
- Replication Job Service: replicates images between multiple Harbor instances.
- Log collector: collects the logs of the other components for later analysis, to help monitor Harbor's operation.
The following metrics are scraped from the Harbor core component. They are available at harbor_instance:metrics_port/metrics_path?comp=core.
(1) harbor_core_http_inflight_requests
The total number of in-flight requests. Label: operation (the operationId value from the Harbor API; some legacy endpoints have no operationId, so the label value is unknown).
Metrics of the harbor-core component:
$ curl http://192.168.2.22:9099/metrics?comp=core | grep harbor_core_http_inflight_requests
# HELP harbor_core_http_inflight_requests The total number of requests
# TYPE harbor_core_http_inflight_requests gauge
harbor_core_http_inflight_requests 0
(2) harbor_core_http_request_duration_seconds
The duration of requests. Labels: method (GET, POST, HEAD, PATCH, PUT), operation (the operationId value from the Harbor API; some legacy endpoints have none, so the label value is unknown), quantile.
$ curl http://192.168.2.22:9099/metrics?comp=core | grep harbor_core_http_request_duration_seconds
# HELP harbor_core_http_request_duration_seconds The time duration of the requests
# TYPE harbor_core_http_request_duration_seconds summary
harbor_core_http_request_duration_seconds{method="GET",operation="GetHealth",quantile="0.5"} 0.001797115
harbor_core_http_request_duration_seconds{method="GET",operation="GetHealth",quantile="0.9"} 0.010445204
harbor_core_http_request_duration_seconds{method="GET",operation="GetHealth",quantile="0.99"} 0.010445204
(3) harbor_core_http_request_total
The total number of requests. Labels: method (GET, POST, HEAD, PATCH, PUT), operation (the operationId value from the Harbor API; some legacy endpoints have none, so the label value is unknown).
$ curl http://192.168.2.22:9099/metrics?comp=core | grep harbor_core_http_request_total
# HELP harbor_core_http_request_total The total number of requests
# TYPE harbor_core_http_request_total counter
harbor_core_http_request_total{code="200",method="GET",operation="GetHealth"} 14
harbor_core_http_request_total{code="200",method="GET",operation="GetInternalconfig"} 1
harbor_core_http_request_total{code="200",method="GET",operation="GetPing"} 176
harbor_core_http_request_total{code="200",method="GET",operation="GetSystemInfo"} 14
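Because harbor_core_http_request_duration_seconds is a summary with a quantile label and harbor_core_http_request_total is a counter with a code label, both can back simple API health alerts. A hedged sketch (the rule names and thresholds are arbitrary examples of mine):
- alert: HarborCoreApiSlow
  expr: harbor_core_http_request_duration_seconds{quantile="0.9"} > 1   # p90 latency above 1s
  for: 5m
  labels:
    status: Warning
  annotations:
    summary: "{{ $labels.operation }} p90 latency is above 1s"
- alert: HarborCoreApi5xx
  expr: sum(rate(harbor_core_http_request_total{code=~"5.."}[5m])) by (operation) > 0   # any server errors
  for: 5m
  labels:
    status: Warning
  annotations:
    summary: "{{ $labels.operation }} is returning 5xx responses"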
3.3 registry component metrics
The following metrics are scraped from the Docker distribution (registry). They are available at harbor_instance:metrics_port/metrics_path?comp=registry.
(1) registry_http_in_flight_requests
The in-flight HTTP requests, labeled by handler.
$ curl http://192.168.2.22:9099/metrics?comp=registry | grep registry_http_in_flight_requests
# HELP registry_http_in_flight_requests The in-flight HTTP requests
# TYPE registry_http_in_flight_requests gauge
registry_http_in_flight_requests{handler="base"} 0
registry_http_in_flight_requests{handler="blob"} 0
registry_http_in_flight_requests{handler="blob_upload"} 0
registry_http_in_flight_requests{handler="blob_upload_chunk"} 0
registry_http_in_flight_requests{handler="catalog"} 0
registry_http_in_flight_requests{handler="manifest"} 0
registry_http_in_flight_requests{handler="tags"} 0
(2) registry_http_request_duration_seconds
The HTTP request latency in seconds. Labels: handler, method (GET, POST, HEAD, PATCH, PUT).
$ curl http://192.168.2.22:9099/metrics?comp=registry | grep registry_http_request_duration_seconds
(3) registry_http_request_size_bytes
The HTTP request size in bytes.
$ curl http://192.168.2.22:9099/metrics?comp=registry | grep registry_http_request_size_bytes
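If you also want latency alerting on the registry side, note that Docker distribution exposes request duration as a histogram, so the usual histogram_quantile() pattern applies. A sketch, assuming the _bucket series are present (the rule name and the 5s threshold are examples of mine):
- alert: RegistrySlowRequests
  expr: histogram_quantile(0.9, sum(rate(registry_http_request_duration_seconds_bucket[5m])) by (le, handler)) > 5   # p90 per handler
  for: 5m
  labels:
    status: Warning
  annotations:
    summary: "Registry handler {{ $labels.handler }} p90 latency is above 5s"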
3.4 jobservice component metrics
The following metrics are scraped from Harbor Jobservice.
They are available at harbor_instance:metrics_port/metrics_path?comp=jobservice.
(1) harbor_jobservice_info
Information about the Jobservice.
$ curl http://192.168.2.22:9099/metrics?comp=jobservice | grep harbor_jobservice_info
# HELP harbor_jobservice_info the information of jobservice
# TYPE harbor_jobservice_info gauge
harbor_jobservice_info{node="f47de52e23b7:172.18.0.11",pool="35f1301b0e261d18fac7ba41",workers="10"} 1
(2) harbor_jobservice_task_total
The number of processed tasks per job type.
$ curl http://192.168.2.22:9099/metrics?comp=jobservice | grep harbor_jobservice_task_total
(3) harbor_jobservice_task_process_time_seconds
The duration of task processing, i.e., how long a task takes from the start of execution to completion.
$ curl http://192.168.2.22:9099/metrics?comp=jobservice | grep harbor_jobservice_task_process_time_seconds
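The task-queue metrics from section 3.1 pair well with these jobservice metrics: harbor_task_queue_latency reports, in seconds, how long the next pending job has been waiting, so a sustained high value means the job workers are falling behind. A minimal sketch (the rule name and the one-hour threshold are my own choices):
- alert: HarborTaskQueueStalled
  expr: harbor_task_queue_latency > 3600   # next pending job has waited over 1 hour
  for: 10m
  labels:
    status: Warning
  annotations:
    summary: "Task queue {{ $labels.type }} is backed up"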
4. Deploy the Prometheus Server (binary)
4.1 Create the installation directory
$ mkdir /etc/prometheus
4.2 Download the package
$ wget https://github.com/prometheus/prometheus/releases/download/v2.36.2/prometheus-2.36.2.linux-amd64.tar.gz -c
$ tar zxvf prometheus-2.36.2.linux-amd64.tar.gz -C /etc/prometheus
$ cp /etc/prometheus/prometheus-2.36.2.linux-amd64/{prometheus,promtool} /usr/local/bin/
$ prometheus --version   # check the version
prometheus, version 2.36.2 (branch: HEAD, revision: d7e7b8e04b5ecdc1dd153534ba376a622b72741b)
  build user:       root@f051ce0d6050
  build date:       20220620-13:21:35
  go version:       go1.18.3
  platform:         linux/amd64
4.3 Modify the configuration file
Point Prometheus at the Harbor metrics endpoints in its configuration file.
$ cp prometheus-2.36.2.linux-amd64/prometheus.yml /etc/prometheus/
$ cat << EOF > /etc/prometheus/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

## Alertmanager addresses
alerting:
  alertmanagers:
    - static_configs:
        - targets: ["192.168.2.10:9093"]   # fill in the Alertmanager address

## Alerting rules file
rule_files:
  - /etc/prometheus/rules.yml

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "node-exporter"
    static_configs:
      - targets:
          - 192.168.2.22:9100
  - job_name: "harbor-exporter"
    scrape_interval: 20s
    static_configs:
      - targets: ["192.168.2.22:9099"]
  - job_name: "harbor-core"
    params:
      comp: ["core"]
    static_configs:
      - targets: ["192.168.2.22:9099"]
  - job_name: "harbor-registry"
    params:
      comp: ["registry"]
    static_configs:
      - targets: ["192.168.2.22:9099"]
  - job_name: "harbor-jobservice"
    params:
      comp: ["jobservice"]
    static_configs:
      - targets: ["192.168.2.22:9099"]
EOF
4.4 Syntax check
Check that the configuration file syntax is correct:
$ promtool check config /etc/prometheus/prometheus.yml
Checking /etc/prometheus/prometheus.yml
  SUCCESS: /etc/prometheus/prometheus.yml is valid prometheus config file syntax
Checking /etc/prometheus/rules.yml
  SUCCESS: 6 rules found
4.5 Create the systemd unit file
$ cat << EOF > /usr/lib/systemd/system/prometheus.service
[Unit]
Description=Prometheus Service
Documentation=https://prometheus.io/docs/introduction/overview/
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/local/bin/prometheus --config.file=/etc/prometheus/prometheus.yml

[Install]
WantedBy=multi-user.target
EOF
4.6 Start the service
$ systemctl daemon-reload
$ systemctl enable --now prometheus.service
$ systemctl status prometheus.service
4.7 Access the Prometheus UI in a browser
Enter host_IP:9090 in the browser's address bar to reach the Prometheus UI.
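In the UI's Graph page you can try expressions such as harbor_up or rate queries over the core API counters. A query that proves useful on dashboards can be precomputed with a recording rule; a hedged sketch of a standalone rule group (the rule name follows the common level:metric:operation convention and is my own; it could also be merged as an extra group under the existing groups: list in /etc/prometheus/rules.yml):
groups:
  - name: harbor-records
    rules:
      - record: operation:harbor_core_http_request_total:rate5m
        expr: sum(rate(harbor_core_http_request_total[5m])) by (operation)   # per-operation request rate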
5. Deploy node-exporter
The node-exporter service collects host resource metrics such as CPU, memory, and disk.
5.1 Download the package
$ wget https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
$ tar zxvf node_exporter-1.2.2.linux-amd64.tar.gz
$ cp node_exporter-1.2.2.linux-amd64/node_exporter /usr/local/bin/
$ node_exporter --version
node_exporter, version 1.2.2 (branch: HEAD, revision: 26645363b486e12be40af7ce4fc91e731a33104e)
  build user:       root@b9cb4aa2eb17
  build date:       20210806-13:44:18
  go version:       go1.16.7
  platform:         linux/amd64
5.2 Create the systemd unit file
$ cat << EOF > /usr/lib/systemd/system/node-exporter.service
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
ExecStart=/usr/local/bin/node_exporter
#User=prometheus

[Install]
WantedBy=multi-user.target
EOF
5.3 Start the service
$ systemctl daemon-reload
$ systemctl enable --now node-exporter.service
$ systemctl status node-exporter.service
$ ss -ntulp | grep node_exporter
tcp LISTEN 0 128 :::9100 :::* users:(("node_exporter",pid=36218,fd=3))
5.4 View the node metrics
Use curl to fetch the monitoring data collected by the node-exporter service.
$ curl http://localhost:9100/metrics
6. Deploy Grafana and Design the Dashboard
Deploy the Grafana v8.4.4 service from the binary package.
6.1 Download the package
$ wget https://dl.grafana.com/enterprise/release/grafana-enterprise-8.4.4.linux-amd64.tar.gz -c
$ tar zxvf grafana-enterprise-8.4.4.linux-amd64.tar.gz -C /etc/
$ mv /etc/grafana-8.4.4 /etc/grafana
$ cp -a /etc/grafana/bin/{grafana-cli,grafana-server} /usr/local/bin/
# install dependency packages
$ yum install -y fontpackages-filesystem.noarch libXfont libfontenc lyx-fonts.noarch xorg-x11-font-utils
6.2 Install plugins
Install the Grafana clock plugin:
$ grafana-cli plugins install grafana-clock-panel
Install the Zabbix plugin:
$ grafana-cli plugins install alexanderzobnin-zabbix-app
Install the server-side image rendering dependencies:
$ yum install -y fontconfig freetype* urw-fonts
6.3 Create the systemd unit file
$ cat << EOF > /usr/lib/systemd/system/grafana.service
[Service]
Type=notify
ExecStart=/usr/local/bin/grafana-server -homepath /etc/grafana
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
-homepath specifies Grafana's working directory.
6.4 Start the grafana service
$ systemctl daemon-reload
$ systemctl enable --now grafana.service
$ systemctl status grafana.service
$ ss -ntulp | grep grafana-server
tcp LISTEN 0 128 :::3000 :::* users:(("grafana-server",pid=120140,fd=9))
6.5 Configure the data source
Open the Grafana UI by entering the host IP and the grafana service port in the browser, then add Prometheus as the data source.
The default username/password is admin/admin.
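Instead of clicking through the UI, the data source can also be provisioned from a file. A minimal sketch, assuming the -homepath /etc/grafana used above (so the default provisioning directory is /etc/grafana/conf/provisioning); the file name is arbitrary:
# /etc/grafana/conf/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://192.168.2.22:9090   # the Prometheus server deployed in section 4
    isDefault: true
Grafana loads provisioning files at startup, so restart the grafana service after adding this file.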
6.6 Import the JSON dashboard template
Once the Prometheus server is collecting your Harbor metrics, you can use Grafana to visualize the data. A sample Grafana dashboard is provided in the Harbor repository to help you get started with visualizing Harbor metrics. Download it from https://github.com/goharbor/harbor/blob/main/contrib/grafana-dashborad/metrics-example.json
7. Deploy the AlertManager Service (extension)
Alertmanager is a standalone alerting module. It receives alerts from clients such as Prometheus, groups and deduplicates them, and routes them to the correct receiver.
7.1 Download the package
$ wget https://github.com/prometheus/alertmanager/releases/download/v0.23.0/alertmanager-0.23.0.linux-amd64.tar.gz
$ tar zxvf alertmanager-0.23.0.linux-amd64.tar.gz
$ cp alertmanager-0.23.0.linux-amd64/{alertmanager,amtool} /usr/local/bin/
7.2 Modify the configuration file
$ mkdir /etc/alertmanager
$ cat /etc/alertmanager/alertmanager.yml
global:
  resolve_timeout: 5m
route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'web.hook'
receivers:
  - name: 'web.hook'
    webhook_configs:
      - url: 'http://127.0.0.1:5001/'
inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'dev', 'instance']
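The default web.hook receiver only posts to a local placeholder URL. In practice you would swap in a real receiver; a hedged sketch of an email receiver (all addresses and the SMTP host below are placeholder values of mine):
receivers:
  - name: 'email'
    email_configs:
      - to: 'ops@example.com'               # placeholder recipient
        from: 'alertmanager@example.com'    # placeholder sender
        smarthost: 'smtp.example.com:587'   # placeholder SMTP server
        auth_username: 'alertmanager@example.com'
        auth_password: 'changeme'
        send_resolved: true
Remember to point route.receiver at the new receiver name.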
7.3 Create the systemd unit file
$ cat << EOF > /usr/lib/systemd/system/alertmanager.service
[Unit]
Description=alertmanager
After=network.target

[Service]
ExecStart=/usr/local/bin/alertmanager --config.file=/etc/alertmanager/alertmanager.yml
ExecReload=/bin/kill -HUP \$MAINPID
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
7.4 Start the service
$ systemctl daemon-reload
$ systemctl enable --now alertmanager.service
$ systemctl status alertmanager.service
$ ss -ntulp | grep alertmanager
7.5 Configure alerting rules
Earlier, the Prometheus server configuration file specified /etc/prometheus/rules.yml as the alerting rules file.
$ cat /etc/prometheus/rules.yml
groups:
- name: Warning
  rules:
  - alert: NodeMemoryUsage
    expr: 100 - (node_memory_MemFree_bytes + node_memory_Cached_bytes + node_memory_Buffers_bytes) / node_memory_MemTotal_bytes * 100 > 80
    for: 1m
    labels:
      status: Warning
    annotations:
      summary: "{{$labels.instance}}: high memory usage"
      description: "{{$labels.instance}}: memory usage is above 80% (current value: {{ $value }})"
  - alert: NodeCpuUsage
    expr: (1 - ((sum(increase(node_cpu_seconds_total{mode="idle"}[1m])) by (instance)) / (sum(increase(node_cpu_seconds_total[1m])) by (instance)))) * 100 > 70
    for: 1m
    labels:
      status: Warning
    annotations:
      summary: "{{$labels.instance}}: high CPU usage"
      description: "{{$labels.instance}}: CPU usage is above 70% (current value: {{ $value }})"
  - alert: NodeDiskUsage
    expr: 100 - node_filesystem_free_bytes{fstype=~"xfs|ext4"} / node_filesystem_size_bytes{fstype=~"xfs|ext4"} * 100 > 80
    for: 1m
    labels:
      status: Warning
    annotations:
      summary: "{{$labels.instance}}: high partition usage"
      description: "{{$labels.instance}}: partition usage is above 80% (current value: {{ $value }})"
  - alert: Node-UP
    expr: up{job="node-exporter"} == 0
    for: 1m
    labels:
      status: Warning
    annotations:
      summary: "{{$labels.instance}}: service down"
      description: "{{$labels.instance}}: the service has been down for more than 1 minute"
  - alert: TCP
    expr: node_netstat_Tcp_CurrEstab > 1000
    for: 1m
    labels:
      status: Warning
    annotations:
      summary: "{{$labels.instance}}: too many TCP connections"
      description: "{{$labels.instance}}: more than 1000 established connections (current value: {{$value}})"
  - alert: IO
    expr: 100 - (avg(irate(node_disk_io_time_seconds_total[1m])) by(instance) * 100) < 60
    for: 1m
    labels:
      status: Warning
    annotations:
      summary: "{{$labels.instance}}: high disk IO usage"
      description: "{{$labels.instance}}: disk IO is above 60% (current value: {{$value}})"
Recommended alerting rules (the first one is written out as a rules-file entry below):
- Harbor storage exhausted within 24 hours: predict_linear(harbor_system_volumes_bytes{storage="free"}[6h], 3600 * 24) < 0
- Harbor storage usage above 80%: (1 - sum(harbor_system_volumes_bytes{storage="free"}) / sum(harbor_system_volumes_bytes{storage="total"})) * 100 > 80
- An image pulled more than 5 times within 20 minutes: increase(harbor_image_pull_count[20m]) > 5
- Postgres connection count approaching the limit: harbor_database_connections > 45
- Postgres unhealthy: harbor_database_health != 1
- Harbor component unhealthy: kube_deployment_status_replicas_available{namespace="harbor-2"} < 1
- Harbor database unhealthy: kube_statefulset_status_replicas_ready{namespace="harbor-2"} < 1
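As a concrete example, here is the first recommended rule written out as a rules-file entry (the rule name and for: duration are my own; the expression extrapolates the free bytes 24 hours ahead from the last 6 hours of data and fires if the prediction goes negative):
- alert: HarborStorageFullIn24h
  expr: predict_linear(harbor_system_volumes_bytes{storage="free"}[6h], 3600 * 24) < 0
  for: 30m
  labels:
    status: Warning
  annotations:
    summary: "Harbor storage is predicted to be exhausted within 24 hours"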
Configuring the Grafana panel: starting from the official Grafana dashboards and combining ID 16686 and ID 14075, you can quickly build the Harbor monitoring panel you want. The panel gives a direct view of the Harbor instance status, overall storage volume, and each project's core information (repository count, image storage size, pull count, and so on).
Grafana panel: http://grafana.cpaas.com/d/Nhhla1VGk/harbor-dashbord?orgId=1
At this point, we can quickly monitor Harbor with Prometheus and Grafana. Monitoring surfaces the basic repository information and storage usage; for projects that consume a lot of storage, we can configure a daily cleanup policy in Harbor as needed (for example, keeping only the tags from an image's 10 most recent pushes).