Contents

I. Download and deployment
II. postgres_exporter configuration
    1. Stop script stop.sh
    2. Start script start.sh
    3. queries.yaml
III. PostgreSQL database configuration
    1. Edit the postgresql.conf configuration file
    2. Create the user, views, extension, etc.
IV. References

I. Download and deployment

Go to the download page and pick one of the amd64 builds. Upload it to the server and extract it:

```bash
tar -xvf postgres_exporter-0.11.1.linux-amd64.tar.gz
```

Then change into the extracted directory.

II. postgres_exporter configuration

1. Stop script stop.sh

Create a stop script, stop.sh. Note: save it with Unix (LF) line endings.

```bash
#!/bin/bash
PID=$(ps -ef | grep postgres_exporter | grep -v grep | awk '{ print $2 }')
if [ -n "${PID}" ]
then
    echo "Application is stopping..."
    echo "kill $PID DONE"
    kill $PID
else
    echo "Application is already stopped..."
fi
```

2. Start script start.sh

Create the start script start.sh. The postgres_exporter database user (password: password) is created later, in section III. --web.listen-address sets the address and port the exporter listens on; --extend.query-path points to the file with the custom queries.

```bash
sh stop.sh
export DATA_SOURCE_NAME="postgresql://postgres_exporter:password@<database IP>:<database port>/postgres?sslmode=disable"
nohup ./postgres_exporter --web.listen-address=0.0.0.0:8001 --extend.query-path=queries.yaml > nohup.out 2>&1 &
```
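Before relying on start.sh, it can be worth confirming that the DSN it exports is actually reachable. A minimal check — my addition, not part of the original post — assuming psql is installed on the exporter host and that the postgres_exporter user from section III has already been created (placeholders as in start.sh):

```bash
# Should print the PostgreSQL version if the connection string works.
psql "postgresql://postgres_exporter:password@<database IP>:<database port>/postgres?sslmode=disable" -c "SELECT version();"
```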
3. queries.yaml

```yaml
pg_replication:
  query: "SELECT CASE WHEN NOT pg_is_in_recovery() THEN 0 ELSE GREATEST (0, EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))) END AS lag"
  master: true
  metrics:
    - lag:
        usage: "GAUGE"
        description: "Replication lag behind master in seconds"

pg_postmaster:
  query: "SELECT pg_postmaster_start_time as start_time_seconds from pg_postmaster_start_time()"
  master: true
  metrics:
    - start_time_seconds:
        usage: "GAUGE"
        description: "Time at which postmaster started"

pg_stat_user_tables:
  query: |
    SELECT
      current_database() datname,
      schemaname,
      relname,
      seq_scan,
      seq_tup_read,
      idx_scan,
      idx_tup_fetch,
      n_tup_ins,
      n_tup_upd,
      n_tup_del,
      n_tup_hot_upd,
      n_live_tup,
      n_dead_tup,
      n_mod_since_analyze,
      COALESCE(last_vacuum, '1970-01-01Z') as last_vacuum,
      COALESCE(last_autovacuum, '1970-01-01Z') as last_autovacuum,
      COALESCE(last_analyze, '1970-01-01Z') as last_analyze,
      COALESCE(last_autoanalyze, '1970-01-01Z') as last_autoanalyze,
      vacuum_count,
      autovacuum_count,
      analyze_count,
      autoanalyze_count
    FROM
      pg_stat_user_tables
  metrics:
    - datname:
        usage: "LABEL"
        description: "Name of current database"
    - schemaname:
        usage: "LABEL"
        description: "Name of the schema that this table is in"
    - relname:
        usage: "LABEL"
        description: "Name of this table"
    - seq_scan:
        usage: "COUNTER"
        description: "Number of sequential scans initiated on this table"
    - seq_tup_read:
        usage: "COUNTER"
        description: "Number of live rows fetched by sequential scans"
    - idx_scan:
        usage: "COUNTER"
        description: "Number of index scans initiated on this table"
    - idx_tup_fetch:
        usage: "COUNTER"
        description: "Number of live rows fetched by index scans"
    - n_tup_ins:
        usage: "COUNTER"
        description: "Number of rows inserted"
    - n_tup_upd:
        usage: "COUNTER"
        description: "Number of rows updated"
    - n_tup_del:
        usage: "COUNTER"
        description: "Number of rows deleted"
    - n_tup_hot_upd:
        usage: "COUNTER"
        description: "Number of rows HOT updated (i.e., with no separate index update required)"
    - n_live_tup:
        usage: "GAUGE"
        description: "Estimated number of live rows"
    - n_dead_tup:
        usage: "GAUGE"
        description: "Estimated number of dead rows"
    - n_mod_since_analyze:
        usage: "GAUGE"
        description: "Estimated number of rows changed since last analyze"
    - last_vacuum:
        usage: "GAUGE"
        description: "Last time at which this table was manually vacuumed (not counting VACUUM FULL)"
    - last_autovacuum:
        usage: "GAUGE"
        description: "Last time at which this table was vacuumed by the autovacuum daemon"
    - last_analyze:
        usage: "GAUGE"
        description: "Last time at which this table was manually analyzed"
    - last_autoanalyze:
        usage: "GAUGE"
        description: "Last time at which this table was analyzed by the autovacuum daemon"
    - vacuum_count:
        usage: "COUNTER"
        description: "Number of times this table has been manually vacuumed (not counting VACUUM FULL)"
    - autovacuum_count:
        usage: "COUNTER"
        description: "Number of times this table has been vacuumed by the autovacuum daemon"
    - analyze_count:
        usage: "COUNTER"
        description: "Number of times this table has been manually analyzed"
    - autoanalyze_count:
        usage: "COUNTER"
        description: "Number of times this table has been analyzed by the autovacuum daemon"

pg_statio_user_tables:
  query: "SELECT current_database() datname, schemaname, relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit, toast_blks_read, toast_blks_hit, tidx_blks_read, tidx_blks_hit FROM pg_statio_user_tables"
  metrics:
    - datname:
        usage: "LABEL"
        description: "Name of current database"
    - schemaname:
        usage: "LABEL"
        description: "Name of the schema that this table is in"
    - relname:
        usage: "LABEL"
        description: "Name of this table"
    - heap_blks_read:
        usage: "COUNTER"
        description: "Number of disk blocks read from this table"
    - heap_blks_hit:
        usage: "COUNTER"
        description: "Number of buffer hits in this table"
    - idx_blks_read:
        usage: "COUNTER"
        description: "Number of disk blocks read from all indexes on this table"
    - idx_blks_hit:
        usage: "COUNTER"
        description: "Number of buffer hits in all indexes on this table"
    - toast_blks_read:
        usage: "COUNTER"
        description: "Number of disk blocks read from this table's TOAST table (if any)"
    - toast_blks_hit:
        usage: "COUNTER"
        description: "Number of buffer hits in this table's TOAST table (if any)"
    - tidx_blks_read:
        usage: "COUNTER"
        description: "Number of disk blocks read from this table's TOAST table indexes (if any)"
    - tidx_blks_hit:
        usage: "COUNTER"
        description: "Number of buffer hits in this table's TOAST table indexes (if any)"

# WARNING: This set of metrics can be very expensive on a busy server as every unique query executed will create an additional time series
pg_stat_statements:
  query: "SELECT t2.rolname, t3.datname, queryid, calls, total_time / 1000 as total_time_seconds, min_time / 1000 as min_time_seconds, max_time / 1000 as max_time_seconds, mean_time / 1000 as mean_time_seconds, stddev_time / 1000 as stddev_time_seconds, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time / 1000 as blk_read_time_seconds, blk_write_time / 1000 as blk_write_time_seconds FROM pg_stat_statements t1 JOIN pg_roles t2 ON (t1.userid=t2.oid) JOIN pg_database t3 ON (t1.dbid=t3.oid) WHERE t2.rolname != 'rdsadmin'"
  master: true
  metrics:
    - rolname:
        usage: "LABEL"
        description: "Name of user"
    - datname:
        usage: "LABEL"
        description: "Name of database"
    - queryid:
        usage: "LABEL"
        description: "Query ID"
    - calls:
        usage: "COUNTER"
        description: "Number of times executed"
    - total_time_seconds:
        usage: "COUNTER"
        description: "Total time spent in the statement, in milliseconds"
    - min_time_seconds:
        usage: "GAUGE"
        description: "Minimum time spent in the statement, in milliseconds"
    - max_time_seconds:
        usage: "GAUGE"
        description: "Maximum time spent in the statement, in milliseconds"
    - mean_time_seconds:
        usage: "GAUGE"
        description: "Mean time spent in the statement, in milliseconds"
    - stddev_time_seconds:
        usage: "GAUGE"
        description: "Population standard deviation of time spent in the statement, in milliseconds"
    - rows:
        usage: "COUNTER"
        description: "Total number of rows retrieved or affected by the statement"
    - shared_blks_hit:
        usage: "COUNTER"
        description: "Total number of shared block cache hits by the statement"
    - shared_blks_read:
        usage: "COUNTER"
        description: "Total number of shared blocks read by the statement"
    - shared_blks_dirtied:
        usage: "COUNTER"
        description: "Total number of shared blocks dirtied by the statement"
    - shared_blks_written:
        usage: "COUNTER"
        description: "Total number of shared blocks written by the statement"
    - local_blks_hit:
        usage: "COUNTER"
        description: "Total number of local block cache hits by the statement"
    - local_blks_read:
        usage: "COUNTER"
        description: "Total number of local blocks read by the statement"
    - local_blks_dirtied:
        usage: "COUNTER"
        description: "Total number of local blocks dirtied by the statement"
    - local_blks_written:
        usage: "COUNTER"
        description: "Total number of local blocks written by the statement"
    - temp_blks_read:
        usage: "COUNTER"
        description: "Total number of temp blocks read by the statement"
    - temp_blks_written:
        usage: "COUNTER"
        description: "Total number of temp blocks written by the statement"
    - blk_read_time_seconds:
        usage: "COUNTER"
        description: "Total time the statement spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)"
    - blk_write_time_seconds:
        usage: "COUNTER"
        description: "Total time the statement spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)"

pg_process_idle:
  query: |
    WITH
      metrics AS (
        SELECT
          application_name,
          SUM(EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - state_change))::bigint)::float AS process_idle_seconds_sum,
          COUNT(*) AS process_idle_seconds_count
        FROM pg_stat_activity
        WHERE state = 'idle'
        GROUP BY application_name
      ),
      buckets AS (
        SELECT
          application_name,
          le,
          SUM(
            CASE WHEN EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - state_change)) <= le
              THEN 1
              ELSE 0
            END
          )::bigint AS bucket
        FROM
          pg_stat_activity,
          UNNEST(ARRAY[1, 2, 5, 15, 30, 60, 90, 120, 300]) AS le
        GROUP BY application_name, le
        ORDER BY application_name, le
      )
    SELECT
      application_name,
      process_idle_seconds_sum as seconds_sum,
      process_idle_seconds_count as seconds_count,
      ARRAY_AGG(le) AS seconds,
      ARRAY_AGG(bucket) AS seconds_bucket
    FROM metrics JOIN buckets USING (application_name)
    GROUP BY 1, 2, 3
  metrics:
    - application_name:
        usage: "LABEL"
        description: "Application Name"
    - seconds:
        usage: "HISTOGRAM"
        description: "Idle time of server processes"
```
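For orientation (my note, not from the original post): postgres_exporter turns each top-level key in queries.yaml into a metric-name prefix — columns marked LABEL become labels, and the remaining columns become `<prefix>_<column>` series. The file above should therefore surface on the /metrics endpoint roughly as lines like the following (label values here are invented):

```text
pg_replication_lag 0
pg_stat_user_tables_n_dead_tup{datname="postgres",schemaname="public",relname="some_table"} 42
```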
III. PostgreSQL database configuration

1. Edit the postgresql.conf configuration file

First locate the configuration file on the server:

```bash
find / -name postgresql.conf
```

Edit postgresql.conf and add the following three lines:

```conf
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 1000
pg_stat_statements.track = all
```

Restart the PostgreSQL service (the exact restart command may differ depending on how PostgreSQL was installed):

```bash
pg_ctl restart
```
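A quick sanity check — my addition, not part of the original walkthrough — is to confirm from psql that the new settings are active after the restart:

```sql
-- Both should reflect the values added to postgresql.conf above.
SHOW shared_preload_libraries;   -- expected to include pg_stat_statements
SHOW pg_stat_statements.track;   -- expected: all
```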
2. Create the user, views, extension, etc.

The official documentation says the SQL to run differs for databases above and below PostgreSQL 10, but on my PostgreSQL 11 instance running only the higher-version SQL failed; in the end I ran both the higher- and lower-version SQL and everything succeeded. It is best to execute the statements in the public schema of the postgres database.

For PostgreSQL 10 and above, all three of the following SQL snippets need to be executed:

```sql
-- To use IF statements, hence to be able to check if the user exists before
-- attempting creation, we need to switch to procedural SQL (PL/pgSQL)
-- instead of standard SQL.
-- More: https://www.postgresql.org/docs/9.3/plpgsql-overview.html
-- To preserve compatibility with 9.0, DO blocks are not used; instead,
-- a function is created and dropped.
CREATE OR REPLACE FUNCTION __tmp_create_user() returns void as $$
BEGIN
  IF NOT EXISTS (
          SELECT                       -- SELECT list can stay empty for this
          FROM   pg_catalog.pg_user
          WHERE  usename = 'postgres_exporter') THEN
    CREATE USER postgres_exporter;
  END IF;
END;
$$ language plpgsql;

SELECT __tmp_create_user();
DROP FUNCTION __tmp_create_user();

ALTER USER postgres_exporter WITH PASSWORD 'password';
ALTER USER postgres_exporter SET SEARCH_PATH TO postgres_exporter,pg_catalog;

-- If deploying as non-superuser (for example in AWS RDS), uncomment the GRANT
-- line below and replace <MASTER_USER> with your root user.
-- GRANT postgres_exporter TO <MASTER_USER>;

GRANT CONNECT ON DATABASE postgres TO postgres_exporter;

GRANT pg_monitor to postgres_exporter;
```

For PostgreSQL versions below 10, only the following SQL needs to be executed:

```sql
CREATE SCHEMA IF NOT EXISTS postgres_exporter;
GRANT USAGE ON SCHEMA postgres_exporter TO postgres_exporter;

CREATE OR REPLACE FUNCTION get_pg_stat_activity() RETURNS SETOF pg_stat_activity AS
$$ SELECT * FROM pg_catalog.pg_stat_activity; $$
LANGUAGE sql
VOLATILE
SECURITY DEFINER;

CREATE OR REPLACE VIEW postgres_exporter.pg_stat_activity AS
  SELECT * from get_pg_stat_activity();

GRANT SELECT ON postgres_exporter.pg_stat_activity TO postgres_exporter;

CREATE OR REPLACE FUNCTION get_pg_stat_replication() RETURNS SETOF pg_stat_replication AS
$$ SELECT * FROM pg_catalog.pg_stat_replication; $$
LANGUAGE sql
VOLATILE
SECURITY DEFINER;

CREATE OR REPLACE VIEW postgres_exporter.pg_stat_replication AS
  SELECT * FROM get_pg_stat_replication();

GRANT SELECT ON postgres_exporter.pg_stat_replication TO postgres_exporter;

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE OR REPLACE FUNCTION get_pg_stat_statements() RETURNS SETOF pg_stat_statements AS
$$ SELECT * FROM public.pg_stat_statements; $$
LANGUAGE sql
VOLATILE
SECURITY DEFINER;

CREATE OR REPLACE VIEW postgres_exporter.pg_stat_statements AS
  SELECT * FROM get_pg_stat_statements();

GRANT SELECT ON postgres_exporter.pg_stat_statements TO postgres_exporter;
```

Go to the postgres_exporter installation directory and start postgres_exporter:

```bash
sh start.sh
```

Check the nohup.out file for any error messages. If Grafana is already integrated, you will find that the dashboards are now collecting data (see the Grafana + Prometheus + postgres_exporter reference).

IV. References

- postgres_exporter GitHub repository
- Notes on issues encountered when using postgres_exporter
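Not part of the original post: to hook the exporter into the Prometheus/Grafana stack mentioned above, a minimal scrape job in prometheus.yml might look like the sketch below. The job name and the `<exporter host>` placeholder are my assumptions; the port 8001 comes from start.sh.

```yaml
scrape_configs:
  - job_name: "postgres_exporter"          # arbitrary job name
    static_configs:
      - targets: ["<exporter host>:8001"]  # port taken from --web.listen-address in start.sh
```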