

Detailed Tutorial: Installing and Deploying Prometheus with Docker


Overview of the Docker-based Prometheus deployment:

On the monitoring host, install:
Prometheus Server (the main Prometheus monitoring server)
Node Exporter (collects host hardware and OS metrics)
cAdvisor (collects metrics for the containers running on the host)
Grafana (visualizes the Prometheus data)

On each monitored host, install:
Node Exporter (collects host hardware and OS metrics)
cAdvisor (collects metrics for the containers running on the host)

1. Install Node Exporter

  • Install on all servers.
  • Node Exporter collects system metrics for monitoring CPU, memory, disk usage, disk I/O, and so on.
  • --net=host lets Prometheus Server talk to Node Exporter directly (note that in host network mode the -p 9100:9100 mapping below is effectively a no-op).
docker run -d -p 9100:9100 \
  -v "/proc:/host/proc" \
  -v "/sys:/host/sys" \
  -v "/:/rootfs" \
  -v "/etc/localtime:/etc/localtime" \
  --net=host \
  prom/node-exporter \
  --path.procfs /host/proc \
  --path.sysfs /host/sys \
  --collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"

[root@k8s-m1 ~]# docker ps|grep exporter
ee30add8d207   prom/node-exporter                                  "/bin/node_exporter …"   About a minute ago   Up About a minute                 condescending_shirley

2. Install cAdvisor

  • Install on all servers.
  • cAdvisor collects Docker metrics for monitoring per-container CPU, memory, upload/download traffic, and so on.
  • --net=host would let Prometheus Server talk to cAdvisor directly (the command below publishes port 18104 instead).
docker run -d \
  -v "/etc/localtime:/etc/localtime" \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=18104:8080 \
  --detach=true \
  --name=cadvisor \
  --privileged=true \
  google/cadvisor:latest

[root@k8s-m1 ~]# docker ps|grep cadvisor
cf6af6118055        google/cadvisor:latest                            "/usr/bin/cadvisor -…"   38 seconds ago       Up 37 seconds       0.0.0.0:18104->8080/tcp   cadvisor
可以進(jìn)入容器查看:
[root@agent ~]# sudo docker exec -it 容器id /bin/sh

3. Install Prometheus Server

Install on the monitoring host.

1) Edit the configuration file

  • First create prometheus.yml locally; this is the Prometheus configuration file.
  • Write the content below into the file.
  • Change the scrape targets to your own host addresses.
# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    #監(jiān)聽的地址
    - targets: ['localhost:9090','172.23.0.241:8088','172.23.0.241:9090']

2) Start the container

1> The prometheus.yml configuration file

Configure public IPs in prometheus.yml: apart from the local machine itself, targets listed by internal IP will not be visible from Grafana!

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    #監(jiān)聽的地址(此處為服務(wù)器內(nèi)網(wǎng)ip)
    - targets: ['10.27.158.33:9090','10.27.158.33:9100','10.27.158.33:18104']
    - targets: ['10.29.46.54:9100','10.29.46.54:18104']
    - targets: ['10.27.163.172:9100','10.27.163.172:18104']

#  - job_name: 'GitLab'
#    metrics_path: '/-/metrics'
#    static_configs:
#    - targets: ['172.23.0.241:10101']

  - job_name: 'jenkins'
    metrics_path: '/prometheus/'
    scheme: http
    bearer_token: bearer_token
    static_configs:
    - targets: ['172.23.0.242:8080']

  - job_name: "Nginx"
    metrics_path: '/status/format/prometheus'
    static_configs:
    - targets: ['172.23.0.242:8088']

2> Start command

--net=host lets Prometheus Server talk to the exporters and Grafana directly.

docker run -d -p 9090:9090 \
  -v /root/Prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
  -v "/etc/localtime:/etc/localtime" \
  --name prometheus \
  --net=host \
  prom/prometheus:latest

# Once the Prometheus container is up, open its web UI
# PS: the server must expose port 9090 on the public interface (eth0, bound to 0.0.0.0) for browser access
106.15.0.11:9090
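Once the UI is reachable, the same data can also be pulled programmatically from the HTTP API (GET /api/v1/query?query=up). The sketch below parses a response of that shape and lists scrape targets that are down; the JSON payload is an illustrative sample, not output captured from the servers above:

```python
import json

# Sample payload in the shape Prometheus's /api/v1/query returns for `up`.
# The instances and timestamps here are invented for illustration.
sample = '''
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {"metric": {"__name__": "up", "instance": "10.27.158.33:9100", "job": "prometheus"},
       "value": [1584268136.439, "1"]},
      {"metric": {"__name__": "up", "instance": "10.29.46.54:18104", "job": "prometheus"},
       "value": [1584268136.439, "0"]}
    ]
  }
}
'''

def down_targets(payload: str) -> list:
    """Return the instances whose `up` sample is 0, i.e. failing scrapes."""
    doc = json.loads(payload)
    return [r["metric"]["instance"]
            for r in doc["data"]["result"]
            if r["value"][1] == "0"]  # sample values arrive as strings

print(down_targets(sample))  # ['10.29.46.54:18104']
```

This is handy for wiring a quick health check into a script before full alerting is set up.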

4. Create and run Grafana

  • Install on the monitoring server.
  • Used to display the metrics graphically.
docker run -d -i -p 3000:3000 \
  -v "/etc/localtime:/etc/localtime" \
  -e "GF_SERVER_ROOT_URL=http://grafana.server.name" \
  -e "GF_SECURITY_ADMIN_PASSWORD=admin8888" \
  --net=host \
  grafana/grafana

# PS: the server must expose port 3000 on the public interface (eth0, bound to 0.0.0.0) for browser access
Once Grafana is up, open 172.23.0.241:3000 in a browser and log in:
	Username: admin
	Password: admin8888

1) Add the Prometheus server as a data source

Then build graph panels on top of the newly added data source.
5. Add monitoring dashboards

  • Building a dashboard by hand is fairly hard, so lean on open-source work instead: browse the dashboard gallery at https://grafana.com/grafana/dashboards, where you will find many ready-made dashboards for monitoring Docker. Download whichever templates match your needs.
  • Some dashboards can be imported directly after download, while others need editing first; check each dashboard's overview page.

Pick a suitable template, let it query the values from Prometheus, and the graphs show up in Grafana. That is all there is to it, and it works well.

6. Metric queries

With the metric io_namespace_http_requests_total we can:

Query the application's total request count
	sum(io_namespace_http_requests_total)
Query HTTP requests per second
	sum(rate(io_wise2c_gateway_requests_total[5m]))
Query the current top-N URIs by request count
	topk(10, sum(io_namespace_http_requests_total) by (path))
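As a rough illustration of what rate() and topk() compute, here is a minimal Python sketch over hand-made counter samples (the metric values and paths are invented; PromQL's real rate() additionally handles counter resets and extrapolation, which this skips):

```python
# Counter samples as (timestamp_seconds, value) per path label, mimicking
# a counter like io_namespace_http_requests_total.
samples = {
    "/api/login": [(0, 100), (60, 160), (120, 280)],
    "/api/list":  [(0, 500), (60, 530), (120, 590)],
    "/healthz":   [(0, 10),  (60, 12),  (120, 14)],
}

def rate(series):
    """Per-second increase over the window: (last - first) / elapsed."""
    (t0, v0), (t1, v1) = series[0], series[-1]
    return (v1 - v0) / (t1 - t0)

def topk(k, rates):
    """Highest-k series by value, like PromQL's topk()."""
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)[:k]

rates = {path: rate(s) for path, s in samples.items()}
print(topk(2, rates))  # /api/login is busiest: (280 - 100) / 120 = 1.5 req/s
```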

Configuring Prometheus to monitor Nginx

1. Nginx needs two extra modules before Prometheus can monitor it: nginx-module-vts and geoip.

2. The approach: whether the existing nginx was built from source or installed with yum, download the source tarball of the same version, re-run configure with the original options plus the two module options above, compile and install it to replace the original nginx, then move the original configuration (nginx.conf, the conf.d directory, and so on) into the newly built nginx, and finally start nginx.

Here we install from the official repository:
1) Configure the official repository

[root@web01 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

2) Install dependencies

yum install -y gcc gcc-c++ autoconf pcre pcre-devel make automake wget httpd-tools vim tree

3) Install nginx

[root@web01 ~]# yum install -y nginx

4) Configure nginx

[root@web01 ~]# vim /etc/nginx/nginx.conf
user www;

5) Start the service

1. Option 1: start directly. If this errors out because port 80 is already taken (typically by httpd), stop the conflicting service and start nginx again.
[root@web01 ~]# systemctl start nginx
2. Option 2:
[root@web01 ~]# nginx

1. Check the current Nginx build options

[root@db01 nginx-1.12.2]# nginx -V
[root@db01 nginx-1.12.2]# ./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-compat --with-debug --with-file-aio --with-google_perftools_module --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_degradation_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_mp4_module --with-http_perl_module=dynamic --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_xslt_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-threads --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E' 

2. Prepare the modules

# Download and unpack the new source tarball
[root@k8s-n1 packages]# wget http://nginx.org/download/nginx-1.16.1.tar.gz
[root@k8s-n1 packages]# tar xf nginx-1.16.1.tar.gz

# Clone the nginx-module-vts module
[root@k8s-n1 packages]# git clone https://github.com/vozlt/nginx-module-vts

# Install the GeoIP module dependencies
[root@k8s-n1 packages]# yum -y install epel-release geoip-devel

3. Stop the Nginx service

# Stop nginx
[root@k8s-n1 packages]# nginx -s stop

# Back up the original nginx binary
[root@k8s-n1 packages]# which nginx
/usr/sbin/nginx
[root@k8s-n1 packages]# mv /usr/sbin/nginx /usr/sbin/nginx.bak

# Back up the original nginx directory
[root@k8s-n1 packages]# mv /etc/nginx nginx-1.12.2.bak

4. Compile and install

1> Install build dependencies

Compilation may fail with `make: *** No rule to make target 'build', needed by 'default'. Stop.` That error means build dependencies are missing.

# Install the whole lot up front, before configuring; otherwise you will have to re-run ./configure after installing the missing pieces
yum install -y gcc gcc-c++ bash-completion vim lrzsz wget expect net-tools nc nmap tree dos2unix htop iftop iotop unzip telnet sl psmisc nethogs glances bc pcre-devel zlib zlib-devel openssl openssl-devel libxml2 libxml2-devel libxslt-devel gd gd-devel perl-devel perl-ExtUtils-Embed GeoIP GeoIP-devel GeoIP-data pcre-devel

2> Compile and install

  • Enter the nginx source directory unpacked earlier and build.
  • Re-use the original configure arguments, appending two extra options at the end:

--add-module=/root/packages/nginx-module-vts
--with-http_geoip_module

[root@db01 nginx-1.12.2]# ./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-compat --with-debug --with-file-aio --with-google_perftools_module --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_degradation_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_mp4_module --with-http_perl_module=dynamic --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_xslt_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-threads --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E' --add-module=/root/packages/nginx-module-vts --with-http_geoip_module
# Compile and install
# -j enables parallel compilation (not recommended on low-spec machines; it can hang)
[root@k8s-n1 nginx-1.12.2]# make -j && make install

5. Configure Nginx

[root@k8s-n1 packages]# cp -r nginx-1.12.2.bak/conf.d/ /etc/nginx/
[root@k8s-n1 packages]# cp -r nginx-1.12.2.bak/nginx.conf /etc/nginx/
[root@k8s-n1 packages]# rm -f /etc/nginx/conf.d/default.conf

Edit the Nginx configuration file; changes are needed in both the http block and a server block:

	···
http {	
	···
    include /etc/nginx/conf.d/*.conf;

	##################### 1. http block: add these three lines ##################### 
    vhost_traffic_status_zone;
    vhost_traffic_status_filter_by_host on;
    geoip_country /usr/share/GeoIP/GeoIP.dat;

	##################### 2. server block: pick a port for it (8088 suggested); if the port is free you can paste this as-is #####################
    server {
        listen       8088;
        server_name  localhost;
        # the vhost_traffic_status settings below go inside this location
        location /status {
        vhost_traffic_status on;	# traffic status; on is the default, so this line is optional
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
        vhost_traffic_status_filter_by_set_key $uri uri::$server_name;     # requests per URI
        vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;     # requests per country/region
        vhost_traffic_status_filter_by_set_key $status $server_name;     # counts per HTTP status code
        vhost_traffic_status_filter_by_set_key $upstream_addr upstream::backend;     # upstream forwarding stats
        vhost_traffic_status_filter_by_set_key $remote_port client::ports::$server_name;     # stats per client port
        vhost_traffic_status_filter_by_set_key $remote_addr client::addr::$server_name;     # stats per client IP

        location ~ ^/storage/(.+)/.*$ {
            set $volume $1;
            vhost_traffic_status_filter_by_set_key $volume storage::$server_name;     # stats per request path
        }
        }
    }
   	##################### server block: add a new server, or fold this into an existing non-critical one #####################
}

6. Start Nginx

[root@k8s-n1 packages]# nginx
[root@k8s-n1 packages]# netstat -lntp|grep nginx
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      62214/nginx: master 
tcp        0      0 0.0.0.0:8088            0.0.0.0:*               LISTEN      62214/nginx: master 

Browser access:
	172.23.0.243:80			# nginx default page
	172.23.0.243:8088/status # nginx vts status page

7. Monitor with Prometheus

  • On the Prometheus server, edit prometheus.yml and restart the Prometheus container.
  • metrics_path sets the path of the metrics endpoint; the default is /metrics.
  • In other words, given only ip:port, the scraper automatically appends the /metrics suffix.
[root@k8s-m1 ~]# vim prometheus.yml
···
scrape_configs:
  - job_name: "Nginx"
    metrics_path: '/status/format/prometheus'
    static_configs:
    - targets: ['172.23.0.243:8088']
···
[root@k8s-m1 ~]# docker restart prometheus

# Now the nginx metrics can be queried from the Prometheus web UI
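For reference, the way a scrape URL is composed from scheme, target, and metrics_path can be sketched with a toy helper (this is an illustration of the convention, not Prometheus code):

```python
# Toy helper showing how scheme + target + metrics_path form the scrape URL.
def scrape_url(target: str, metrics_path: str = "/metrics", scheme: str = "http") -> str:
    return f"{scheme}://{target}{metrics_path}"

# The Nginx job above scrapes the vts endpoint rather than /metrics:
assert scrape_url("172.23.0.243:8088", "/status/format/prometheus") == \
    "http://172.23.0.243:8088/status/format/prometheus"
# When metrics_path is omitted, the default /metrics applies:
assert scrape_url("10.27.158.33:9100") == "http://10.27.158.33:9100/metrics"
```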

8. What the individual metrics mean

nginx-module-vts exposes many metrics; knowing what they mean helps you build exactly the charts you need.

# HELP nginx_vts_info Nginx info
# TYPE nginx_vts_info gauge
nginx_vts_info{hostname="hbhly_21_205",version="1.16.1"} 1
# HELP nginx_vts_start_time_seconds Nginx start time
# TYPE nginx_vts_start_time_seconds gauge
nginx_vts_start_time_seconds 1584268136.439
# HELP nginx_vts_main_connections Nginx connections
# TYPE nginx_vts_main_connections gauge

# nginx connection counts, broken down by state
nginx_vts_main_connections{status="accepted"} 9271
nginx_vts_main_connections{status="active"} 7
nginx_vts_main_connections{status="handled"} 9271
nginx_vts_main_connections{status="reading"} 0
nginx_vts_main_connections{status="requests"} 438850
nginx_vts_main_connections{status="waiting"} 6
nginx_vts_main_connections{status="writing"} 1
# HELP nginx_vts_main_shm_usage_bytes Shared memory [ngx_http_vhost_traffic_status] info
# TYPE nginx_vts_main_shm_usage_bytes gauge

# shared-memory usage
nginx_vts_main_shm_usage_bytes{shared="max_size"} 1048575
nginx_vts_main_shm_usage_bytes{shared="used_size"} 24689
nginx_vts_main_shm_usage_bytes{shared="used_node"} 7
# HELP nginx_vts_server_bytes_total The request/response bytes
# TYPE nginx_vts_server_bytes_total counter
# HELP nginx_vts_server_requests_total The requests counter
# TYPE nginx_vts_server_requests_total counter
# HELP nginx_vts_server_request_seconds_total The request processing time in seconds
# TYPE nginx_vts_server_request_seconds_total counter
# HELP nginx_vts_server_request_seconds The average of request processing times in seconds
# TYPE nginx_vts_server_request_seconds gauge
# HELP nginx_vts_server_request_duration_seconds The histogram of request processing time
# TYPE nginx_vts_server_request_duration_seconds histogram
# HELP nginx_vts_server_cache_total The requests cache counter
# TYPE nginx_vts_server_cache_total counter

# inbound/outbound traffic per host
nginx_vts_server_bytes_total{host="10.160.21.205",direction="in"} 22921464
nginx_vts_server_bytes_total{host="10.160.21.205",direction="out"} 1098196005

# request counts per status-code class: 1xx 2xx 3xx 4xx 5xx
nginx_vts_server_requests_total{host="10.160.21.205",code="1xx"} 0
nginx_vts_server_requests_total{host="10.160.21.205",code="2xx"} 86809
nginx_vts_server_requests_total{host="10.160.21.205",code="3xx"} 0
nginx_vts_server_requests_total{host="10.160.21.205",code="4xx"} 2
nginx_vts_server_requests_total{host="10.160.21.205",code="5xx"} 0
nginx_vts_server_requests_total{host="10.160.21.205",code="total"} 86811

# response times
nginx_vts_server_request_seconds_total{host="10.160.21.205"} 0.000
nginx_vts_server_request_seconds{host="10.160.21.205"} 0.000

# cache counters, broken down by cache status
nginx_vts_server_cache_total{host="10.160.21.205",status="miss"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="bypass"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="expired"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="stale"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="updating"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="revalidated"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="hit"} 0
nginx_vts_server_cache_total{host="10.160.21.205",status="scarce"} 0
nginx_vts_server_bytes_total{host="devapi.feedback.test",direction="in"} 3044526
nginx_vts_server_bytes_total{host="devapi.feedback.test",direction="out"} 41257028

# per-status request counts for this host
nginx_vts_server_requests_total{host="devapi.feedback.test",code="1xx"} 0
nginx_vts_server_requests_total{host="devapi.feedback.test",code="2xx"} 3983
nginx_vts_server_requests_total{host="devapi.feedback.test",code="3xx"} 0
nginx_vts_server_requests_total{host="devapi.feedback.test",code="4xx"} 24
nginx_vts_server_requests_total{host="devapi.feedback.test",code="5xx"} 11
nginx_vts_server_requests_total{host="devapi.feedback.test",code="total"} 4018
nginx_vts_server_request_seconds_total{host="devapi.feedback.test"} 327.173
nginx_vts_server_request_seconds{host="devapi.feedback.test"} 0.000

# nginx cache counters, broken down by status and type
nginx_vts_server_cache_total{host="devapi.feedback.test",status="miss"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="bypass"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="expired"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="stale"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="updating"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="revalidated"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="hit"} 0
nginx_vts_server_cache_total{host="devapi.feedback.test",status="scarce"} 0
nginx_vts_server_bytes_total{host="testapi.feedback.test",direction="in"} 55553573
nginx_vts_server_bytes_total{host="testapi.feedback.test",direction="out"} 9667561188
nginx_vts_server_requests_total{host="testapi.feedback.test",code="1xx"} 0
nginx_vts_server_requests_total{host="testapi.feedback.test",code="2xx"} 347949
nginx_vts_server_requests_total{host="testapi.feedback.test",code="3xx"} 31
nginx_vts_server_requests_total{host="testapi.feedback.test",code="4xx"} 7
nginx_vts_server_requests_total{host="testapi.feedback.test",code="5xx"} 33
nginx_vts_server_requests_total{host="testapi.feedback.test",code="total"} 348020
nginx_vts_server_request_seconds_total{host="testapi.feedback.test"} 2185.177
nginx_vts_server_request_seconds{host="testapi.feedback.test"} 0.001
nginx_vts_server_cache_total{host="testapi.feedback.test",status="miss"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="bypass"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="expired"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="stale"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="updating"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="revalidated"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="hit"} 0
nginx_vts_server_cache_total{host="testapi.feedback.test",status="scarce"} 0
nginx_vts_server_bytes_total{host="*",direction="in"} 81519563
nginx_vts_server_bytes_total{host="*",direction="out"} 10807014221

# request counts per host
nginx_vts_server_requests_total{host="*",code="1xx"} 0
nginx_vts_server_requests_total{host="*",code="2xx"} 438741
nginx_vts_server_requests_total{host="*",code="3xx"} 31
nginx_vts_server_requests_total{host="*",code="4xx"} 33
nginx_vts_server_requests_total{host="*",code="5xx"} 44
nginx_vts_server_requests_total{host="*",code="total"} 438849
nginx_vts_server_request_seconds_total{host="*"} 2512.350
nginx_vts_server_request_seconds{host="*"} 0.007

# cache counters per host
nginx_vts_server_cache_total{host="*",status="miss"} 0
nginx_vts_server_cache_total{host="*",status="bypass"} 0
nginx_vts_server_cache_total{host="*",status="expired"} 0
nginx_vts_server_cache_total{host="*",status="stale"} 0
nginx_vts_server_cache_total{host="*",status="updating"} 0
nginx_vts_server_cache_total{host="*",status="revalidated"} 0
nginx_vts_server_cache_total{host="*",status="hit"} 0
nginx_vts_server_cache_total{host="*",status="scarce"} 0
# HELP nginx_vts_upstream_bytes_total The request/response bytes
# TYPE nginx_vts_upstream_bytes_total counter
# HELP nginx_vts_upstream_requests_total The upstream requests counter
# TYPE nginx_vts_upstream_requests_total counter
# HELP nginx_vts_upstream_request_seconds_total The request Processing time including upstream in seconds
# TYPE nginx_vts_upstream_request_seconds_total counter
# HELP nginx_vts_upstream_request_seconds The average of request processing times including upstream in seconds
# TYPE nginx_vts_upstream_request_seconds gauge
# HELP nginx_vts_upstream_response_seconds_total The only upstream response processing time in seconds
# TYPE nginx_vts_upstream_response_seconds_total counter
# HELP nginx_vts_upstream_response_seconds The average of only upstream response processing times in seconds
# TYPE nginx_vts_upstream_response_seconds gauge
# HELP nginx_vts_upstream_request_duration_seconds The histogram of request processing time including upstream
# TYPE nginx_vts_upstream_request_duration_seconds histogram
# HELP nginx_vts_upstream_response_duration_seconds The histogram of only upstream response processing time
# TYPE nginx_vts_upstream_response_duration_seconds histogram

# traffic statistics per upstream
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.144.227.162:80",direction="in"} 12296
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.144.227.162:80",direction="out"} 13582924
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="1xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="2xx"} 25
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="3xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="4xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="5xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.144.227.162:80",code="total"} 25
nginx_vts_upstream_request_seconds_total{upstream="::nogroups",backend="10.144.227.162:80"} 1.483
nginx_vts_upstream_request_seconds{upstream="::nogroups",backend="10.144.227.162:80"} 0.000
nginx_vts_upstream_response_seconds_total{upstream="::nogroups",backend="10.144.227.162:80"} 1.484
nginx_vts_upstream_response_seconds{upstream="::nogroups",backend="10.144.227.162:80"} 0.000
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.152.218.149:80",direction="in"} 12471
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.152.218.149:80",direction="out"} 11790508
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="1xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="2xx"} 24
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="3xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="4xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="5xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.152.218.149:80",code="total"} 24
nginx_vts_upstream_request_seconds_total{upstream="::nogroups",backend="10.152.218.149:80"} 1.169
nginx_vts_upstream_request_seconds{upstream="::nogroups",backend="10.152.218.149:80"} 0.000
nginx_vts_upstream_response_seconds_total{upstream="::nogroups",backend="10.152.218.149:80"} 1.168
nginx_vts_upstream_response_seconds{upstream="::nogroups",backend="10.152.218.149:80"} 0.000
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.160.21.205:8081",direction="in"} 3036924
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.160.21.205:8081",direction="out"} 33355357
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="1xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="2xx"} 3971
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="3xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="4xx"} 24
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="5xx"} 11
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8081",code="total"} 4006
nginx_vts_upstream_request_seconds_total{upstream="::nogroups",backend="10.160.21.205:8081"} 326.427
nginx_vts_upstream_request_seconds{upstream="::nogroups",backend="10.160.21.205:8081"} 0.000
nginx_vts_upstream_response_seconds_total{upstream="::nogroups",backend="10.160.21.205:8081"} 300.722
nginx_vts_upstream_response_seconds{upstream="::nogroups",backend="10.160.21.205:8081"} 0.000
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.160.21.205:8082",direction="in"} 55536408
nginx_vts_upstream_bytes_total{upstream="::nogroups",backend="10.160.21.205:8082",direction="out"} 9650089427
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="1xx"} 0
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="2xx"} 347912
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="3xx"} 31
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="4xx"} 7
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="5xx"} 33
nginx_vts_upstream_requests_total{upstream="::nogroups",backend="10.160.21.205:8082",code="total"} 347983
nginx_vts_upstream_request_seconds_total{upstream="::nogroups",backend="10.160.21.205:8082"} 2183.271
nginx_vts_upstream_request_seconds{upstream="::nogroups",backend="10.160.21.205:8082"} 0.001
nginx_vts_upstream_response_seconds_total{upstream="::nogroups",backend="10.160.21.205:8082"} 2180.893
nginx_vts_upstream_response_seconds{upstream="::nogroups",backend="10.160.21.205:8082"} 0.001
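To poke at output like the dump above outside of Prometheus, a minimal parser for the text exposition format can be sketched in Python (it handles only simple label values; the quoting and escaping edge cases of the full format are ignored):

```python
import re

# One sample line: metric name, optional {labels}, then the value.
LINE = re.compile(r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
                  r'(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$')

def parse_metrics(text):
    """Parse exposition-format text into (name, labels, value) tuples,
    skipping # HELP / # TYPE comment lines."""
    out = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = LINE.match(line)
        if not m:
            continue
        labels = {}
        if m.group("labels"):
            for pair in m.group("labels").split(","):
                k, v = pair.split("=", 1)
                labels[k] = v.strip('"')
        out.append((m.group("name"), labels, float(m.group("value"))))
    return out

sample = '''# HELP nginx_vts_main_connections Nginx connections
# TYPE nginx_vts_main_connections gauge
nginx_vts_main_connections{status="active"} 7
nginx_vts_start_time_seconds 1584268136.439
'''
print(parse_metrics(sample))
```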

9. Query expressions in the Prometheus UI

1) Typical metrics collected from cAdvisor

Metric name                               Type     Meaning
container_cpu_load_average_10s            gauge    container CPU load average over the last 10 seconds
container_cpu_usage_seconds_total         counter  cumulative CPU time consumed per core (seconds)
container_cpu_system_seconds_total        counter  cumulative system CPU time (seconds)
container_cpu_user_seconds_total          counter  cumulative user CPU time (seconds)
container_fs_usage_bytes                  gauge    filesystem usage inside the container (bytes)
container_network_receive_bytes_total     counter  cumulative bytes received over the network
container_network_transmit_bytes_total    counter  cumulative bytes transmitted over the network

2) Container queries

# Container CPU usage
sum(irate(container_cpu_usage_seconds_total{image!=""}[1m])) without (cpu)

# Container memory usage (bytes)
container_memory_usage_bytes{image!=""}

# Container network receive rate (bytes/s)
sum(rate(container_network_receive_bytes_total{image!=""}[1m])) without (interface)

# Container network transmit rate (bytes/s)
sum(rate(container_network_transmit_bytes_total{image!=""}[1m])) without (interface)

# Container filesystem read rate (bytes/s)
sum(rate(container_fs_reads_bytes_total{image!=""}[1m])) without (device)

# Container filesystem write rate (bytes/s)
sum(rate(container_fs_writes_bytes_total{image!=""}[1m])) without (device)

3) HTTP queries

# Total HTTP requests
prometheus_http_requests_total

# HTTP request duration histogram buckets (seconds)
prometheus_http_request_duration_seconds_bucket

# HTTP request duration sample count
prometheus_http_request_duration_seconds_count

# Sum of HTTP request durations (seconds)
prometheus_http_request_duration_seconds_sum

# HTTP response size histogram buckets (bytes)
prometheus_http_response_size_bytes_bucket

# HTTP response size sample count
prometheus_http_response_size_bytes_count

# Sum of HTTP response sizes (bytes)
prometheus_http_response_size_bytes_sum

4) Nginx queries

# Total filtered bytes
nginx_vts_filter_bytes_total

# Filter cache counters
nginx_vts_filter_cache_total

# Average filtered request time (seconds)
nginx_vts_filter_request_seconds

# Total filtered request time (seconds)
nginx_vts_filter_request_seconds_total

# Total filtered requests
nginx_vts_filter_requests_total

# nginx info
nginx_vts_info

# Main connection counts
nginx_vts_main_connections

# Main shared-memory usage (bytes)
nginx_vts_main_shm_usage_bytes

# Total server bytes
nginx_vts_server_bytes_total

# Server cache counters
nginx_vts_server_cache_total

# Average server request time (seconds)
nginx_vts_server_request_seconds

# Total server request time (seconds)
nginx_vts_server_request_seconds_total

# Total server requests
nginx_vts_server_requests_total

# Nginx start time (seconds since epoch)
nginx_vts_start_time_seconds

10. Install blackbox_exporter

  • blackbox_exporter collects service availability information, for example whether an HTTP request returns 200, which can then drive alerts.
  • blackbox_exporter is one of the official Prometheus exporters; it can probe over HTTP, DNS, TCP, and ICMP.
Capabilities:
HTTP probes
    set Request Header fields
    assert on HTTP status / HTTP response headers / HTTP body content
    
TCP probes
    check that a service port is listening
    application-layer protocol definition and probing
    
ICMP probes
	host liveness checks
	
POST probes
	endpoint reachability
	
SSL certificate expiry time
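The HTTP probe idea can be illustrated with a small self-contained sketch: a toy stand-in for an http_2xx check, probing a throwaway local HTTP server (the handler and server here are made up for the demo and are not part of blackbox_exporter):

```python
import http.server
import threading
import urllib.request

def probe_http_2xx(url: str, timeout: float = 5.0) -> bool:
    """Succeed only when the URL answers with a 2xx status.
    urllib raises for non-2xx and network errors; both count as failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # covers URLError, HTTPError, timeouts, refused connections
        return False

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # keep the demo quiet
        pass

# Throwaway server on a random free port, standing in for a real service.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

ok = probe_http_2xx(f"http://127.0.0.1:{port}/")
server.shutdown()
print(ok)  # True
```

In production this logic lives in blackbox.yml modules; Prometheus then scrapes blackbox_exporter with the target URL as a parameter.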

# Download and unpack
[root@11 Prometheus]# wget https://github.com/prometheus/blackbox_exporter/releases/download/v0.14.0/blackbox_exporter-0.14.0.linux-amd64.tar.gz
[root@11 Prometheus]# tar -xvf blackbox_exporter-0.14.0.linux-amd64.tar.gz
[root@11 Prometheus]# mv blackbox_exporter-0.14.0.linux-amd64 /usr/local/blackbox_exporter

# Verify the installation succeeded
[root@11 Prometheus]# /usr/local/blackbox_exporter/blackbox_exporter --version
blackbox_exporter, version 0.14.0 (branch: HEAD, revision: bba7ef76193948a333a5868a1ab38b864f7d968a)
  build user:       root@63d11aa5b6c6
  build date:       20190315-13:32:31
  go version:       go1.11.5

# Manage it with systemd
[root@11 Prometheus]# cat /usr/lib/systemd/system/blackbox_exporter.service
[Unit]
Description=blackbox_exporter
 
[Service]
User=root
Type=simple
ExecStart=/usr/local/blackbox_exporter/blackbox_exporter --config.file=/usr/local/blackbox_exporter/blackbox.yml
Restart=on-failure
[root@11 Prometheus]# 

# Start it
[root@11 Prometheus]# systemctl daemon-reload
[root@11 Prometheus]# systemctl enable --now blackbox_exporter

11. Deploy the nginx-module-vts module with Docker

The nginx installed via yum does not include the nginx-module-vts module by default; you need to download the matching nginx source and recompile it with the module.

Building a Consul Cluster with Docker (incomplete)

1. Start the first consul service: consul1

docker run --name consul1 -d -p 8500:8500 -p 8300:8300 -p 8301:8301 -p 8302:8302 -p 8600:8600 --restart=always consul:latest agent -server -bootstrap-expect 2 -ui -bind=0.0.0.0 -client=0.0.0.0

# Get the IP address of consul server1
docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul1
172.17.0.2

# Notes:
    8500: HTTP port, used for the HTTP API and web UI
    8300: server RPC port; consul servers in the same datacenter communicate over it
    8301: Serf LAN port; consul clients in the same datacenter communicate over it
    8302: Serf WAN port; consul servers in different datacenters communicate over it
    8600: DNS port, used for service discovery
    -bootstrap-expect 2: the cluster waits for at least two servers before electing a leader
    -ui: serve the web console
    -bind: interface to listen on; 0.0.0.0 means all interfaces (the default, 127.0.0.1, would prevent communication with other containers)
    -client: restrict which interfaces may access the client API

2. Start the second consul service, consul2, and join it to consul1 (with -join)

docker run -d --name consul2 -p 8501:8500 consul agent -server -ui -bind=0.0.0.0 -client=0.0.0.0 -join 172.17.0.2

# Alternatively, with data and config directories persisted on the host:
docker run -d -p 8501:8500 --restart=always -v /XiLife/consul/data/server3:/consul/data -v /XiLife/consul/conf/server2:/consul/config -e CONSUL_BIND_INTERFACE='eth0' --privileged=true --name=consul2 consul agent -server -ui -node=consul2 -client='0.0.0.0' -datacenter=xdp_dc -data-dir /consul/data -config-dir /consul/config -join=172.17.0.2

3. Start the third consul service, consul3, and join it to consul1

docker run --name consul3 -d -p 8502:8500 consul agent -server -ui -bind=0.0.0.0 -client=0.0.0.0 -join 172.17.0.2

4. Check the running containers (consul cluster status)

[root@k8s-m1 consul]# docker exec -it consul1 consul members
Node          Address          Status  Type    Build   Protocol  DC   Segment
013a4a7e74d2  172.17.0.4:8301  alive   server  1.10.0  2         dc1  <all>
3c118fa83d47  172.17.0.3:8301  alive   server  1.10.0  2         dc1  <all>
4b5123c97c2b  172.17.0.5:8301  alive   server  1.10.0  2         dc1  <all>
a7d272ad157a  172.17.0.2:8301  alive   server  1.10.0  2         dc1  <all>

5. Service registration and deregistration

  • Next, register services with Consul through its standard HTTP API
  • As a first test, register this host's node-exporter service; the address and port are where node-exporter exposes its metrics by default. Run the following command
# Register the node-exporter service on 172.23.0.241
curl -X PUT -d '{"id": "node-exporter","name": "node-exporter-172.23.0.241","address": "172.23.0.241","port": 9100,"tags": ["prometheus"],"checks": [{"http": "http://172.23.0.241:9100/metrics", "interval": "5s"}]}'  http://172.23.0.241:8500/v1/agent/service/register

# Register the node-exporter service on 172.23.0.242
Change every IP address in the command above to 242; the port stays the same.

To deregister a service, use the following API call; for example, to remove the node-exporter service registered above:

curl -X PUT http://172.23.0.241:8500/v1/agent/service/deregister/node-exporter 
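With services registered in Consul, Prometheus can discover them automatically via `consul_sd_configs` instead of static target lists. A sketch assuming the Consul HTTP API at 172.23.0.241:8500 and the "prometheus" tag used in the registration command above:

```yaml
scrape_configs:
  - job_name: 'consul-node-exporter'
    consul_sd_configs:
      - server: '172.23.0.241:8500'
    relabel_configs:
      - source_labels: [__meta_consul_tags]
        regex: .*,prometheus,.*     # tags are comma-joined with surrounding commas
        action: keep                # scrape only services tagged "prometheus"
```

Registering or deregistering a service through the API then adds or removes the scrape target without touching prometheus.yml.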

Appendix: Upgrading the CentOS 6 kernel

rpm -Uvh https://hkg.mirror.rackspace.com/elrepo/kernel/el6/x86_64/RPMS/elrepo-release-6-12.el6.elrepo.noarch.rpm

Fixing yum repo errors (mirror not found):
cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.backup
wget http://mirrors.163.com/.help/CentOS6-Base-163.repo
mv CentOS6-Base-163.repo CentOS-Base.repo
yum clean all
wget -O /etc/yum.repos.d/CentOS-Base.repo http://file.kangle.odata.cc/repo/Centos-6.repo
wget -O /etc/yum.repos.d/epel.repo http://file.kangle.odata.cc/repo/epel-6.repo
yum makecache

This concludes the tutorial on deploying Prometheus with Docker. For more on the topic, search 腳本之家's earlier articles, and thank you for your continued support of 腳本之家!
