
      Monitoring Tools - Quickly Set Up a Prometheus-Grafana-Alertmanager Monitoring Stack with Docker

      Prometheus

      Related commands

      docker network create monitoring
      
      mkdir -p /etc/prometheus
      vim /etc/prometheus/prometheus.yml
      
      docker run -itd --name prometheus \
      --net=monitoring \
      -p 9090:9090 \
      --restart always \
      -v /etc/prometheus:/etc/prometheus \
      -v prometheus-data:/prometheus \
      prom/prometheus:v2.53.2
      

      Configuration file
      /etc/prometheus/prometheus.yml

      global:
          scrape_interval: 15s
          evaluation_interval: 15s
      alerting:
        alertmanagers:
          - static_configs:
            - targets:
          # - alertmanager:9093
      rule_files:
        # - "first_rules.yml"
        # - "second_rules.yml"
      scrape_configs:
        - job_name: "prometheus"
          static_configs:
            - targets: ["localhost:9090"]
      

      Run log

      [root@k8s-sample ~]# docker network create monitoring
      d622c0cbdd342bb819aa896c057782ac44ec359bcd3b7f9b30bd1cd0064dfc1d
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# mkdir -p /etc/prometheus
      [root@k8s-sample ~]# vim /etc/prometheus/prometheus.yml
      [root@k8s-sample ~]# cat /etc/prometheus/prometheus.yml
      global:
          scrape_interval: 15s
          evaluation_interval: 15s
      alerting:
        alertmanagers:
          - static_configs:
            - targets:
          # - alertmanager:9093
      rule_files:
        # - "first_rules.yml"
        # - "second_rules.yml"
      scrape_configs:
        - job_name: "prometheus"
          static_configs:
            - targets: ["localhost:9090"]
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# docker run -itd --name prometheus --net=monitoring -p 9090:9090 --restart always -v /etc/prometheus:/etc/prometheus -v prometheus-data:/prometheus prom/prometheus:v2.53.2
      060917136c37c3e5f7c12866e25ab828aecfdc031e1bebb92c153c58e24a9051
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# docker ps 
      CONTAINER ID   IMAGE                     COMMAND                  CREATED         STATUS         PORTS                                       NAMES
      060917136c37   prom/prometheus:v2.53.2   "/bin/prometheus --c…"   6 seconds ago   Up 5 seconds   0.0.0.0:9090->9090/tcp, :::9090->9090/tcp   prometheus
      [root@k8s-sample ~]#
      

      The Prometheus web UI is now available at http://<server-ip>:9090.


      Grafana

      Related commands

      docker run -d --name=grafana \
      --net=monitoring \
      -p 3000:3000 \
      --restart always \
      -v grafana-data:/var/lib/grafana \
      grafana/grafana
      

      Run log

      [root@k8s-sample ~]# docker run -d --name=grafana \
      --net=monitoring \
      -p 3000:3000 \
      --restart always \
      -v grafana-data:/var/lib/grafana \
      grafana/grafana
      3e2ed40167581e3c0d836a9b6155a8b0bc37012a7b8e67baa45b2fdd474b0865
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# docker ps
      CONTAINER ID   IMAGE                     COMMAND                  CREATED         STATUS         PORTS                                       NAMES
      3e2ed4016758   grafana/grafana           "/run.sh"                5 seconds ago   Up 5 seconds   0.0.0.0:3000->3000/tcp, :::3000->3000/tcp   grafana
      b0e8d55c2f2c   prom/prometheus:v2.53.2   "/bin/prometheus --c…"   6 minutes ago   Up 6 minutes   0.0.0.0:9090->9090/tcp, :::9090->9090/tcp   prometheus
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# 
      

      Log in at http://<server-ip>:3000 with the default credentials admin/admin, and change the password after the first login.
      Add a data source: Home --> Connections --> Data sources --> Add data source --> Prometheus --> enter http://<server-ip>:9090 under Connection --> Save & test
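
      As an alternative to the UI steps above, the data source can also be provisioned from a file. This is a minimal sketch, assuming the provisioning directory is mounted into the container (the file name datasource.yml is illustrative); because Grafana and Prometheus share the monitoring network, the container name prometheus can be used in the URL:

      mkdir -p /etc/grafana/provisioning/datasources
      vim /etc/grafana/provisioning/datasources/datasource.yml

      apiVersion: 1
      datasources:
        - name: Prometheus
          type: prometheus
          access: proxy
          url: http://prometheus:9090   # the container name resolves on the shared "monitoring" network
          isDefault: true

      # Start Grafana with the provisioning directory mounted (instead of the plain run command above)
      docker run -d --name=grafana \
      --net=monitoring \
      -p 3000:3000 \
      --restart always \
      -v grafana-data:/var/lib/grafana \
      -v /etc/grafana/provisioning/datasources:/etc/grafana/provisioning/datasources \
      grafana/grafana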


      Exporter

      Node Exporter

      Deploy Node Exporter.
      After deployment, open http://<server-ip>:9100/metrics in a browser to view the collected metrics (a quick command-line check is sketched after the run log below).

      [root@k8s-sample ~]# tar -xzvf node_exporter-1.8.2.linux-amd64.tar.gz -C /opt
      node_exporter-1.8.2.linux-amd64/
      node_exporter-1.8.2.linux-amd64/NOTICE
      node_exporter-1.8.2.linux-amd64/node_exporter
      node_exporter-1.8.2.linux-amd64/LICENSE
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# cd /opt/
      [root@k8s-sample opt]# ln -sv node_exporter-1.8.2.linux-amd64 node_exporter
      'node_exporter' -> 'node_exporter-1.8.2.linux-amd64'
      [root@k8s-sample opt]# 
      [root@k8s-sample opt]# useradd prometheus && echo "prometheus:prometheus"|chpasswd && chage -M 99999 prometheus
      [root@k8s-sample opt]# 
      [root@k8s-sample opt]# chown -R prometheus:prometheus /opt/node_exporter-1.8.2.linux-amd64/
      [root@k8s-sample opt]# 
      [root@k8s-sample opt]# ll /opt |grep node_exporter
      lrwxrwxrwx  1 root       root        31 Oct 18 22:34 node_exporter -> node_exporter-1.8.2.linux-amd64
      drwxr-xr-x  2 prometheus prometheus  56 Jul 14 19:58 node_exporter-1.8.2.linux-amd64
      [root@k8s-sample opt]# 
      [root@k8s-sample opt]# cd
      [root@k8s-sample ~]# vim /usr/lib/systemd/system/node_exporter.service
      [root@k8s-sample ~]# cat /usr/lib/systemd/system/node_exporter.service
      [Unit]
      Description=node_exporter
      Documentation=https://prometheus.io/
      After=network-online.target
      [Service]
      Type=simple
      User=prometheus
      Group=prometheus
      ExecStart=/opt/node_exporter/node_exporter
      Restart=on-failure
      [Install]
      WantedBy=multi-user.target
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# systemctl daemon-reload
      [root@k8s-sample ~]# systemctl enable node_exporter.service
      Created symlink /etc/systemd/system/multi-user.target.wants/node_exporter.service → /usr/lib/systemd/system/node_exporter.service.
      [root@k8s-sample ~]# systemctl start node_exporter.service
      [root@k8s-sample ~]# systemctl status node_exporter.service
      ● node_exporter.service - node_exporter
           Loaded: loaded (/usr/lib/systemd/system/node_exporter.service; enabled; preset: disable>
           Active: active (running) since Fri 2024-10-18 22:36:19 CST; 7s ago
             Docs: https://prometheus.io/
         Main PID: 8177 (node_exporter)
            Tasks: 5 (limit: 48820)
           Memory: 4.7M
              CPU: 9ms
           CGroup: /system.slice/node_exporter.service
                   └─8177 /opt/node_exporter/node_exporter
      
      Oct 18 22:36:19 k8s-sample node_exporter[8177]: ts=2024-10-18T14:36:19.348Z caller=node_expo>
      Oct 18 22:36:19 k8s-sample node_exporter[8177]: ts=2024-10-18T14:36:19.348Z caller=node_expo>
      Oct 18 22:36:19 k8s-sample node_exporter[8177]: ts=2024-10-18T14:36:19.348Z caller=node_expo>
      Oct 18 22:36:19 k8s-sample node_exporter[8177]: ts=2024-10-18T14:36:19.348Z caller=node_expo>
      Oct 18 22:36:19 k8s-sample node_exporter[8177]: ts=2024-10-18T14:36:19.348Z caller=node_expo>
      Oct 18 22:36:19 k8s-sample node_exporter[8177]: ts=2024-10-18T14:36:19.348Z caller=node_expo>
      Oct 18 22:36:19 k8s-sample node_exporter[8177]: ts=2024-10-18T14:36:19.348Z caller=node_expo>
      Oct 18 22:36:19 k8s-sample node_exporter[8177]: ts=2024-10-18T14:36:19.348Z caller=node_expo>
      Oct 18 22:36:19 k8s-sample node_exporter[8177]: ts=2024-10-18T14:36:19.349Z caller=tls_confi>
      Oct 18 22:36:19 k8s-sample node_exporter[8177]: ts=2024-10-18T14:36:19.349Z caller=tls_confi>
      lines 1-21/21 (END)
      [root@k8s-sample ~]# 
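
      A quick command-line check of the metrics endpoint (a sketch, using the host IP that appears in the scrape configuration below; substitute your own):

      curl -s http://192.168.16.170:9100/metrics | grep '^node_cpu_seconds_total' | head -n 5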
      

      Add the scrape target to Prometheus.
      Once added, the target appears under Status --> Targets in the Prometheus web UI.

      [root@k8s-sample ~]# vim /etc/prometheus/prometheus.yml 
      [root@k8s-sample ~]# cat /etc/prometheus/prometheus.yml 
      global:
          scrape_interval: 15s
          evaluation_interval: 15s
      alerting:
        alertmanagers:
          - static_configs:
            - targets:
          # - alertmanager:9093
      rule_files:
        # - "first_rules.yml"
        # - "second_rules.yml"
      scrape_configs:
        - job_name: "prometheus"
          static_configs:
            - targets: ["localhost:9090"]
        - job_name: "linux-server"
          metrics_path: "/metrics" # metrics endpoint path; defaults to /metrics
          scheme: http # protocol used to connect; defaults to http
          static_configs:
            - targets: ["192.168.16.170:9100"]
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# docker exec -it prometheus kill -HUP 1
      [root@k8s-sample ~]# 
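
      The configuration can also be validated with promtool, which is bundled in the prom/prometheus image, before sending SIGHUP (a sketch):

      docker exec -it prometheus promtool check config /etc/prometheus/prometheus.yml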
      

      Import a dashboard in Grafana
      Grafana UI --> Dashboards in the left-hand menu --> New --> New dashboard --> Import a dashboard --> enter dashboard ID 12633
      --> Load --> set the dashboard name and data source --> Import --> the dashboard now appears under Dashboards.
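
      The import can also be scripted against Grafana's HTTP API. A hedged sketch, assuming the dashboard JSON has already been downloaded to /tmp/12633.json, the default admin/admin credentials are still in place, and the dashboard exposes a data-source input named DS_PROMETHEUS (check the __inputs section of the JSON and adjust if it differs); the value field should be the name or uid of the Prometheus data source added earlier:

      curl -s -u admin:admin -H 'Content-Type: application/json' \
      -X POST http://192.168.16.170:3000/api/dashboards/import \
      -d "{\"dashboard\": $(cat /tmp/12633.json), \"overwrite\": true, \"inputs\": [{\"name\": \"DS_PROMETHEUS\", \"type\": \"datasource\", \"pluginId\": \"prometheus\", \"value\": \"prometheus\"}]}"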

      cAdvisor Exporter

      [root@k8s-sample ~]# docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/gcr.io/cadvisor/cadvisor-amd64:v0.49.1
      v0.49.1: Pulling from ddn-k8s/gcr.io/cadvisor/cadvisor-amd64
      619be1103602: Pull complete 
      3b8469b194b8: Pull complete 
      6361eeb1639c: Pull complete 
      4f4fb700ef54: Pull complete 
      902eccca70f3: Pull complete 
      Digest: sha256:00ff3424f13db8d6d62778253e26241c45a8d53343ee09944a474bf88d3511ac
      Status: Downloaded newer image for swr.cn-north-4.myhuaweicloud.com/ddn-k8s/gcr.io/cadvisor/cadvisor-amd64:v0.49.1
      swr.cn-north-4.myhuaweicloud.com/ddn-k8s/gcr.io/cadvisor/cadvisor-amd64:v0.49.1
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/gcr.io/cadvisor/cadvisor-amd64:v0.49.1  gcr.io/cadvisor/cadvisor-amd64:v0.49.1
      [root@k8s-sample ~]# docker rmi swr.cn-north-4.myhuaweicloud.com/ddn-k8s/gcr.io/cadvisor/cadvisor-amd64:v0.49.1
      [root@k8s-sample ~]# docker images |grep cadvisor
      gcr.io/cadvisor/cadvisor-amd64   v0.49.1   c02cf39d3dba   7 months ago    80.8MB
      [root@k8s-sample ~]# 
      
      [root@k8s-sample ~]# docker images |grep cadvisor
      gcr.io/cadvisor/cadvisor-amd64   v0.49.1   c02cf39d3dba   7 months ago    80.8MB
      [root@k8s-sample ~]# docker run -d --name=cadvisor \
      --publish=8080:8080 \
      --restart always \
      --volume=/:/rootfs:ro \
      --volume=/var/run:/var/run:ro \
      --volume=/sys:/sys:ro \
      --volume=/var/lib/docker/:/var/lib/docker:ro \
      --volume=/dev/disk/:/dev/disk:ro \
      --detach=true \
      --privileged \
      --device=/dev/kmsg \
      gcr.io/cadvisor/cadvisor-amd64:v0.49.1
      56e4af8073bc960dfeffadb9e962c4107ae482d88cb3e29a651ba4c443962ba0
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# docker ps 
      CONTAINER ID   IMAGE                                    COMMAND                  CREATED         STATUS                            PORTS                                       NAMES
      56e4af8073bc   gcr.io/cadvisor/cadvisor-amd64:v0.49.1   "/usr/bin/cadvisor -…"   7 seconds ago   Up 5 seconds (health: starting)   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   cadvisor
      279f91ec6f9f   prom/alertmanager                        "/bin/alertmanager -…"   47 hours ago    Up 22 minutes                     0.0.0.0:9093->9093/tcp, :::9093->9093/tcp   alertmanager
      3e2ed4016758   grafana/grafana                          "/run.sh"                2 days ago      Up 22 minutes                     0.0.0.0:3000->3000/tcp, :::3000->3000/tcp   grafana
      b0e8d55c2f2c   prom/prometheus:v2.53.2                  "/bin/prometheus --c…"   2 days ago      Up 22 minutes                     0.0.0.0:9090->9090/tcp, :::9090->9090/tcp   prometheus
      [root@k8s-sample ~]# 
      

      The following pages can be opened directly:

      • http://<server-ip>:8080 shows the cAdvisor web UI
      • http://<server-ip>:8080/metrics shows the collected metrics

      Add the scrape target to Prometheus.
      Once added, the target appears under Status --> Targets in the Prometheus web UI.

      [root@k8s-sample ~]# vim /etc/prometheus/prometheus.yml
      [root@k8s-sample ~]# cat /etc/prometheus/prometheus.yml
      global:
          scrape_interval: 15s
          evaluation_interval: 15s
      alerting:
        alertmanagers:
          - static_configs:
            - targets:
              - 192.168.16.170:9093
      rule_files:
        - "./rules/linux-server.yml"
        - "./rules/general.yml"
      scrape_configs:
        - job_name: "prometheus"
          static_configs:
            - targets: ["localhost:9090"]
        - job_name: "linux-server"
          metrics_path: "/metrics" # metrics endpoint path; defaults to /metrics
          scheme: http # protocol used to connect; defaults to http
          static_configs:
            - targets: ["192.168.16.170:9100"]
        - job_name: "docker-server"
          static_configs:
            - targets: ["192.168.16.170:8080"]
      [root@k8s-sample ~]#  
      [root@k8s-sample ~]# docker exec -it prometheus kill -HUP 1
      [root@k8s-sample ~]# 
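
      Once the docker-server job is being scraped, cAdvisor's standard container metrics can be queried in the Prometheus web UI, for example (illustrative queries):

      # CPU cores used per container, averaged over the last 5 minutes
      sum by (name) (rate(container_cpu_usage_seconds_total{name!=""}[5m]))

      # Current memory working set per container, in bytes
      container_memory_working_set_bytes{name!=""}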
      

      Import a dashboard in Grafana
      Grafana UI --> Dashboards in the left-hand menu --> New --> New dashboard --> Import a dashboard --> enter dashboard ID 14282
      --> Load --> set the dashboard name and data source --> Import --> the dashboard now appears under Dashboards.


      Alertmanager

      Related commands

      mkdir -p /etc/alertmanager
      
      vim /etc/alertmanager/alertmanager.yml
      
      docker run -d --name=alertmanager \
      --net=monitoring \
      -v /etc/alertmanager:/etc/alertmanager \
      -p 9093:9093 \
      --restart always \
      prom/alertmanager
      

      Write the configuration file
      /etc/alertmanager/alertmanager.yml

      global:
        resolve_timeout: 5m
        smtp_smarthost: 'smtp.163.com:25'
        smtp_from: 'test@163.com'
        smtp_auth_username: 'test@163.com'
        smtp_auth_password: 'XXXXXX'
        smtp_require_tls: false
      route:
        receiver: 'default-receiver'
        group_by: [alertname]
        group_wait: 1m
        group_interval: 5m
        repeat_interval: 30m
      receivers:
      - name: 'default-receiver'
        email_configs:
        - to: 'test@yeah.net'
          send_resolved: true
      
      [root@k8s-sample ~]# vim /etc/alertmanager/alertmanager.yml
      [root@k8s-sample ~]# cat /etc/alertmanager/alertmanager.yml
      global:
        resolve_timeout: 5m
        smtp_smarthost: 'smtp.163.com:25'
        smtp_from: 'test@163.com'
        smtp_auth_username: 'test@163.com'
        smtp_auth_password: 'XXXXXX'
        smtp_require_tls: false
      route:
        receiver: 'default-receiver'
        group_by: [alertname]
        group_wait: 1m
        group_interval: 5m
        repeat_interval: 30m
      receivers:
      - name: 'default-receiver'
        email_configs:
        - to: 'test@yeah.net'
          send_resolved: true
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# docker run -d --name=alertmanager --net=monitoring -v /etc/alertmanager:/etc/alertmanager -p 9093:9093 --restart always prom/alertmanager
      279f91ec6f9fe6f154e99b1d110e754361ad7f2c20066967b290990d72b395a0
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# docker ps
      CONTAINER ID   IMAGE                     COMMAND                  CREATED         STATUS         PORTS                                       NAMES
      279f91ec6f9f   prom/alertmanager         "/bin/alertmanager -…"   5 seconds ago   Up 4 seconds   0.0.0.0:9093->9093/tcp, :::9093->9093/tcp   alertmanager
      3e2ed4016758   grafana/grafana           "/run.sh"                2 hours ago     Up 2 hours     0.0.0.0:3000->3000/tcp, :::3000->3000/tcp   grafana
      b0e8d55c2f2c   prom/prometheus:v2.53.2   "/bin/prometheus --c…"   2 hours ago     Up 2 hours     0.0.0.0:9090->9090/tcp, :::9090->9090/tcp   prometheus
      [root@k8s-sample ~]# 
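
      The Alertmanager configuration can be validated with amtool, which is bundled in the prom/alertmanager image (a sketch):

      docker exec -it alertmanager amtool check-config /etc/alertmanager/alertmanager.yml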
      

      Update the Prometheus configuration file to point at the Alertmanager address

      [root@k8s-sample ~]# vim /etc/prometheus/prometheus.yml 
      [root@k8s-sample ~]# cat /etc/prometheus/prometheus.yml 
      global:
          scrape_interval: 15s
          evaluation_interval: 15s
      alerting:
        alertmanagers:
          - static_configs:
            - targets:
              - 192.168.16.170:9093
      rule_files:
        # - "first_rules.yml"
        # - "second_rules.yml"
      scrape_configs:
        - job_name: "prometheus"
          static_configs:
            - targets: ["localhost:9090"]
        - job_name: "linux-server"
          metrics_path: "/metrics" # metrics endpoint path; defaults to /metrics
          scheme: http # protocol used to connect; defaults to http
          static_configs:
            - targets: ["192.168.16.170:9100"]
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# docker exec -it prometheus kill -HUP 1
      [root@k8s-sample ~]# 
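
      To confirm that Prometheus picked up the Alertmanager after the reload, its HTTP API lists the active Alertmanagers (a sketch; substitute your own host IP):

      curl -s http://192.168.16.170:9090/api/v1/alertmanagers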
      

      The Alertmanager web UI is now available at http://<server-ip>:9093.


      Alertmanager Alerting Rules

      Related commands

      mkdir -p /etc/prometheus/rules
      
      vim /etc/prometheus/rules/linux-server.yml
      vim /etc/prometheus/rules/general.yml
      
      vim /etc/prometheus/prometheus.yml
      docker exec -it prometheus kill -HUP 1
      

      Create an alerting rule file (host resource usage)

      groups: # alerting rule groups
      - name: Linux-Server  # rule group name
        rules: # rules
        - alert: HighCPUUsage # alert name
          expr: 100 - (avg(irate(node_cpu_seconds_total{mode="idle"}[2m])) by (instance) * 100) > 80 # expression that triggers the alert
          for: 2m # how long the condition must hold before the alert fires
          labels: # labels attached to the alert
            severity: warning # alert severity level
          annotations:
            summary: "{{ $labels.instance }} CPU usage is above 80%"
            description: "{{ $labels.instance }} CPU usage is above 80%, current value: {{ $value }}"
        - alert: HighMemoryUsage
          expr: 100 - (node_memory_MemFree_bytes+node_memory_Cached_bytes+node_memory_Buffers_bytes) / node_memory_MemTotal_bytes * 100 > 80
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: "{{ $labels.instance }} memory usage is above 80%"
            description: "{{ $labels.instance }} memory usage is above 80%, current value: {{ $value }}"
        - alert: HighDiskSpaceUsage
          expr: 100 - (node_filesystem_free_bytes{fstype=~"ext4|xfs"} / node_filesystem_size_bytes{fstype=~"ext4|xfs"} * 100) > 80
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: "{{ $labels.instance }} {{ $labels.mountpoint }} partition usage is above 80%"
            description: "{{ $labels.instance }} {{ $labels.mountpoint }} partition usage is above 80%, current value: {{ $value }}"
      

      Create an alerting rule file (scrape target unreachable)

      groups:
      - name: General
        rules:
        - alert: InstanceDown
          expr: up == 0 # "up" is a built-in metric: 1 means the target was scraped successfully, 0 means it could not be reached
          for: 1m
          labels:
            severity: critical
          annotations:
            summary: "{{ $labels.instance }} is unreachable"
            description: "{{ $labels.instance }} is unreachable; the server may be down!"
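
      The rule files can be checked with promtool before reloading Prometheus (a sketch, using the paths created above):

      docker exec -it prometheus promtool check rules \
      /etc/prometheus/rules/linux-server.yml \
      /etc/prometheus/rules/general.yml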
      

      Update the Prometheus configuration file

      [root@k8s-sample ~]# vim /etc/prometheus/prometheus.yml
      [root@k8s-sample ~]# cat /etc/prometheus/prometheus.yml
      global:
          scrape_interval: 15s
          evaluation_interval: 15s
      alerting:
        alertmanagers:
          - static_configs:
            - targets:
              - 192.168.16.170:9093
      rule_files:
        - "./rules/linux-server.yml"  # 相對路徑
        - "./rules/general.yml" # 相對路徑
      scrape_configs:
        - job_name: "prometheus"
          static_configs:
            - targets: ["localhost:9090"]
        - job_name: "linux-server"
          metrics_path: "/metrics" # metrics endpoint path; defaults to /metrics
          scheme: http # protocol used to connect; defaults to http
          static_configs:
            - targets: ["192.168.16.170:9100"]
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# docker exec -it prometheus kill -HUP 1
      [root@k8s-sample ~]# 
      

      View the alerting rules; the rules defined above can be seen on any of the following pages:

      • Alerts page: http://<server-ip>:9090/alerts
      • Rules page: http://<server-ip>:9090/rules
      • Config page: http://<server-ip>:9090/config

      Testing and verification
      Use the stress load-testing tool to simulate a high-CPU-usage alert.
      Once the alert fires, it appears on the http://<server-ip>:9090/alerts and http://<server-ip>:9093/#/alerts pages (the API checks sketched after the run log below work as well).

      [root@k8s-sample ~]# dnf install -y epel-release && dnf install stress -y
      [root@k8s-sample ~]# stress --version
      stress 1.0.4
      [root@k8s-sample ~]# 
      [root@k8s-sample ~]# stress --cpu 8
      stress: info: [41378] dispatching hogs: 8 cpu, 0 io, 0 vm, 0 hdd
      ^C
      [root@k8s-sample ~]# 
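
      Besides the web pages, the firing alerts can also be pulled from the HTTP APIs (a sketch; substitute your own host IP):

      # Alerts as seen by Prometheus
      curl -s http://192.168.16.170:9090/api/v1/alerts

      # Alerts delivered to Alertmanager (API v2)
      curl -s http://192.168.16.170:9093/api/v2/alerts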
      

      Custom alert templates

      Customising the alert content makes the key information easier to read at a glance. A minimal sketch follows the steps below.

      1. Create a template file ending in .tmpl under /etc/alertmanager
      2. In the Alertmanager configuration, point the templates field at the template file path, and reference the template name in the receiver configuration
      3. Reload the Alertmanager configuration
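
      A minimal sketch of the three steps, assuming a template file /etc/alertmanager/email.tmpl and a template named email.custom.html (both names are illustrative):

      vim /etc/alertmanager/email.tmpl

      {{ define "email.custom.html" }}
      {{ range .Alerts }}
      Alert:    {{ .Labels.alertname }} ({{ .Labels.severity }})
      Instance: {{ .Labels.instance }}
      Summary:  {{ .Annotations.summary }}
      Detail:   {{ .Annotations.description }}
      Starts:   {{ .StartsAt }}
      {{ end }}
      {{ end }}

      In /etc/alertmanager/alertmanager.yml, point the templates field at the file and reference the template from the receiver:

      templates:
        - '/etc/alertmanager/*.tmpl'
      receivers:
      - name: 'default-receiver'
        email_configs:
        - to: 'test@yeah.net'
          send_resolved: true
          html: '{{ template "email.custom.html" . }}'

      Then reload the configuration:

      docker exec -it alertmanager kill -HUP 1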
